This is a guest post from Pierre-Luc Simard, CTO of Mirego. Mirego designs and develops innovative mobile strategies for iPhone, iPad, Android and other mobile devices.
When building a new mobile application, the importance of measurement can often be overlooked. Developers typically get ahead of themselves by jumping right into thinking about features and capabilities without tying them back to overarching goals or objectives.
One of the first things you want to ask yourself at the beginning of any mobile project is, “What would make this app a success?” With that answer in mind, you can define clear objectives and prioritize them, which in turn, will help you make decisions around when the product is ready to ship and what it’ll take to get to that point. This goal-defining process doesn’t have to be time-consuming or painful. Simply identify up to three key things the application must achieve once it’s in the hands of users, and you’ll be in good shape to get to that critical version 1.0.
Getting from objectives to a solid first version, however, takes a lot of energy. In fact, often more energy than initially anticipated. To make sure that all that work provides the highest return, I’ve found that mapping objectives to real measurements as soon as possible is very important. Thus, my own personal mantra as a mobile developer: “Measure early, measure often.”
To illustrate this measurement-driven approach, let’s use the Nike+ Running app as an example. Below I’ll walk through my own process of choosing metrics to measure this application’s performance as if it were starting development today.
Tying your overall mission to specific objectives
In the case of Nike+ Running, our mission would likely be something along the lines of, “to track the running performance for as many runners as possible.” Analyzing this mission, we can break it down into a couple of objectives.
First, the core of the mission is to track running performance. Without that, it’s not possible to fulfill the rest of the mission. The application must make it easy for users to start recording a run and to continue recording for the duration of said run. We can also deduce that the recording must be reliable. Thus, your primary objectives would be:
1. The easiest function for a user to reach should be recording a new run.
2. Ensure that data recorded is as accurate as possible.
The secondary part of the mission focuses on attracting and retaining as many runners as possible. To attract users, the UI should be rich yet simple and (when it makes sense) allow rich integration with a user’s social networks. This lets users share their experiences with their social groups, so that others in their network can discover and try your app, too. Once a user does decide to try an application, their first interaction with it must be as painless as possible. Retention is then achieved through the user’s perceived value of the app’s core features. If the user sees no value in the application, no amount of reminders, push notifications or other gimmicks will drive engagement. So let’s add a few secondary objectives to our list:
3. The run must be shareable to allow application discovery through social networks.
4. The time from first install to first run recorded should be as short as possible.
5. The data collected must provide value to the user.
Now for the important part: Measuring success
After creating objectives for your application, you need to ask yourself how to measure its success when stacked up against those objectives. Typically, one objective will require up to three metrics to accurately measure progress. To gather and report on those metrics, you need a few tools: a software analytics solution such as New Relic to track how your app is performing out in the wild; an analytics engine like Google Analytics or Mixpanel to track user-focused metrics and demographic information or analyze marketing funnels; and a crash reporter like Crashlytics to capture and report when the application fails. Together, these tools will provide a coherent view of the application’s performance.
Going back to the objectives we set out for Nike+ Running, we can start to map each one to the metrics that can be gathered by our collection of measurement solutions.
Metrics to collect for Objective #1
The easiest function for a user to reach should be recording a new run.
The first thing to look for when measuring the success of an application is a timed event that starts when the application is installed and ends when it reaches an important moment in its “core function.” In the case of a running application, that core function would be recording a new run. Calculating the time between application start and the beginning of a recording will tell you whether the feature is easy to find and whether the steps involved in recording (e.g. setting time caps, etc.) are user friendly and have good defaults. So the metrics to collect here would be:
– Time from when application was first installed to when core function was used
– Wait time (i.e. screen render time) leading to recording a run
Screen rendering time is directly correlated with the user’s wait time. If we want the function to be easy to use, it should also be quick to use.
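To make this concrete, here is a minimal sketch of timing the path from launch to the core action with named event marks. All the names (`FunnelTimer`, the event strings) are hypothetical; in a real app, each `mark` call would also forward the event to your analytics SDK.

```python
import time

class FunnelTimer:
    """Hypothetical sketch: timestamp named events and measure
    the gap between them (e.g. launch -> recording started)."""

    def __init__(self):
        self.marks = {}

    def mark(self, event):
        # Record a monotonic timestamp for a named event.
        self.marks[event] = time.monotonic()

    def elapsed(self, start, end):
        # Seconds between two previously marked events.
        return self.marks[end] - self.marks[start]

timer = FunnelTimer()
timer.mark("app_launched")
# ... user navigates through the UI to the recording screen ...
timer.mark("run_recording_started")
seconds_to_core_function = timer.elapsed("app_launched", "run_recording_started")
```

If that elapsed value trends downward across releases, the core function is getting easier to reach.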
Metrics to collect for Objective #2
Data recorded must be as accurate as possible.
The data being gathered by the app is important to the user. After all, it’s this data relating to the user’s recorded run that gives them a sense of reward and accomplishment. Thus an application should measure how well it’s doing its job. In the case of a running app, tracking the quality of the GPS information gathered would be a logical metric to gather. You’d also want to track:
– Number of times the GPS accuracy was of low quality for a given recording
– Number of recordings that began with low accuracy
– Time to reach good accuracy at beginning of a recording
– Number of failed attempts to save the recording back to the server
– Number of errors and incomplete transactions saved to the server
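The first three GPS metrics above can be derived from the stream of accuracy readings in a single recording. The sketch below assumes each sample is an (elapsed seconds, horizontal accuracy in meters) pair; the 20 m threshold is my own assumption, not a value from the Nike+ app.

```python
def gps_quality_metrics(samples, threshold=20.0):
    """Summarize GPS accuracy for one recording.

    `samples` is a list of (elapsed_seconds, horizontal_accuracy_m)
    pairs; `threshold` (assumed 20 m) separates low from good accuracy.
    """
    low = [s for s in samples if s[1] > threshold]
    started_low = bool(samples) and samples[0][1] > threshold
    # Elapsed time until the first sample at or under the threshold.
    good_times = [t for t, acc in samples if acc <= threshold]
    time_to_good = good_times[0] if good_times else None
    return {
        "low_accuracy_samples": len(low),
        "started_with_low_accuracy": started_low,
        "time_to_good_accuracy_s": time_to_good,
    }

# Example: accuracy improves as the GPS warms up.
samples = [(0, 45.0), (3, 28.0), (6, 12.0), (9, 8.0)]
metrics = gps_quality_metrics(samples)
```

Reporting this summary per recording makes it easy to spot devices or regions where accuracy is chronically poor.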
Metrics to collect for Objective #3
The run must be shareable to allow application discovery through social networks.
Metrics here are pretty obvious. If you want to measure how effectively your app is being shared via social media, you’d likely want to collect:
– Number of sessions that published to a social network
– Number of outside page views from the social share
Metrics to collect for Objective #4
Time from first install to first run recorded should be as short as possible.
To track this objective, you want to look at two key events: the date when the user installed your app and the moment they recorded their first run. If the number of installs doesn’t match the number of first recordings, that tells you how many people dropped off without actually giving your app a try. You can also measure how long it typically takes between the time someone first installs your app and starts to actually use it.
– Date/time when the user installed your app
– Date/time when the user started using your app (in this case, to record a run)
– Amount of time spent in the application before recording a run (should be as short as possible)
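Given those two event logs, both the drop-off rate and the typical install-to-first-run delay fall out of a simple join on user id. The shapes below (dicts of user id to datetime) are assumptions for illustration, not a specific analytics API.

```python
from datetime import datetime
from statistics import median

def install_funnel(installs, first_runs):
    """Hypothetical sketch: summarize Objective #4 from two event
    logs mapping user id -> datetime of the event."""
    converted = set(installs) & set(first_runs)
    # Hours between install and first recorded run, per converted user.
    delays = [
        (first_runs[u] - installs[u]).total_seconds() / 3600.0
        for u in converted
    ]
    return {
        "installs": len(installs),
        "recorded_first_run": len(converted),
        "drop_off_rate": 1 - len(converted) / len(installs) if installs else 0.0,
        "median_hours_to_first_run": median(delays) if delays else None,
    }

installs = {
    "u1": datetime(2013, 5, 1, 9, 0),
    "u2": datetime(2013, 5, 1, 10, 0),
    "u3": datetime(2013, 5, 2, 8, 0),
}
first_runs = {
    "u1": datetime(2013, 5, 1, 18, 0),
    "u3": datetime(2013, 5, 4, 8, 0),
}
funnel = install_funnel(installs, first_runs)
```

Here one of three users never recorded a run, so the drop-off rate is one third and the goal is to drive the median delay toward zero.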
Metrics to collect for Objective #5
The data collected must provide value to the user.
The more time your user spends interacting with the data your app provides, the more value you know it’s delivering to them. For this, you’d want to track metrics like:
– Number of times user looks at “distance ran” data
– Number of repeat shares on social media
Start measuring early
Once you know which metrics you need to track in order to measure the user experience, the next step is to define good baseline numbers for the application. For most new apps, there won’t be much data available to know what those numbers should be. That’s why it’s often helpful to look at other comparable applications or potential competitors. You can conduct your own little user experience study of competing applications to help you understand what should be measured.
After you’ve established that baseline, start recording as much data as you can, as early as you can. Getting the numbers from internal testers (first alpha, then beta users) will help inform what the numbers should be for launch and for subsequent releases. Also, gathering the data early will help determine if the data is being gathered accurately and will give you insight into what alpha/beta users tried to do in your app and what QA covered in the test plan.
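Once alpha/beta sessions start flowing in, a baseline can be as simple as a percentile over the collected measurements. The nearest-rank sketch below, with screen render times in milliseconds as a made-up example dataset, shows one way to turn raw tester data into launch targets.

```python
def baseline(values, pct):
    """Nearest-rank percentile -- a simple way to set a baseline
    number (e.g. median or p90) from alpha/beta measurements."""
    if not values:
        return None
    ordered = sorted(values)
    # Nearest-rank: index of the value at or above pct% of samples.
    k = max(0, round(pct / 100.0 * len(ordered)) - 1)
    return ordered[k]

# Hypothetical screen render times (ms) gathered from beta testers.
render_times_ms = [120, 95, 310, 140, 105, 98, 180, 260, 150, 110]
median_ms = baseline(render_times_ms, 50)
p90_ms = baseline(render_times_ms, 90)
```

The median becomes the “typical experience” target and the 90th percentile flags the slow tail you want to shrink before launch.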
Shipping is only the first step
Getting an application out the door and into the hands of real users is only the first step toward reaching app success. You need to monitor your app’s metrics throughout the product lifecycle, ensuring that, as new versions are being developed, they meet the application’s core mission and objectives. But in addition to that, you should also be monitoring your app’s performance to make sure it’s not causing user frustration. After all, some features may not be used simply because they are too slow and not necessarily because they are bad. Continuous measurement and monitoring will help you continuously improve your app as it progresses into more mature releases.