The 10 Commandments of Mobile App Analytics

Posted on November 11, 2013 by Eric Benjamin Seufert

Mobile App Analytics has become commodified: dozens of services – many of which are free – can be quickly integrated into an app to instrument, capture, and report upon user behavior. Access to data is therefore not a competitive advantage; instead, app developers compete on the degree to which their analytics infrastructure facilitates user feedback into development.

For freemium mobile apps, the Minimum Viable Metrics are only useful for reporting. Successfully iterating during the development of a mobile app – especially in the soft launch stage – requires a robust feedback loop and a sense of product development workflow that allows data to be transformed first into insight and then into product improvements.

Implementing the basic structure of this feedback loop is more onerous than simply capturing data and reporting on broad product metrics (DAU, ARPU, etc.). The following 10 Commandments of Mobile App Analytics (like the 10 Commandments of Mobile User Acquisition) outline some basic tenets of working analytics into the mobile app development process.

I. You Will Thoroughly Track the First Session

Retention metrics are great for pointing out large holes in long-term product appeal, but the Day One retention metric can't explain why a user might have abandoned an app after only a few seconds. Perhaps the menu system is too confusing? The splash screen not impressive enough? The loading time too cumbersome?

The new users drop-off chart, which should track user behavior through at least the end of the product's tutorial but ideally through a few uses of its core functionality, is essential to understanding why users churn out of an app in the first session. For most apps, the largest single one-day decrease in retention takes place on Day One; understanding the points at which users churn out and addressing them with A/B tests therefore has the most potential in absolute terms for retaining users in the crucial first moments after an app is launched.

The new users drop-off chart should track the number of users reaching various points in the app from launch based on events, not on time in app, and those events should be spaced frequently enough to deliver insight (i.e., large drops between events mean the events are too spaced out). And the events should be tracked from app launch, not app load: long loading times can account for a substantial portion of early user churn.
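As a rough sketch of how such a chart can be computed from first-session event streams (the event names here are hypothetical — every app's funnel differs):

```python
from collections import Counter

# Hypothetical first-session funnel, ordered from app launch (not app load)
FUNNEL = ["app_launch", "load_complete", "tutorial_start",
          "tutorial_complete", "first_core_action"]

def drop_off_chart(first_sessions):
    """first_sessions: one list of event names per new user's first session.
    Returns the share of users who reached each funnel step in order."""
    reached = Counter()
    for events in first_sessions:
        seen = set(events)
        for step in FUNNEL:
            if step not in seen:
                break  # user churned before this step
            reached[step] += 1
    total = len(first_sessions)
    return {step: reached[step] / total for step in FUNNEL}

sessions = [
    ["app_launch", "load_complete", "tutorial_start"],
    ["app_launch"],  # abandoned during a long load
    ["app_launch", "load_complete", "tutorial_start",
     "tutorial_complete", "first_core_action"],
]
chart = drop_off_chart(sessions)
```

A large drop between two adjacent steps flags where churn concentrates; a large drop at the very first step points at load time itself.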

II. You Will Segment Users Based on Behavior

All analytics packages can track users based on geography, platform, and – for the most part – device, but rarely do these dimensions lead to insight. Product-oriented analytics must be capable of segmenting users based on their behaviors in order to build models of engagement.

Top-line metrics aren't useful for product iterations, especially in freemium apps where the vast majority of users can never be expected to pay. In order to optimize the experience for the users with the greatest propensity to enjoy the product, those users must first be identified. Early behavioral indicators – such as the length of the first session, the number of sessions in the first day, the early use of social features, etc. – should be used to model engagement across the lifecycle of the product and direct feature improvements.

Broad averages are very rarely useful in freemium. By segmenting users based on their interactions with the product – and predicting against those segments using very early indicators – analytics can be used to improve the experience for the users most likely to stay engaged for the long term.
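A minimal sketch of early-indicator segmentation — the indicators and thresholds below are purely illustrative assumptions that would, in practice, be fit against a product's own historical engagement data:

```python
# Illustrative early-indicator segmentation; the thresholds are assumptions,
# not benchmarks — each product should derive its own from historical data.
SEGMENTS = ["casual", "interested", "engaged", "core"]

def segment(user):
    """Assign a user to an engagement segment from day-one behavior."""
    score = 0
    if user["first_session_minutes"] >= 5:   # long first session
        score += 1
    if user["day_one_sessions"] >= 3:        # frequent day-one returns
        score += 1
    if user["used_social_feature"]:          # early social engagement
        score += 1
    return SEGMENTS[score]

label = segment({"first_session_minutes": 12,
                 "day_one_sessions": 4,
                 "used_social_feature": False})
```

Once users are labeled this early, each segment's later retention and monetization can be tracked separately instead of being buried in a broad average.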

III. You Will Own Your Data

App analytics tools and platforms remove the burden on developers of even thinking about data: actions are sent to the cloud via an API, and all reports and dashboards are available on the web. An app can benefit from powerful analytics tools without storing a single row of data on a proprietary server.

But this is dangerous. For one, the roster of competitors in the mobile-analytics-as-a-service space will very likely shrink significantly in the coming years: the space operates on extremely thin margins and very few participants will be able to reach the scale needed to sustain profit (with some very large recent entrants engaging in predatory pricing for the purposes of forcing small players out of the market).

While very few companies simply shut down without prior notice, and almost all analytics providers make the data they store available through an API, having to import a product's entire data catalogue on short notice due to a shutdown could put a serious strain on engineering resources at a company that hasn't thought through its own data storage infrastructure.

The second reason a company should store its products' data even if its analytics service provider exists in the cloud is that analytics companies often price their services based on the economics of addiction: small-scale data storage and processing is free, but as soon as significant volumes are being handled, the price of the service jumps dramatically.

It just so happens that a massive spike in data throughput likely correlates with explosive product growth – meaning building out an analytics infrastructure is probably last on a very long list of product improvements that need to be made to keep up with a surging user base.

This is how dependencies develop, and they can be expensive at scale. Data storage is important enough that it should be considered part of the minimum viable product; all products should store their own data (even redundantly) to avoid a scenario wherein that data is either lost forever in a service shutdown or held hostage by a product experiencing rapid growth.

IV. You Will Not A/B Test Endlessly

A/B testing is important, but it experiences rapidly diminishing returns. And A/B testing can create a culture of incremental improvement that fails to acknowledge fundamental flaws in product design and core experience. A/B testing should be seen as a tool that helps improve feature design, not an alternative to designing features.

Once A/B testing for a particular feature is no longer delivering at least 3-5% improvements on a target metric, it should be considered complete. The purpose of an A/B test is to avoid subjective design decisions that may fall victim to a product manager's idiosyncratic, personal tastes. Once a feature's design can be deemed neither extraordinarily offensive nor appealing – just neutral – then tests of ever-slighter and more nuanced variants can be skipped.
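One way to operationalize this stopping rule — a sketch, with the threshold set to an assumed 3% relative lift rather than any standard value:

```python
def keep_testing(control_rate, best_variant_rate, min_lift=0.03):
    """Return True while the best variant still beats control by at least
    the (arbitrary) relative lift threshold; False once the feature is
    'done' and resources should move elsewhere."""
    if control_rate <= 0:
        return best_variant_rate > 0
    lift = (best_variant_rate - control_rate) / control_rate
    return lift >= min_lift

a = keep_testing(0.20, 0.22)   # 10% relative lift: keep iterating
b = keep_testing(0.20, 0.203)  # 1.5% lift: declare the feature complete
```

This check assumes the measured rates are already statistically trustworthy; in practice a significance test would precede it.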

A/B testing is a particularly dangerous rabbit hole that can mask serious flaws in an organization's product development workflow because it possesses all the attributes of legitimate, valuable work. But once A/B testing can be quantitatively proven to not improve the product, resources should be devoted to other aspects of development.

V. You Will Establish a Realistic LTV Timeline

Lifetime Customer Value (LTV) is normally used organizationally to set user acquisition budgets on a per-user basis. But because the LTV metric is, practically, a projection, its calculation can be gamed by extending the timeline over which it is calculated. For instance, if LTV is estimated by projecting user-level revenue trends forward, an 18-month timeline will produce a higher value than a 6-month timeline.

LTV isn't valuable if it is overestimated through an unrealistic timeline. Additionally, an extended LTV timeline introduces complications into the calculation process by calling into question whether or not it should be discounted: in general, assets that can't be converted into cash within 12 months are considered long-term assets and should be interest-bearing to make up for the lack of liquidity.

Very few apps can hold the attention of a sizeable portion of their user base for more than a year. Establishing and communicating a realistic LTV timeline allows the metric to be used throughout the organization without concern over its validity.
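To make the timeline effect concrete, here is a sketch of a simple retention-decay LTV projection (the ARPU and retention figures are invented for illustration): the same user behavior yields a meaningfully higher number over 18 months than over 6, and discounting shrinks the long-timeline figure further.

```python
def projected_ltv(monthly_arpu, monthly_retention, months, annual_rate=0.0):
    """Project per-user LTV: monthly revenue decays with retention, and an
    optional discount rate penalizes revenue far in the future."""
    monthly_discount = (1 + annual_rate) ** (1 / 12)
    ltv, surviving = 0.0, 1.0
    for m in range(months):
        ltv += monthly_arpu * surviving / monthly_discount ** m
        surviving *= monthly_retention  # fraction of users still active
    return ltv

# Same behavior ($1/month ARPU, 80% monthly retention), different timelines:
six = projected_ltv(1.00, 0.80, 6)        # ~ $3.69
eighteen = projected_ltv(1.00, 0.80, 18)  # ~ $4.91
discounted = projected_ltv(1.00, 0.80, 18, annual_rate=0.10)
```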

VI. You Will Not Fixate on Industry Benchmarks

Positive results bias in business reporting leads to most quoted benchmarks being far more impressive than any industry's norm. As a result, the market information that does exist about app performance forms benchmarks that may seem impossible to compete with: millions of dollars in daily revenue, near-100% Day One retention rates, inordinate LTVs, etc.

Not only is fixating on such benchmarks not constructive, but app metrics quoted in the technology press are often misleading at best and complete misrepresentations at worst. For one, most revenue numbers cited are gross, not net of platform fees (usually 30%). Second, revenue numbers don't reflect cost of revenue: a company may be grossing millions of dollars per day with an app that operates at a loss because of aggressive user acquisition spending.

And third, retention statistics are notoriously easy to game: consider an app in which 60% of users return at least once in the three day period following first launch but for which the percentage of users returning exactly three days after launch is 25%. Which number is most likely to be reported as Day Three retention?
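Both numbers in that example come from the same raw data under different definitions — a sketch with toy figures (which don't reproduce the 60%/25% example exactly):

```python
def returned_within(users_return_days, n):
    """Share of users with at least one session in the n days after install."""
    return sum(1 for days in users_return_days
               if any(0 < d <= n for d in days)) / len(users_return_days)

def returned_on_day(users_return_days, n):
    """Share of users with a session exactly n days after install."""
    return sum(1 for days in users_return_days
               if n in days) / len(users_return_days)

# Each user's set of post-install days with a session (day 0 = install day):
users = [{1, 3}, {2}, {1}, {3}, set()]
loose = returned_within(users, 3)   # 0.8 — the flattering number
strict = returned_on_day(users, 3)  # 0.4 — classic exact-day retention
```

A press release can truthfully report either; a reader can't tell which definition was used.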

If an app's LTV exists above its per-user cost of acquisition at scale, how its metrics compare with those of competitor apps is irrelevant: it can grow. An app developer should focus on using its own metrics to direct product improvements, not those of its competitors. Aiming at too high a benchmark can inspire unnecessary and reckless risks to be taken to make up ground in a fictitious divide.

VII. You Will Hire an Analyst

Some analytics packages offer reasonably sophisticated statistical tools to aid in deep analysis of user behavior. But these tools are designed to meet the needs of a wide variety of mobile product verticals: gaming, messaging, video, telecommunications, news consumption, etc. When an analytics tool works perfectly across all possible product types, it likely can't capture the peculiarities of any single product.

No data analysis or visualization product can outperform a dedicated (and skilled) analyst at producing insights about a product. Domain expertise – pattern recognition, outlier awareness, historical and market context – cannot be replicated in a software solution.

Additionally, most analytics tools only offer “first pass” insight into a particular anomaly or relationship – that is, they can recognize the existence of a correlation but aren't equipped to investigate and explain it. This is the raison d'être of an analyst: to illuminate and derive insight from relationships in order to drive product improvements.

VIII. You Will Clearly Define Your Metrics

Anyone in an organization with a copy of Excel and a working knowledge of SQL is capable of building a functional metrics dashboard. The reason dashboards are centralized is to reduce the barrier to information and thus the time needed to make decisions.

But some metric variants, even for very basic metrics – such as when aggregated around a very specific set of dimensions – can't (or shouldn't) be represented on a broad dashboard; in these cases, the Excel / SQL solution is often used to conduct quick, ad hoc analysis. But problems can arise when metrics haven't been clearly and universally defined in an organization; users with access to the same base data but with different notions of how various metrics should be calculated may end up causing confusion or, worse, working toward different goals.

Unambiguous metrics calculations – even for the simplest of metrics, such as DAU – should be available for all employees to use in building out their own reports.
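Even DAU hides choices: which timezone defines a "day", and whether any event or only a session start counts. A sketch of one shared, unambiguous definition (the convention here – distinct users with a session start on a UTC calendar date – is an assumption, not a standard):

```python
from datetime import date, datetime, timezone

def dau(session_starts, day):
    """DAU, defined here as: count of distinct user IDs with at least one
    session starting on the given UTC calendar date."""
    return len({user_id for user_id, ts in session_starts
                if ts.astimezone(timezone.utc).date() == day})

sessions = [
    ("u1", datetime(2013, 11, 11, 8, 0, tzinfo=timezone.utc)),
    ("u1", datetime(2013, 11, 11, 23, 30, tzinfo=timezone.utc)),  # deduped
    ("u2", datetime(2013, 11, 12, 0, 15, tzinfo=timezone.utc)),
]
count = dau(sessions, date(2013, 11, 11))  # 1
```

Publishing the definition as code (or a canonical SQL query) is what keeps two analysts with the same base data from reporting two different DAU figures.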

IX. You Will Not React Drastically to Real-Time Metrics

Real-time dashboards are mostly for vanity or error / crash reporting; outside of a trading desk, real-time metrics shouldn't be used in a business setting to make decisions of any consequence.

Apps, which sometimes experience very rapid growth through either viral sharing or large-scale user acquisition campaigns, are especially vulnerable to the temptations of real-time reporting. Watching a new users or daily revenue number jump substantially by the minute is exciting, but that thrill can quickly evolve into dread when the increases begin to slow. This is when panic sets in and rash, defensive decisions are made.

Strategic decisions shouldn't be made in reaction to changes in real-time metrics. Strategies take time to unfold, and the vicissitudes that manifest from one minute to the next aren't necessarily indicative of the trends that will emerge over the medium- and long-term. Capturing and reporting on (non-crash / bug related) data more frequently than once or twice per day is more likely to serve as a distraction than deliver actionable, credible insight.

X. You Will Learn From Your Mistakes (and Your Successes)

Small scale (3-5%) improvements to a product feature or process are incremental when implemented in reaction to an analysis; they can be far more substantial when they are abstracted into best practices, made available to the entire organization, and implemented proactively in future development.

If the purpose of analytics is to bridge insight to product development over iterations, its value is compounded considerably when that insight can be applied to future product development without having to wait for data to be collected. Some improvements are so basic and broad – such as the results of fundamental A/B tests on pricing, tutorial flow, UI placement, etc. – that they can be assumed to be universally true.

When this is the case, establishing awareness of them throughout the organization can save untold time and effort in the future, allowing products to get to market faster and with higher quality.

Although the app ecosystem changes rapidly – as do consumer tastes – when an A/B test or product change is unambiguously and fundamentally beneficial, it should be recorded and promoted throughout the organization.

  • Tom Farrell

    Great piece. But (always a but!) I take issue with one line:

    "But once A/B testing can be quantitatively proven to not improve the
    product, resources should be devoted to other aspects of development".

    This sounds like asking someone to prove a negative, and worse it conflates the process (A/B testing) with what goes into it (competing variants). Those variants can be as radical or conservative as a developer chooses, and there is no point (and never will be a point) at which you can 'prove' that A/B testing does not improve a product.

    You can certainly prove (with A/B testing) that changing the color of a button, for example, will no longer improve the product, but that's a different argument.

    Having said that, I agree with your general point that a narrow focus on A/B testing can lead to developers ignoring the larger, more radical, and more 'creative' ways in which apps can be improved.

    • ESeufert

      Fair point. My argument had more to do with continual A/B testing of a single feature: that defining an arbitrary threshold for stopping (e.g., A/B testing button colors is no longer providing at least, say, 3% differentiation between variants) helps avoid falling down the never-ending A/B test rabbit hole. In general, I think *most* things are worth A/B testing at least once, but a line has to be drawn past which a feature is considered "done".

  • Jan Tillmann

    Great read, thanks for putting this together!

  • Alon Even

    Eric,
    Thanks for sharing this great post.

    I'd like to stress the importance of measuring, understanding and improving the user experience within the app. Traditional mobile analytics platforms focus on numbers (key metrics) instead of reasons, so they don't tell app developers & publishers the full story, and won't enable them to see how their users interact with their app, what problems they experience and how to fix them.
    Obviously, improving the user experience is critical for increasing user engagement, conversion and in-app purchases. For this reason, I suggest using a visual in-app analytics platform that includes user recordings and heatmaps, enabling app developers to put themselves in the users' shoes and visually understand how to optimize the user experience. A good example is Appsee (http://www.appsee.com/features/user-recordings).

    Alon
