In freemium mobile, my experience has been that the principles of the Minimum Viable Product as a product strategy are respected but sometimes necessarily abandoned because the concept isn’t perfectly transferable to mobile platforms. The MVP approach was designed for a platform (the web) that allows for the instant and universal distribution of product iterations, which is impossible on mobile. And hardware plays a larger role in the user experience on mobile (especially in gaming) than it does on the web, since the quality of handsets is so stratified.
So how can the MVP approach be adjusted to accommodate these platform incongruities? On mobile, isolating the changes in metrics driven by individual product changes is impractical due to handset diversity, the need for users to pull new versions (which results in a number of versions being active at any given point in time), and lag times between submission and availability on some app stores. Implementing one change to the product and pushing it isn’t prudent; users will be alienated by constant client downloads, and measuring the impact of that change could require waiting many days for the new release to be published.
For the MVP methodology to work on mobile, developers must stay apprised of a portfolio of metrics that speaks to a broader truth about the product. This is Minimum Viable Metrics: the minimum set of prioritized metrics that are tracked from the launch of the MVP and improved continuously through strategic, albeit less rapid, iteration. The Minimum Viable Metrics model integrates analytics into the product strategy and roadmap and asserts integrated minimal analytics as a launch requirement.
On mobile, I place the metrics comprising Minimum Viable Metrics into four broad categories: Retention, Monetization, Engagement, and Virality. While I think retention is the most important group of metrics, the order of the others might change based on the type of app being developed. I believe the prioritization of the iteration agenda – i.e., what’s going to be worked on in an iteration, based on the current metrics – should follow the agile approach and prioritize those items that provide the highest ROI on a marginal basis.
In other words, the Producer / Product Manager should use a quantitative framework to estimate, on an incremental basis, 1) the extent to which all metrics are improvable in the upcoming product iteration and 2) the amount of additional revenue those estimated improvements would produce. The iteration schedule is then prioritized accordingly.
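This prioritization can be sketched programmatically. The backlog items, revenue estimates, and cost figures below are all hypothetical; the point is simply that items are ranked by estimated incremental revenue per unit of development cost:

```python
# A minimal sketch of marginal-ROI prioritization. The candidate items,
# their estimated revenue lifts, and the dev-cost units are hypothetical.
def prioritize(candidates):
    """Rank backlog items by estimated incremental revenue per unit of cost."""
    return sorted(candidates,
                  key=lambda c: c["est_revenue"] / c["dev_cost"],
                  reverse=True)

backlog = [
    {"item": "new tutorial flow",  "est_revenue": 12000, "dev_cost": 10},
    {"item": "IAP store redesign", "est_revenue": 30000, "dev_cost": 15},
    {"item": "push notifications", "est_revenue": 9000,  "dev_cost": 4},
]
ranked = prioritize(backlog)
```

In practice the revenue estimates themselves come from the metrics portfolio described below (e.g., the projected effect of a retention lift on lifetime value), but the ranking mechanic is this simple.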
Day 1 – 7, Day 14, Day 28, Day 90, and Day 365 retention
When I say “Day X”, I mean the percentage of users that returned to the app on Day X. For example, Day 1 retention of 50% means that 50% of users returned to the app one day after installing it.
As I’ve written before, I consider retention to be the most important metric (or, rather, group of metrics) a mobile developer tracks, for two reasons: 1) retention allows for the calculation of the estimated “lifetime” part of the Lifetime Customer Value metric, and understanding that metric (or at least making an informed, realistic attempt at estimating it) is the only way to conduct ROI-positive user acquisition; 2) retention communicates “delight”; it is a measurement of the degree to which your app fulfills its fundamental use case. There’s no reason to attempt to improve upon other metrics if retention is low.
I calculate retention on an ex post basis; that is, I attribute Day X retention to the cohort’s install date. So if 100 users join today, 50 return tomorrow, and 40 return the day after that, the Day 1 and Day 2 retention metrics reported for today would be 50% and 40%, respectively. In this sense, the metric is “backward-looking”. This approach makes tracking improvements across iterations (or feature launches) easier, as the developer can see exactly how the retention-day metrics changed after a specific point in time.
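The ex post calculation can be sketched as follows. The data shapes here (an install-date lookup and a list of activity events) are assumptions for illustration; the example reproduces the 100-install, 50-return, 40-return scenario above:

```python
from collections import defaultdict
from datetime import date, timedelta

def ex_post_retention(install_dates, activity, day_x):
    """Day X retention per cohort, ex post: the share of users who
    installed on day D and returned exactly on day D + day_x."""
    cohorts = defaultdict(set)    # install date -> users in that cohort
    for user, d in install_dates.items():
        cohorts[d].add(user)
    active = defaultdict(set)     # activity date -> users active that day
    for user, d in activity:
        active[d].add(user)
    retention = {}
    for d, users in cohorts.items():
        target = d + timedelta(days=day_x)
        returned = sum(1 for u in users if u in active[target])
        retention[d] = returned / len(users)
    return retention

# 100 installs on May 1; 50 return on May 2, 40 on May 3.
installs = {u: date(2013, 5, 1) for u in range(100)}
events = [(u, date(2013, 5, 2)) for u in range(50)] + \
         [(u, date(2013, 5, 3)) for u in range(40)]
day1 = ex_post_retention(installs, events, 1)  # {date(2013, 5, 1): 0.5}
day2 = ex_post_retention(installs, events, 2)  # {date(2013, 5, 1): 0.4}
```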
I track Day 28 rather than Day 30 retention because Day 28 captures weekly cyclicality in use, which can reveal interesting patterns on certain days of the week. I plot all of the retention metrics on the same chart as line graphs but make them individually toggle-able. I also orient the metrics from today, so as the X axis approaches the current date moving left to right, the longer-horizon metrics drop to 0 (because, e.g., calculating Day 7 retention for yesterday is impossible).
I discussed strategies for improving retention in this article, but at a high level I believe strong retention is achieved through communicating quality and depth in the early stage of app use, mastering a repeatable, engaging core loop in the middle stage, and providing enough content to satisfy the most enthusiastic of users at the late stage.
Daily Active Users. The number of people that use an app on a given day.
Daily New Users. The number of people that install and open an app on a given day.
I track revenue on a daily basis and segment revenue from in-app purchases and revenue from advertising. I visualize this with a stacked line graph.
Average Revenue per User. I report this on a daily basis and calculate it as total daily revenue divided by the number of users that played that day (DAU). When tracked by day, ARPU also represents another commonly referenced metric, ARPDAU, but obviously these two metrics diverge when ARPU is calculated over a longer time horizon.
Average Revenue per Paying User. I report this similarly to ARPU and calculate it as total revenue divided by the number of users that made in-app purchases.
Conversion. Tracked on a daily basis, the percentage of users that made in-app purchases. I do not report advertising “conversion”; the conversion metric only takes into account in-app purchases.
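The three monetization metrics above reduce to simple ratios over a day's data. The figures below are invented; note that ARPU computed over a single day is equivalent to ARPDAU, and conversion counts only in-app purchasers:

```python
# Daily monetization metrics from one hypothetical day of data.
def monetization(iap_revenue, ad_revenue, dau, paying_users):
    total = iap_revenue + ad_revenue   # IAP and ads reported as segments
    return {
        "arpu": total / dau,               # == ARPDAU over a single day
        "arppu": total / paying_users,     # revenue per paying user
        "conversion": paying_users / dau,  # IAP buyers only, per the text
    }

m = monetization(iap_revenue=400.0, ad_revenue=100.0,
                 dau=10000, paying_users=200)
# m["arpu"] == 0.05, m["arppu"] == 2.5, m["conversion"] == 0.02
```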
Average and Median Session Length
I track median session length because I prefer not to remove >3 sigma values from the data set; a declining or increasing average session length is a good leading indicator of changing “power user” engagement. Both are reported by day.
Average and Median Session Count
Tracked and visualized similarly to length.
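The value of reporting both statistics is easy to see with a toy data set. One marathon “power user” session (hypothetical numbers, in seconds) drags the average up while leaving the median untouched, which is exactly the divergence worth watching:

```python
import statistics

# One day's session lengths, in seconds, for a hypothetical app.
# The 3600s session is a >3-sigma outlier deliberately left in the data.
sessions = [60, 70, 80, 90, 100, 3600]

avg = statistics.mean(sessions)    # pulled upward by the power-user session
med = statistics.median(sessions)  # 85.0, unaffected by the outlier
```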
K-factor is the average number of additional users each user introduces to the app. This is very difficult to calculate for apps because mobile platforms drop almost all indicators of source prior to reaching the store. But I think estimating k-factor is important because virality increases the ROI of paid acquisition. I have previously outlined a strategy for estimating k-factor and published a virality model.
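Since direct attribution is mostly unavailable on mobile, a workable estimate decomposes k-factor into two quantities the developer can measure or approximate: invites sent per user and the rate at which invites convert to installs. Both inputs below are hypothetical:

```python
# A simple k-factor estimate. Both inputs are themselves estimates on
# mobile, since app stores strip most referral data before install.
def k_factor(invites_per_user, invite_conversion_rate):
    """Average number of new users each existing user introduces."""
    return invites_per_user * invite_conversion_rate

# e.g., each user sends ~2 invites and ~10% of invites convert.
k = k_factor(invites_per_user=2.0, invite_conversion_rate=0.1)  # 0.2
```

A k-factor below 1 (as here) means virality amplifies rather than replaces paid acquisition: each purchased install effectively yields 1 / (1 - k) total installs.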
If an app is integrated into a social platform like Twitter or Facebook (which, if appropriate, it probably should be), tracking social invites is fairly straightforward. This article identifies some missed opportunities in Vine’s launch / early growth strategy that a mobile PM should take pains to not repeat.
Selling Minimum Viable Metrics
The most onerous aspect of integrating metrics into the MVP development framework may be selling the idea internally; it can be an unpopular proposition in small teams wary of “big company” bureaucracy impeding the creative process and diluting the vision of a truly disruptive app.
But to that I believe the best response is simply that methods can (and should) be disrupted just as product verticals are; the lean start-up methodology is effective when applied to the web but requires adjustment to be practical on mobile, given the fundamental differences between the two platforms. Mobile development requires a greater reliance on data and a more tempered utilization of intuition in the design process.
The point of this post is to provide a starting point for fleshing out the analytics initiative of a mobile MVP. To that end, I have also created this dashboard template (source code here) to provide some visual cues regarding layout and chart formatting. The data is all randomized, but production data can be used by editing the function that calculates data in the template.