Leading up to the launch of a mobile app, a developer might construct some form of model for deploying the budget it has allocated to launch marketing campaigns (or, conversely, for backing into a launch budget from revenue targets). This model is likely to include a number of assumptions about the incremental installs (“uplift”) that will be generated if the app is featured and/or if it reaches a visible position (e.g. 10, 6, etc.) on the Top Downloaded platform charts.
When assigning values to those two assumed sources of installs, it’s tempting to use data from the developer’s previous launches (or from services like App Annie or Priori Data) to quantify the exact numbers of installs that featuring and chart visibility will generate. For instance, a developer might put together a table like the following for a previous launch in order to estimate the effects that featuring and chart position had, and then extrapolate those install effects to the upcoming launch (the numbers below are completely fictitious):
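This kind of back-of-the-envelope attribution can be sketched in code. Everything below is hypothetical: fictitious daily install counts from an imaginary previous launch, labeled with the visibility the app had on each day, with uplift crudely estimated as the difference from the pre-featuring baseline:

```python
from statistics import mean

# Fictitious daily installs from a previous launch, labeled by visibility.
days = [
    ("organic",        1_000),
    ("organic",        1_100),
    ("featured",       5_500),
    ("featured",       5_200),
    ("featured+top10", 9_000),  # overlapping effects; excluded from this
    ("featured+top10", 8_600),  # crude estimate since they can't be cleanly split
    ("top10",          4_000),
]

baseline = mean(installs for label, installs in days if label == "organic")
featured = mean(installs for label, installs in days if label == "featured")
top10    = mean(installs for label, installs in days if label == "top10")

# Installs per day attributed to each source of visibility.
featuring_uplift = featured - baseline
chart_uplift     = top10 - baseline

print(featuring_uplift, chart_uplift)
```

Even this toy version exposes the attribution problem: on days when the app was both featured and charting, there is no clean way to split the uplift between the two sources.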
Assigning values to the installs generated by those two sources of visibility is an interesting exercise, but applying any such estimates to a future launch is fraught with problems, for three reasons:
1) App launches are impacted by exogenous factors that are almost too numerous and expansive to control for, e.g. which other apps are launching in a given week, or the season in which the launch takes place (summer, Christmas, etc.). The conditions during one week can’t be assumed to match those of another.
2) App store ranking inflation puts constant upward pressure on the number of installs required to reach a given Top Downloaded chart rank, yet the number of installs generated by that visibility is probably under corresponding downward pressure.
3) Featuring placements and Top Downloaded chart positions are essentially just ad placements generating ad impressions (albeit placements that carry the editorial endorsement of the platforms). The conversion of an ad impression (especially a completely untargeted impression, as any platform impression is) is highly dependent on the app’s appeal. No two apps can be expected to convert the same way in the same ad placement; a niche app can’t be assumed to generate the same number of downloads as an app with universal appeal, even when given the same placement and number of impressions (and, per reasons #1 and #2 above, the impressions themselves are likely to vary).
Two alternative approaches to estimating the installs afforded by each of these platform placements, each with its own caveats, follow:
The first, more individualized approach is to build a top-down model that starts with the number of impressions these placements will generate for a given demographic / platform combination. These impressions can be translated into downloads using the conversion rates experienced during soft launch:
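A minimal sketch of this top-down calculation, with every input an assumption (the placement audience size is not public, the visibility share is a guess, and the conversion rate is whatever soft launch produced):

```python
# Hypothetical top-down estimate for one demographic / platform combination.
weekly_featured_page_users = 5_000_000  # assumed WAU of the Featured page (not public)
share_seeing_placement     = 0.60       # assumed share who scroll far enough to see the app
soft_launch_cvr            = 0.02       # install conversion rate observed in soft launch

impressions        = weekly_featured_page_users * share_seeing_placement
estimated_installs = impressions * soft_launch_cvr

print(round(estimated_installs))  # installs from one week of featuring, under these assumptions
```

The point of structuring the model this way is that each assumption is explicit and can be stress-tested independently, rather than being baked into a single "featuring = X installs" number.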
Of course, this approach likewise depends on assumptions that are impossible to validate: the WAU (weekly active users) of the Featured and Top Charts pages on a given platform, which is not public (although the number of active devices for a given device / geography is easy enough to track down). This approach also assumes that ad placement conversion rates will be roughly the same in soft launch campaign placements and platform placements (featuring / chart ranking), which may not be the case.
The second approach is a hybrid of the two above: to compare the soft launch conversion rates of the new app and the benchmark app, and adjust the estimated installs per platform placement accordingly.
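A sketch of this adjustment, with all figures hypothetical: the benchmark app's observed installs from a placement are scaled by the ratio of the two apps' soft launch conversion rates, so a lower-appeal app projects proportionally fewer installs from the same placement:

```python
# Hypothetical hybrid adjustment by relative soft-launch appeal.
benchmark_placement_installs = 250_000  # installs a comparable app got from featuring (fictitious)
benchmark_soft_launch_cvr    = 0.025    # that app's soft-launch install conversion rate
our_soft_launch_cvr          = 0.015    # our app's soft-launch install conversion rate

appeal_ratio      = our_soft_launch_cvr / benchmark_soft_launch_cvr
adjusted_estimate = benchmark_placement_installs * appeal_ratio

print(round(adjusted_estimate))
```

This keeps the empirical benchmark data in the model while correcting, at least directionally, for the fact that different apps convert the same placement differently.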
The downsides to this approach are that it is impacted by reasons #1 and #2 above, and that it assumes soft launch conversion and install rates will carry over to global launch platform placements. But this approach should at least directionally account for differences in the appeal of individual apps, which addresses the most problematic potential premise in launch planning: that all featuring placements and chart ranking positions generate the same number of installs for every app.