The most fundamental concept in paid user acquisition, the Lifetime Customer Value / Cost per Acquisition spread describes how much profit each acquired user produces (the “spread” being the difference between the two values). But while the concept may be fairly simple to grasp, the actual calculation of these values can be nuanced. These metrics provide value only when they are directly comparable, and many factors determine how LCV and CPA must be derived to achieve that comparability. Remember: averages in the freemium model are useless. Comparing the LCV / CPA spread using average values does nothing except relate some theoretical profit margin which may or may not exist.
When evaluating the LCV / CPA spread, consideration must be given to how well LCV can inform the CPA bid. CPA bids are fairly flexible; most mobile ad networks provide for bidding based on at least geography and hardware type. So long as a developer can track users by acquisition source, it should have enough context to properly set a bid based on LCV.
But LCV segmentation is entirely a function of the developer’s ability to aggregate and de-aggregate revenue in its analytics system; unlike CPA bidding, that flexibility isn’t provided out of the box and must be built. Since LCV determines CPA, the developer is fully responsible for putting in place the controls required to optimize acquisition spending. The following dimensions should be taken into account when calculating an LCV to determine CPA prices.
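As a minimal sketch of the kind of de-aggregation described above — computing per-segment revenue from raw purchase records — the field names, segments, and figures below are illustrative assumptions about an analytics schema, not a prescribed design:

```python
from collections import defaultdict

# Hypothetical purchase records; country and device are the segmentation keys.
purchases = [
    {"user": "u1", "country": "US", "device": "tablet", "amount": 4.99},
    {"user": "u2", "country": "US", "device": "phone",  "amount": 0.99},
    {"user": "u3", "country": "JP", "device": "phone",  "amount": 9.99},
]

revenue_by_segment = defaultdict(float)
users_by_segment = defaultdict(set)
for p in purchases:
    key = (p["country"], p["device"])
    revenue_by_segment[key] += p["amount"]
    users_by_segment[key].add(p["user"])

# Revenue per paying user, per (country, device) segment -- the raw
# material for a segment-level LCV rather than a useless global average.
arppu_by_segment = {
    key: revenue_by_segment[key] / len(users_by_segment[key])
    for key in revenue_by_segment
}

for segment, arppu in sorted(arppu_by_segment.items()):
    print(segment, round(arppu, 2))
```

The essential point is that revenue must be stored with enough dimensions attached that it can be re-grouped along whatever axis a bid is set on.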
Some LCV models discount (i.e. reduce back to a present value based on a discount rate) revenue and some don’t. Whether or not discounting LCV makes sense in a commercial model depends on the period over which a cash stream will be active. Take World of Warcraft, for instance: given the chance to acquire a user with an LCV of $50 over a 10-year lifetime or an LCV of $40 over a 1-year lifetime for the same CPA (treating each LCV as a lump sum received at the end of the lifetime, to keep the arithmetic simple), Blizzard would most likely choose the latter. Why? For these LCV metrics to be equivalent in present value terms, Blizzard would need to discount at exactly 2.5104%; at any higher rate, the $40 user is worth more today. Since a firm’s investable projects generally yield an internal rate of return well above 2.5%, the shorter lifetime wins.
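The break-even rate in this example can be checked numerically. This is a minimal sketch under the lump-sum assumption above; the helper `present_value` is an illustrative name, not a library function:

```python
def present_value(cash_flow, rate, years):
    """Discount a single future cash flow back to today's dollars."""
    return cash_flow / (1 + rate) ** years

# Setting 50/(1+r)^10 equal to 40/(1+r) gives (1+r)^9 = 50/40,
# so the break-even discount rate is r = 1.25**(1/9) - 1.
break_even_rate = (50 / 40) ** (1 / 9) - 1
print(f"break-even discount rate: {break_even_rate:.4%}")  # ~2.5104%

# At any rate above break-even, the $40 / 1-year user has the higher
# present value, which is why the shorter lifetime is preferable.
for rate in (0.01, break_even_rate, 0.10):
    pv_long = present_value(50, rate, 10)
    pv_short = present_value(40, rate, 1)
    print(f"r={rate:.2%}: PV($50/10yr)={pv_long:.2f}, PV($40/1yr)={pv_short:.2f}")
```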
Given the long lifetime of the first player in the example, discounting the above lifetime values adds depth to the choice. If those lifetimes were 10 months and 1 month, respectively, discounting would likely favor the first user (because absent a very large discount rate, the differences between the present values and lifetime values would be small). The point is that discounting can change the decision dynamics of acquisition campaigns when user lifetimes are very long.
Comparing CPA to LCV is unreasonable unless both metrics represent the same demographic core. In gaming, a handful of countries generally exhibit high LCV metrics, which justifies higher CPA spending there. When LCV is measured for a country in that list, that country’s CPA should be allocated accordingly; setting a budget based on an “average” CPA would underinvest in acquisition for these countries, thereby placing an unnecessary ceiling on revenue. Similarly, that “average” CPA would result in an over-investment in locations where LCV values fall below the average.
Demographic behavior is highly dependent on the nature of the product, and LCV values can reflect a number of local realities — not just per-capita income. Culture may affect how an app is received, as might the degree to which it is localized (if it’s text-heavy). Without being able to measure LCV at the geographical level, a developer doesn’t know if it is throwing money at an unresponsive demographic.
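The under- and over-investment argument above can be sketched with a toy calculation. The country LCVs and the 30% target margin below are invented numbers purely for illustration, and a flat LCV-to-bid rule is only one of many ways a team might translate LCV into a CPA ceiling:

```python
# Hypothetical per-country LCVs (dollars per acquired user).
country_lcv = {"US": 4.00, "JP": 5.50, "KR": 4.80, "BR": 1.20, "IN": 0.60}
target_margin = 0.30  # keep 30% of LCV as profit; the rest is the bid ceiling

per_country_bid = {c: lcv * (1 - target_margin) for c, lcv in country_lcv.items()}
average_bid = sum(country_lcv.values()) / len(country_lcv) * (1 - target_margin)

for country, bid in sorted(per_country_bid.items()):
    gap = bid - average_bid
    # Where the segmented bid exceeds the flat bid, an "average" CPA
    # under-invests in that country; where it is lower, it over-invests.
    status = "under-invests" if gap > 0 else "over-invests"
    print(f"{country}: bid ${bid:.2f} vs flat ${average_bid:.2f}"
          f" (average bid {status} by ${abs(gap):.2f})")
```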
LCV metrics change over time. Market dynamics play a big role in an app’s LCV: apps can become outdated, they can face competition of a higher quality, they can become obsolete (i.e. the use case no longer exists), their critical mass of users can dissipate, etc. An app released six months ago may not present the same value proposition to users as an app released today, due to external factors or the decline of the app’s relevance. LCV therefore must be dynamic; I measure LCV based on trailing 2-week ARPU and retention rates. A cohort-based LCV is useless because it can’t be used to make decisions germane to current market conditions.
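As a sketch of how a rolling LCV in this spirit might be computed: the geometric retention-decay model, the trailing 14-day window, and all function names below are illustrative assumptions rather than a prescribed method.

```python
def expected_lifetime_days(day1_retention, day14_retention):
    """Rough expected lifetime (in days) from two retention points,
    assuming daily retention decays geometrically between day 1 and day 14."""
    # Implied daily decay factor over the 13 steps from day 1 to day 14.
    daily = (day14_retention / day1_retention) ** (1 / 13)
    # Geometric-series approximation: sum of d1 * daily^k for k >= 0.
    return day1_retention / (1 - daily)

def rolling_lcv(trailing_revenue, trailing_dau_days, d1, d14):
    """LCV = trailing ARPDAU x expected lifetime days, recomputed on a
    rolling window so it tracks current market conditions."""
    arpdau = trailing_revenue / trailing_dau_days
    return arpdau * expected_lifetime_days(d1, d14)

# Illustrative inputs: $14,000 revenue over 200,000 DAU-days in the
# trailing two weeks, with 40% day-1 and 10% day-14 retention.
lcv = rolling_lcv(14_000.00, 200_000, d1=0.40, d14=0.10)
print(f"rolling LCV estimate: ${lcv:.2f}")
```

Because every input comes from a trailing window, the estimate moves as the app’s market position moves, which is the property a static cohort-based LCV lacks.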
The developer’s future plans should also be taken into consideration when forecasting LCV. A developer may decide to stop producing new content for an app at some point in the future, after which revenue will precipitously drop; the LCV calculation must reflect this change if new content plays a role in spending patterns.
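The effect of a content sunset on a forecast LCV can be sketched as follows; the decay rates, the monthly granularity, and the content-end horizon are invented numbers for illustration only:

```python
def forecast_lcv(monthly_arpu, months, content_end_month,
                 normal_decay=0.95, post_content_decay=0.5):
    """Sum projected per-user monthly revenue over a forecast horizon.
    While new content ships, revenue decays slowly; once content stops
    (at content_end_month), it decays much faster."""
    lcv, revenue = 0.0, monthly_arpu
    for month in range(1, months + 1):
        lcv += revenue
        decay = normal_decay if month < content_end_month else post_content_decay
        revenue *= decay
    return lcv

# Same app, 12-month horizon: content throughout vs. content ending at month 6.
with_content = forecast_lcv(1.00, months=12, content_end_month=13)
sunset = forecast_lcv(1.00, months=12, content_end_month=6)
print(f"content all year: ${with_content:.2f}, sunset at month 6: ${sunset:.2f}")
```

A forecast that ignores the sunset would set CPA bids against the larger figure and overpay for every user acquired near the end of the content pipeline.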
The LCV / CPA spread is the basic building block of mobile marketing, but it’s not useful if it’s not supported by the analytics needed to render these two metrics comparable. Averages are dangerous in mobile, especially in freemium mobile apps; without forecasting LCV based on the conditions identified above, a developer risks targeting (and budgeting for) a segment of users that doesn’t exist.