LTV is still a relevant and functional metric for marketers in 2019.
This guest post was written by Kate Minogue, who heads the EMEA Marketing Science team for gaming at Facebook.
Lifetime Value (LTV) — or more practically, Long-Term Value — should serve as a North Star for advertisers, both in helping them to understand the ultimate quality of their users as well as in giving them the confidence and freedom to acquire them profitably.
In the context of mobile gaming, and particularly IAP-monetized games, we are all too aware of the rarity of a paying user. A report published last year by the mobile attribution company AppsFlyer showed that only 3.8% of mobile game players ever become payers. So, in User Acquisition, teams are challenged with casting the net wide enough to find a large volume of players containing enough purchasers to fund acquisition. If we add the further restriction that this group needs to pay back fully in the short term, we unnecessarily limit ourselves to a much rarer event, potentially missing out on a large volume of valuable players.
The recent article, It’s time to retire the LTV metric, argues for moving away from LTV altogether because it doesn’t gel with the way most companies operate today: the LTV metric we are used to appears outdated. But I believe the problem is not the LTV metric itself; rather, it is the many failings in how companies use it today.
Let’s start with one very important fact up front — and my favorite quote — to put this in perspective:
"All models are wrong, but some are useful." (George E. P. Box)

We look to LTV models to solve the challenges of acquisition and payback, and when they don’t work, we try to make them more and more sophisticated. But they are still models. And still wrong. We need to focus on the final word: "useful."
In working with a range of advertisers on their Marketing Science strategies, we at Facebook have seen that most of the challenges in using pLTV effectively fall into one of three areas: the fundamental definition of LTV, model choice and accuracy, or a lack of flexibility in the chosen model.
How can the simple act of defining LTV be a challenge?
When building a model that predicts LTV, the first task for a team to undertake is to define what they are predicting and the appropriate time horizon for that quantity. If this stage of the prediction process isn’t handled with sufficient care and attention, the rest of the project is unlikely to go smoothly. Three factors feed into the definition of LTV and need to be agreed upon by both the team building the model (which has access to all the data) and the teams that plan to use it:
- What is value?
- When do we predict?
- What period do we predict for? (What is our “Lifetime”?)
The value question is becoming more complex with the rise of games with ad or hybrid monetization models. Some advertisers include an organic coefficient (or k-factor) in their value definition to ensure the full impact of user acquisition can be measured. This will make sense for some games and not others, but either way it is no easy calculation. The goal at this stage is twofold:
- Ensuring that all relevant value sources are being accommodated by the LTV model as accurately as possible;
- Ensuring that the marketing teams that will refer to this KPI are aware of any assumptions that have been made in defining value so that they are prepared when making trade-offs (e.g. adding a blanket 20% to account for ad revenue contribution).
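To make the trade-offs above concrete, here is a minimal sketch of one way to blend value sources into a single per-user figure. The 20% ad-revenue uplift comes from the example in the text; the k-factor value and the function itself are illustrative assumptions, not recommended numbers.

```python
# Sketch: one way to combine value sources into a single per-user value.
# The 20% ad-revenue uplift and the k-factor are illustrative assumptions.

AD_REVENUE_UPLIFT = 0.20   # blanket uplift for ad monetization (assumption)
K_FACTOR = 0.15            # organic users attracted per acquired user (assumption)

def user_value(iap_revenue: float,
               ad_uplift: float = AD_REVENUE_UPLIFT,
               k_factor: float = K_FACTOR) -> float:
    """Blend IAP revenue, an ad-revenue uplift, and an organic coefficient."""
    direct_value = iap_revenue * (1 + ad_uplift)
    # Each acquired user also "carries" a share of the organics they bring in.
    return direct_value * (1 + k_factor)

print(user_value(10.0))  # 10 * 1.20 * 1.15 = 13.8
```

The point is less the arithmetic than the documentation: every coefficient in a function like this is an assumption the marketing team should know about before making trade-offs.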
As for the length of the time horizon, I always give the very unsatisfactory answer of, “What makes the most sense for your business?” Be seriously cautious when an external party, having not seen your data, gives you guidance on the timeline over which you should make a prediction. The key takeaway here is to know your own user value curves: a best practice is to predict as early as possible and to update continuously as more data becomes available. The first prediction the team uses ought to be as early, and as accurate, as necessary to make a good business decision. “Lifetime” is unlikely to be forever (nor would that be useful); instead we want a duration that makes sense for our customer retention, monetization and, as has been mentioned on MDM many times before, cash flow. The “Advertising Recoup Evolution” approach outlined previously by Eric Seufert can be useful, provided you have a robust understanding of the relationship between early payback and what your company needs to drive profit and growth, and provided the chosen window is not arbitrary, short-sighted, or unnecessarily restrictive.
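“Predict early, update continuously” can be sketched with the simplest possible approach: multiply a cohort’s observed revenue at day N by a historical day-N-to-horizon multiplier, and refresh the estimate as the cohort ages. The multipliers and revenue figures below are invented for illustration; a real model would derive them from your own cohort curves.

```python
# Sketch: ratio-based early LTV prediction, refreshed as data accumulates.
# All multipliers are made-up examples, not recommended values.

# Historical ratios: day-180 revenue / day-N revenue, from past cohorts (assumed).
HORIZON_MULTIPLIERS = {1: 8.0, 3: 5.0, 7: 3.5, 14: 2.4, 30: 1.7}

def predicted_d180_ltv(observed_revenue: float, days_since_install: int) -> float:
    """Return a day-180 LTV estimate using the widest usable observation window."""
    usable = [d for d in HORIZON_MULTIPLIERS if d <= days_since_install]
    if not usable:
        raise ValueError("no multiplier available yet; wait until day 1")
    window = max(usable)
    return observed_revenue * HORIZON_MULTIPLIERS[window]

# The same hypothetical cohort, re-predicted as it ages:
print(predicted_d180_ltv(0.50, 1))   # early and noisy: 4.0
print(predicted_d180_ltv(1.40, 7))   # 4.9
print(predicted_d180_ltv(2.50, 30))  # 4.25
```

Even this toy version shows why the first prediction only needs to be accurate enough to act on: later refreshes, grounded in more observed behavior, will correct it.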
Model choice is key — so do we need all the bells and whistles of the latest developments in machine learning?
Model choice is key but that does not mean more complicated always equates to better! Different algorithms have different strengths (transparency, stability, handling of extremes) so you want the one that fits your specific use case and priorities.
We see a lot of models that rely predominantly on macro features such as demographic, device, or acquisition channel. A concern is that these features alone often do not generalize as well as in-game behaviors when external factors (such as marketing) shift. Take your acquisition channels as an example: first and foremost, these will be dependent on your attribution methodology (often another “wrong” model). Second, channel can be a very blunt instrument as a model feature if you run many different campaigns on a given channel or make frequent changes. Communication between Product, Marketing, and Data Science is critical to separating variables that are merely caught up in cyclical relationships from those that truly indicate future value.
The reality is that a simple model may be perfect for you if it achieves what it has been built to do. Ideally this would be one model, one source of truth, for the company, but we at Facebook have seen cases where the model needs to differ based on the differing needs of internal teams. Your finance team may have different requirements from your monetization or marketing teams if the period of interest, the granularity, or the accuracy levels required differ; in every case, what matters is how confident you are in the accuracy of the model. That means more than checking the aggregated accuracy score (if I sum all my predictions and all of my actuals, the difference is X%) at the point of model build and assuming it will take care of itself. It is important to monitor accuracy over time, to know that accuracy for different cohorts of users, and to know what it means for you when deciding the “winning” strategy and where to invest your money.
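A tiny sketch of why the aggregated accuracy score is not enough: the overall bias can look acceptable while individual cohorts are badly over- or under-predicted. The cohort names and numbers below are made up purely to illustrate the failure mode.

```python
# Sketch: aggregate bias vs per-cohort bias. All data is illustrative.

def aggregate_bias(predicted, actual):
    """(sum of predictions - sum of actuals) / sum of actuals."""
    return (sum(predicted) - sum(actual)) / sum(actual)

cohorts = {
    # cohort name: (predicted LTVs, actual LTVs) -- hypothetical values
    "organic":   ([4.0, 5.0, 6.0], [4.2, 5.1, 5.7]),
    "channel_a": ([9.0, 8.0],      [5.0, 4.0]),
    "channel_b": ([2.0, 3.0],      [4.5, 5.5]),
}

all_pred = [p for preds, _ in cohorts.values() for p in preds]
all_act = [a for _, acts in cohorts.values() for a in acts]
print(f"overall bias: {aggregate_bias(all_pred, all_act):+.1%}")

for name, (preds, acts) in cohorts.items():
    print(f"{name}: {aggregate_bias(preds, acts):+.1%}")
```

Here the overall figure hides an overprediction on one channel offset by an underprediction on another; a team steering budget by the aggregate number alone would invest in exactly the wrong place.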
So, we built a really accurate model, are we done?
You expect your business, product, and players to change over time and your model needs to do the same. The owner of your LTV model needs to be close enough to these changes to either futureproof the model to handle some of these changes or else to know when a change is significant enough to prompt a refresh or complete rebuild.
Aside from these underlying changes, you also need to be conscious of whether your model acknowledges the non-acquisition marketing activities that your teams pour blood, sweat, tears (and more budget) into. If you are investing in brand loyalty or retention marketing, can your model adapt to the changes you strongly believe this delivers in customer LTV? Or are you still relying on the value your analytics team reported at the time of acquisition?
Okay, we’ve read this far — what is the silver bullet?
This should come as no surprise, but an inaccurate LTV model won’t be fixed by one magic variable you haven’t thought of building in. Your best chance of success with pLTV measurement is true collaboration and open communication between all teams. All teams that understand your customer and can have any impact on their journey should play a part in the design and implementation of these models. Disconnected teams, models built in isolation by people that don’t understand the business, or teams that don’t understand the models they’re using will all lead to poor decision making and, if that leads to the demise of LTV, then you have truly thrown out the champagne with the cork.
Kate Minogue heads up the Marketing Science team for Gaming, EMEA at Facebook, where her team works with some of the largest Gaming companies globally to advise on their Marketing effectiveness and data science strategies. Prior to Facebook, Kate worked in Data Science for 6 years focused on Marketing and Customer analytics across Online, Retail, Gaming and Financial Services.