This is the third installment in the Surviving the Mobile Marketing Winter series. For the previous two installments, see:
- Part 1: Surviving the Mobile Marketing Winter
- Part 2: The Flight to Quality
In the previous installment of the Mobile Marketing Winter series, I outlined three risks that a performance marketing team faces when confronted with a systemic shock, whether economic (eg. a recession) or related to the broader operational environment (eg. Apple’s App Tracking Transparency privacy policy or proposed privacy legislation). The first two of those risks present thorny analytical challenges.
Most performance marketing teams operate against very thin margins on compressed timeframes, informed by a return-on-ad-spend (ROAS) curve. This curve captures the progression of cohort profitability over time: when a cohort is acquired, what is the timeline over which it generates revenues relative to the money that was spent to acquire it, and when does cumulative revenue eclipse the cohort’s acquisition cost (CAC)? This curve is presented on a percentage basis over time, and it often looks something like the diagram presented below. In this diagram, the measured ROAS for multiple hypothetical cohorts is shown, each indexed from the day of acquisition:
Each point on the curve captures the observation of cumulative revenue generated by a given cohort up to that point in time, divided by the cost of acquiring that cohort. Media buyers — the people living inside of Facebook Ads Manager and other ad platform interfaces every day — are the primary customers of this curve.
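To make this concrete, here is a minimal sketch in Python of how those points might be computed for a single cohort; the spend and revenue figures are invented purely for illustration, and a real pipeline would read them from an attribution or analytics warehouse:

```python
# Minimal sketch: computing the points on a cumulative ROAS curve for a
# single cohort. All figures are illustrative.

cohort_cac = 10_000.00  # total spend to acquire the cohort, in dollars

# Revenue generated by the cohort on each day since acquisition (Day 0, 1, 2, ...)
daily_revenue = [1200.0, 800.0, 650.0, 500.0, 450.0, 400.0, 370.0]

cumulative_revenue = 0.0
for day, revenue in enumerate(daily_revenue):
    cumulative_revenue += revenue
    roas = cumulative_revenue / cohort_cac  # one point on the curve
    print(f"Day {day}: cumulative ROAS = {roas:.1%}")
```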
Note that the diagram above might sit in a media buying team’s dashboard, and it simply informs them of cohort performance relative to that cohort’s date of acquisition. This is interesting information, but it’s not necessarily actionable: some cohorts are better than others, but what’s the standard for performance? How should these curves inform the team’s work? This is where a marketing data science or analytics team enters the picture: the media buying team needs to know what quantifiable ROAS standard it should achieve in order to produce profit (that is: ROAS of more than 100% within some timeline) with its ad spend.
The media buying team relies on the marketing analytics team to produce a model that forecasts cohort revenue to some time-based endpoint using early cohort performance data. This is often called an LTV (lifetime value) model, and I’ve written about this concept extensively. The LTV model forecasts revenue (and therefore ROAS) to some pre-determined cohort age (eg. 90 or 180 days from acquisition) from very early revenue data, and the model output might be presented like the simple data table below.
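As a sketch of what such a model might look like in its simplest form, assuming (purely for illustration, and not as a description of any specific model from my earlier writing) that the ratio of Day 90 to Day 3 cumulative revenue is stable across historical cohorts:

```python
# Minimal sketch of a multiplier-based LTV projection, assuming historical
# cohorts exhibit a stable ratio between Day 3 and Day 90 cumulative revenue.
# The figures and the Day 3 anchor are illustrative assumptions.

# Historical cohorts: (Day 3 cumulative revenue, Day 90 cumulative revenue)
historical = [(2650.0, 9800.0), (3100.0, 11500.0), (2400.0, 9200.0)]

# Average Day 90 / Day 3 expansion multiple observed historically
multiple = sum(d90 / d3 for d3, d90 in historical) / len(historical)

def project_day90_roas(day3_revenue: float, cac: float) -> float:
    """Forecast Day 90 ROAS for a young cohort from its Day 3 revenue."""
    return (day3_revenue * multiple) / cac

# A cohort three days old: $2,900 of revenue against $10,000 of spend
print(f"Predicted Day 90 ROAS: {project_day90_roas(2900.0, 10_000.0):.1%}")
```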
The media buying team uses these projections to adjust campaign settings within the context of a profitability requirement that is often assigned to it by the executive team; for instance, ROAS of 110% by Day 90. The team adjusts bids, refines campaign targeting, and retires existing campaigns or flights new ones based on the ROAS predictions generated by the LTV model for recent cohorts.
An LTV model uses some set of trailing cohort monetization data to generate estimates. The longer the performance timeline required by the executive team (eg. ROAS of 110% by Day 90 vs. Day 30), the more trailing data is needed, which is obvious. But establishing credible estimates for various time-based waypoints on the predicted ROAS curve requires a great deal of data, simply because later-stage retention for cohorts is often very low relative to a cohort’s initial size. If a cohort of 1,000 users is acquired, but Day 30 retention for the product is 20% and Day 90 retention is 10%, then only 200 users from that cohort reach Day 30 in the product and only 100 reach Day 90. I walk through the realities of this in How much data is needed to predict LTV?
In order to get around data volume limitations, a marketing analytics team might simply fit a curve against observed ROAS waypoints up to some point and then project that curve out to the prescribed ROAS timeline (eg. sufficient data exists for robust measurements through Day 30, and that curve is projected out to Day 90). Doing this obviously relies on assumptions about monetization behavior: that users engage with the product, at the level of the cohort, in a pattern that can be captured by a generic or bespoke, composite curve function.
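As a sketch of this approach, assuming (purely for illustration) that cumulative ROAS follows a power curve of cohort age, the waypoints observed through Day 30 can be fit and extrapolated to Day 90:

```python
# Minimal sketch: fit a curve to observed ROAS waypoints through Day 30,
# then project it out to Day 90. The power-curve functional form and the
# observed values are illustrative assumptions, not a recommendation.

import numpy as np
from scipy.optimize import curve_fit

def power_curve(day, a, b):
    """Cumulative ROAS as a power function of cohort age."""
    return a * np.power(day, b)

# Observed cumulative ROAS (as fractions of spend) at early waypoints
days = np.array([1, 3, 7, 14, 30])
roas = np.array([0.12, 0.22, 0.34, 0.47, 0.68])

params, _ = curve_fit(power_curve, days, roas)
day90_roas = power_curve(90, *params)
print(f"Projected Day 90 ROAS: {day90_roas:.1%}")
```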
For clarity, a media buying workflow might look like the following:
- The executive team at a company establishes a performance standard for the marketing team of 110% ROAS by Day 90. This means that ad spend deployed on any given cohort must produce a 10% return by 90 days from its acquisition;
- The marketing analytics team builds an LTV model that establishes a cumulative monetization curve based on historical cohort data. When cohorts are acquired, their to-date cumulative monetization data is input into the curve, and a Day 90 ROAS estimate is produced for each cohort. These estimates might be updated daily as cohorts progress (eg. for each additional day the cohort exists, its Day 90 ROAS estimate is updated using new monetization data);
- The media buying team uses recent cohort predicted performance to adjust its ad campaign settings. If the cohorts acquired recently (last few days) exceed performance requirements, bids are increased to more aggressively acquire users at higher prices (which would presumably result in more users being acquired at a lower ROAS). And the opposite is true: if recent cohorts fall behind performance requirements, bids are decreased.
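In code, that last step might reduce to a thermostat-like rule; the target, step size, and bid values below are invented for illustration:

```python
# Minimal sketch of the bid-adjustment feedback loop described above.
# The target, step size, and bid values are illustrative assumptions.

TARGET_D90_ROAS = 1.10   # 110% ROAS by Day 90, set by the executive team
STEP = 0.05              # adjust bids by 5% per review cycle

def adjust_bid(current_bid: float, predicted_d90_roas: float) -> float:
    """Raise bids when recent cohorts beat the target; lower them when they lag."""
    if predicted_d90_roas > TARGET_D90_ROAS:
        return current_bid * (1 + STEP)   # acquire more aggressively
    if predicted_d90_roas < TARGET_D90_ROAS:
        return current_bid * (1 - STEP)   # pull back toward profitability
    return current_bid

# Recent cohorts are predicted to hit 121% ROAS by Day 90: bids rise
print(f"{adjust_bid(current_bid=2.50, predicted_d90_roas=1.21):.2f}")
```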
This generalized workflow is functional and allows media buying teams to react quickly (within a few days) to changes in the advertising market. But it assumes that consumer behaviors are mostly predictable: that new customers will monetize or retain similarly to old customers, and that the only real points of differentiation between cohorts are the prices paid for them. These assumptions obviously don’t hold in a period of economic uncertainty, or when the advertising ecosystem experiences a systemic shock related to platform policy. In these cases, models derived from historical data need to be reconstituted: consumer unit economics can change fundamentally. And there are two very specific manifestations of this that break customer monetization models:
- New users don’t behave like old users;
- Old users don’t behave as predicted when they were acquired.
Both of these developments are problematic, and each requires that the model informing the economics of new user acquisition (and of expected ongoing revenue generation for existing cohorts) be re-evaluated or rebuilt completely.
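How might a team detect the second manifestation? One simple diagnostic, sketched below with invented cohort figures and an assumed 5% tolerance, is to track prediction error as cohorts mature: if realized ROAS consistently undershoots the forecasts made at acquisition, the model is stale:

```python
# Minimal sketch of a model-drift check: compare each cohort's realized
# Day 90 ROAS against what the LTV model predicted when it was young.
# The cohort figures and the 5% tolerance are illustrative assumptions.

# (predicted Day 90 ROAS, realized Day 90 ROAS) for recently matured cohorts
cohorts = [(1.12, 1.11), (1.15, 1.04), (1.10, 0.97), (1.13, 0.95)]

errors = [realized - predicted for predicted, realized in cohorts]
mean_error = sum(errors) / len(errors)

# A persistent shortfall beyond tolerance suggests the model needs rebuilding
if mean_error < -0.05:
    print(f"Mean shortfall of {abs(mean_error):.1%}: re-evaluate the LTV model")
```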
And that re-evaluation is a considerable undertaking that almost certainly requires cutting advertising spending. I present one strategy for achieving this in It’s time to retire the LTV metric: reduce campaign bids to some minimal level, establish new ROAS standards at early-stage waypoints (eg. Day 3, Day 7, etc.), and systematically increase bids as performance at each of those waypoints is validated at a profitable standard.
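A sketch of that ramp, with invented waypoint standards and step size (the full strategy is detailed in the linked piece):

```python
# Minimal sketch of the waypoint-gated bid ramp described above. The
# waypoint standards and the 10% step are illustrative assumptions.

MIN_BID = 0.50          # the minimal bid level the ramp starts from
RAMP_STEP = 0.10        # raise bids 10% each time a waypoint is cleared

# Early-stage ROAS standards: day -> minimum acceptable cumulative ROAS
waypoints = {3: 0.20, 7: 0.35, 14: 0.50}

def ramped_bid(observed_roas: dict[int, float]) -> float:
    """Start from the minimal bid and step up for each early-stage
    waypoint that recent cohorts clear at a profitable standard."""
    bid = MIN_BID
    for day, target in sorted(waypoints.items()):
        if observed_roas.get(day, 0.0) >= target:
            bid *= 1 + RAMP_STEP
        else:
            break  # stop ramping at the first missed waypoint
    return bid

# Recent cohorts cleared Day 3 and Day 7 but missed Day 14
print(f"{ramped_bid({3: 0.24, 7: 0.37, 14: 0.44}):.2f}")
```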
This is a slow process, and it necessitates reducing ad spend in a way that almost certainly reduces revenue: if advertising is being undertaken methodically, then it is a direct, lopsided input to product revenue (ie. one dollar of ad spend produces more than one dollar of revenue). But if general economic health deteriorates in a recession, or if consumer behaviors are meaningfully altered over time (eg. post-COVID spending patterns), then this model recalibration project simply can’t be avoided.