Surviving the mobile marketing winter: the Flight to Quality

For a primer on this topic, see: Surviving the mobile marketing winter

Faced with deteriorating market conditions — per my introductory article in this series, Surviving the mobile marketing winter — marketing teams should be recalibrating their workflows such that their measurement models best reflect recent changes in consumer behaviors and in-product engagement. This is often accomplished by paring ad spend back considerably, re-establishing early return-on-ad-spend standards, and scaling accordingly. What I don’t broach in the first article is an additional component of this renewal of the marketing intelligence apparatus: a flight to quality.
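To make the "early return-on-ad-spend standards" point concrete, below is a minimal Python sketch of how a team might gate scaling decisions on an early ROAS bar. The campaign figures, the day-7 window, and the 1.0x threshold are invented for illustration, not prescriptions.

```python
from dataclasses import dataclass


@dataclass
class Campaign:
    name: str
    spend: float        # ad spend over the measurement window (hypothetical)
    revenue_d7: float   # revenue attributed within 7 days of install (hypothetical)


def early_roas(c: Campaign) -> float:
    """Return on ad spend over the early (day-7) window."""
    return c.revenue_d7 / c.spend if c.spend else 0.0


def scale_decision(c: Campaign, threshold: float = 1.0) -> str:
    """Scale campaigns that clear the early ROAS bar; pare back the rest."""
    return "scale" if early_roas(c) >= threshold else "pare back"


campaigns = [
    Campaign("social_video_us", spend=50_000, revenue_d7=62_000),
    Campaign("display_tier2", spend=20_000, revenue_d7=9_000),
]

for c in campaigns:
    print(f"{c.name}: D7 ROAS {early_roas(c):.2f} -> {scale_decision(c)}")
```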

Adapting to economic weakness or a mid-term operating shock such as Apple’s App Tracking Transparency (ATT) privacy policy should inspire in a marketing team a sharper dedication to attenuating risk. Risk exists within multiple instruments throughout the marketing machinery and workflow:

  • Data risk. The data which informs measurement models is not representative of new cohorts;
  • Model risk. The measurement model cannot be relied upon to provide helpful, meaningfully predictive return-on-ad-spend (ROAS) estimates or bid targets for various audience segments. Assuming the data on which a model is trained is credible (that is: data risk doesn’t apply), model risk could result from inadequacy of the model (e.g., poor model choice), a lack of appropriate dimensionality (e.g., using the outputs of a model fed with a global dataset to make bids on specific geographies; see the sketch after this list), or data sparsity (see this article on predictive LTV models for more detail). It should be noted that model risk is a particularly pernicious operational headache: when product teams begin to question the validity or robustness of predictive marketing models, all sorts of political tensions can emerge that might quickly escalate into dysfunction;
  • Media mix risk. The composition of media across some set of channels, formats, and commercial objectives no longer supports the company’s broader strategy or its financial sensitivities related to cash flow.
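To illustrate the dimensionality facet of model risk, the short Python sketch below compares a pooled, "global" ROAS estimate against per-geography performance. The geographies, ROAS figures, and spend shares are invented for illustration.

```python
# Hypothetical observed ROAS by geography (revenue per $1 of spend)
geo_roas = {"US": 1.4, "DE": 1.1, "BR": 0.5, "IN": 0.3}
# Hypothetical share of spend (and of training data) by geography
geo_spend_share = {"US": 0.55, "DE": 0.20, "BR": 0.15, "IN": 0.10}

# A model trained on the pooled, global dataset effectively learns the
# spend-weighted average ROAS across geographies.
global_estimate = sum(geo_roas[g] * geo_spend_share[g] for g in geo_roas)
print(f"Global model ROAS estimate: {global_estimate:.2f}")

# Using that single estimate to set geo-level bids misprices every geography:
# it overbids the weak geographies and underbids the strong ones.
for geo, roas in geo_roas.items():
    print(f"{geo}: actual {roas:.2f}, error if bid from global estimate {global_estimate - roas:+.2f}")
```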

I mostly cover the first two risks in the previous article in the series (and in this deck), but the third risk is worth unpacking. In The perilous mythology of Brand Marketing for digital products, I present a taxonomy of digital marketing efforts:

  • Direct Response marketing seeks to catalyze a consumer’s interaction with a product immediately;
  • Delayed Response marketing seeks to catalyze a consumer’s interaction with the product as soon as possible, though not necessarily immediately;
  • Brand marketing seeks to increase the consumer’s likelihood of engaging with a product when next presented with the opportunity to do so.

The difference between these tactics — Direct Response, Delayed Response, and Brand — is rooted in measurability and the immediacy of impact. A point I’ve made a number of times on this site is that brand marketing is not antithetical to performance marketing: brand marketing is a tactic and performance marketing is a measurement framework and broad commercial strategy. Performance marketing has no operational diametric opposite; I’ve never encountered a self-described ‘non-performance marketing’ team. Brand marketing tactics and campaigns should be measurable, but the tools used to measure them are unique to that tactic and fit into a broader mosaic of marketing measurement machinery.

But the goal of brand marketing is often only indirectly related to generating revenue: to instill brand awareness in an audience segment such that it will prefer that product when confronted with the need or opportunity to buy it. This differs from the direct response and delayed response tactics, which incite an inclination to buy the product either immediately or at the next possible opportunity.

It is this lag and this amorphous contribution to revenue that cause brand spend to be cut first when an advertiser is faced with a weakening economy or some other market disruption: brand spend is less of a near-term revenue driver, and its efficacy is harder to measure with immediacy, requiring longer timelines and higher levels of speculative ad spend to analyze. So when economic uncertainty looms, a marketing team might engineer a flight to quality simply by shifting budget away from brand and towards direct response efforts.

Note that this media mix risk can exist when the other two risks don’t: if the data and the model used to predict performance are both credible and reliable, and the model indicates that overall advertising performance is deteriorating, then the media mix must change to prioritize the most immediately impactful ad spend. This decision might make sense for a few reasons:

  • Cash flow is a consideration and the company must prioritize ad spend that delivers immediate-term revenue;
  • The company needs to optimize ad spend with a high degree of flexibility and with a rapid cadence;
  • The company wants to concentrate its ad spend in the highest-scale channels with the most sophisticated tools so as to reduce operational complexity and overhead.

Note that a flight to quality doesn’t merely imply a shift of ad spend from brand in the direction of direct response. Per the diagram below, a flight to quality could also necessitate a shift of ad spend within a particular marketing tactical category, such as moving up the “quality stack” within direct response to first-tier social networks.

Implicit in a flight to quality is the adoption of what I’ve called the waterfall method of budget allocation. In the waterfall method, the largest and most sophisticated channels are saturated before budget is allocated to smaller, less refined channels. Given this dynamic, it’s possible that a flight to quality results in the marketing budget increasing for some channels even as the overall budget shrinks and concentrates: brand marketing spend shifts to delayed or direct response, and the largest channels grow not only in their share of the advertising budget but also in absolute spend.
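As a rough illustration of the waterfall method, and of how spend can concentrate in the top channels even as the total budget contracts, the Python sketch below compares a proportional allocation of a larger budget against a waterfall allocation of a smaller one. The channel names, saturation capacities, and budget figures are all hypothetical.

```python
def waterfall_allocate(budget: float, channels: list[tuple[str, float]]) -> dict[str, float]:
    """channels: (name, saturation capacity) pairs, ordered best-first.
    Each channel is saturated before budget spills to the next."""
    allocation, remaining = {}, budget
    for name, capacity in channels:
        spend = min(capacity, remaining)
        allocation[name] = spend
        remaining -= spend
    return allocation


def proportional_allocate(budget: float, channels: list[tuple[str, float]]) -> dict[str, float]:
    """Spread budget across channels in proportion to their capacity."""
    total_capacity = sum(cap for _, cap in channels)
    return {name: budget * cap / total_capacity for name, cap in channels}


channels = [
    ("tier1_social", 400_000),   # largest, most sophisticated channel
    ("search", 250_000),
    ("tier2_networks", 150_000),
    ("long_tail", 100_000),
]

# Before the flight to quality: a larger budget spread across all channels.
print(proportional_allocate(600_000, channels))
# After: a smaller budget concentrated via the waterfall. The top channel's
# absolute spend grows even though the overall budget has shrunk.
print(waterfall_allocate(450_000, channels))
```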
