Mobile marketing is a dynamic field. From year to year, best practices and norms in mobile performance marketing can change dramatically — sometimes fundamentally — as technologies, user behaviors, and consumer sentiments towards ads change. This resource is meant to be a living document that provides guidance on approaching mobile marketing in the year 2020 across all aspects of advertising.
While this document may not go into deep detail on every single component of running mobile advertising campaigns, the goal of this article is to provide a general overview of the entire ecosystem as well as to lead the reader toward more in-depth resources on specific topics via the annotated, in-content links. This guide to mobile marketing is presented by Mobile Dev Memo and Eric Seufert.
Table of Contents
- Mobile marketing traffic sources: Self-Attributing Networks
- Mobile marketing traffic sources: Ad networks
- Mobile marketing traffic sources: Programmatic
- Mobile marketing ad formats
- Mobile marketing event optimization and algorithmic optimization
- ROAS optimization in mobile marketing campaigns
- Trends in mobile marketing analytics
- ROAS calculation for mobile marketing campaigns
- The importance of CPI for mobile marketing campaigns
- IAP LTV calculation for mobile marketing campaigns
- Subscription LTV calculation
- Ads LTV calculation
- View Through Attribution
- Click Attribution
- Mobile Marketing Fraud Prevention
- Incrementality measurement in mobile marketing
- Media Mix Modeling for mobile marketing
- Last-click vs. people-based attribution models in mobile marketing
Mobile marketing traffic sources: Self-Attributing Networks
In 2020, it’s hard to dispute that Google and Facebook (the powerful “advertising duopoly”) own the majority of mobile ad spend. Along with Pinterest, Snapchat, and Twitter, the Self-Attributing Networks (so called because they operate their own ad attribution) tend to make up a majority (if not a vast majority: 70%+) of any given advertiser’s spend on mobile.
Google and Facebook specifically capture this level of budget because they have the most complete and exhaustive data against which to target, which means they tend to outperform other channels. But Google and Facebook have also worked to route the vast majority of advertiser spend into “event-driven” campaigns, with the platforms optimizing campaigns automatically on the basis of users completing different events (more detail: how event-driven campaign optimization works). The particulars of how this impacts campaign management will be discussed in the Campaign Management section.
With Google’s UAC campaign format, advertisers can’t deliberately target the specific placements into which their impressions are bought; that optimization is handled entirely by Google’s own algorithms. Facebook’s algorithmic, event-based campaign types, such as VO and AEO, offer more freedom: advertisers can pick the placements in which they want their ads to run.
Mobile marketing traffic sources: Ad networks
A secondary source of mobile traffic, primarily for mobile games advertisers, is the set of ad networks that broker SDK traffic between publishers and advertisers. In this category, it’s really Applovin, UnityAds, Vungle, and ironSource that own the majority of spend. These companies are investing heavily into infrastructure that can help them compete with Facebook and Google on the basis of event-driven campaign optimization, but from a targeting perspective, they have access to a much more limited breadth of data on which to base targeting decisions. Working with an ad network tends to involve working closely with an account management representative who handles low-level campaign management work for the advertiser, although all of the aforementioned networks have self-service portals that allow advertisers to directly manage their own spend.
Ad networks will generally volunteer to create ad creative for advertisers running large enough budgets. Since the top ad networks specialize primarily in rewarded video ads, this can be a compelling benefit, as video ads can be expensive and time consuming to produce. And for the networks that manage playable ad inventory, advertisers might also be able to secure help with playable ad production. Additionally, depending on the size of the advertiser’s budget, some ad networks will offer advertising credits to publishers on the basis of the amount of inventory they serve with that network — that is, if the publisher is also an advertiser, the company can get an advertising credit related to how much money they generate for the network as a publisher. These types of arrangements aren’t usually made public, so terms depend on how important of a client the publisher is to the network.
Mobile marketing traffic sources: Programmatic
Programmatic inventory is an exciting growth area for mobile marketing in 2020: mobile programmatic advertising spend is growing rapidly, and many advertisers are looking to programmatic as a means of diversifying their advertising channels away from the Self-Attributing Networks, especially Google and Facebook.
Advertisers can buy exchange inventory via a mobile DSP, and it’s becoming increasingly popular for advertisers to build their own in-house bidders to connect directly to in-app traffic on exchanges like MoPub, Google Ad Manager, Oath, Index, Smaato, etc. The mechanics of working with DSPs and building proprietary bidders go beyond the scope of this article, but this post on QuantMar explains DSPs within the context of mobile in more detail: What is a DSP?
Mobile marketing ad formats
In 2020, video is the dominant ad format on mobile, although other formats like full-screen interstitials and banners are still viable. On Instagram, for instance, while videos can be placed inside Stories to great effect, the in-feed static square placement still represents a healthy portion of overall platform spend. Advertisers need to incorporate a wide array of formats into their portfolios in order to operate efficiently at scale: while video may be the single largest format as a percentage of marketing spend for many mobile marketers, other formats offer opportunities for increasing reach.
One interesting format for mobile games that has gained traction in the past few years is the playable ad: an HTML5 mini-game ad that launches into full screen and allows the user to play a small portion of the game being advertised. Playables tend to drive strong monetization since users that install from playables have already experienced the game that was advertised to them. Playable and video ads require the most content inputs and thus are generally the most expensive and time consuming ads to create, which is why many advertisers continue to produce static image ad creatives.
Mobile marketing event optimization and algorithmic optimization
As mentioned previously, the mobile marketing landscape in 2020 has moved dramatically in the direction of event-based and algorithmic advertising optimization. What this means is that an app install is no longer the conversion event that advertisers are trying to optimize their campaigns towards; advertisers are identifying events within their apps, such as payments, and telling their traffic partners to find the traffic that maximizes those. When advertisers set bids, they bid against the completion of those in-app events.
This fundamentally changes the type of traffic that advertisers receive: event-based campaigns, especially when those campaigns are optimized toward monetization events, tend to deliver lower volumes of higher-monetizing installs. The idea with this is that the advertiser is bidding against a monetization event (or engagement event) that should proxy very well for a high magnitude of total monetization: the advertiser is willing to bid more for better-monetizing users.
Algorithmic optimization is the way the traffic partners — and for event-based campaigns, this is mostly Facebook and Google — determine how to target users that are likely to complete the events that campaigns are optimized for. Facebook and Google do this by assigning very many different qualifying variables to users on the basis of their behaviors and demographic data; when an advertiser’s campaign has acquired enough users that have triggered the in-app event, Google and Facebook look for patterns across those qualifying variables and try to parse out a profile that is common to them. The platforms then look for more users that look like that profile.
This is similar in practice to how Facebook allows advertisers to create “lookalike audiences” from custom audiences that they upload to the platform. With event-based optimization, Facebook has essentially turned lookalike audience creation into an automated, continuous process that it manages on the advertiser’s behalf.
With event-optimized campaigns, it’s critically important for the advertiser to experiment thoroughly with different in-app events to find those that serve as the best proxies for overall monetization. Some events are good signals of strong early-stage monetization, but when these aren’t vetted properly, especially with the Value Optimized (VO) bidding strategy, encouraging early-stage ROAS can plateau.
ROAS optimization in mobile marketing campaigns
With the event-driven, algorithmic paradigm for mobile marketing capturing more and more share of overall advertising spend, the general approach to measurement has shifted towards one that focuses exclusively on return-on-ad-spend (ROAS) over a set timeline. This compares to the approach that used to reign within mobile marketing, which was oriented more towards lifetime value (LTV): advertisers would calculate the average LTV of users from a particular source or with a particular profile and then set advertising bids based on that.
Since advertising costs at the user level trend up with algorithmic, event-driven campaigns (because advertisers are paying for down-funnel event conversions inside the app), LTV measurement has become subordinate from a strategic perspective to time-bound ROAS measurement, which lets the advertiser know — usually very early on — how well a campaign is performing in terms of spend recoup. While these concepts — LTV vs. CPI (cost per install) and ROAS — are effectively the same thing, the ROAS workflow generally attempts to make campaign performance evaluations much earlier (as in, based on Day 1 performance of cohorts).
Again, an LTV vs. CPI evaluation is equivalent to setting a ROAS standard; what differs is the timeline over which ROAS is used as a signal that a campaign is or isn’t working. In the ROAS-focused optimization approach, unit economics aren’t framed in strictly end-state terms (“what is the user’s ultimate LTV at some point?”); instead, recoup of ad spend is measured on a continuous basis, and proxy waypoints are used to gauge whether a cohort will break even by some date.
Analytics and Analysis for mobile marketing campaigns
Trends in mobile marketing analytics
As recently as a few years ago, fully cloud-based, end-to-end mobile analytics solutions were popular with app developers — developers wanted analytics solutions that could quickly be integrated into their products to track in-app behaviors and metric benchmarks, such as retention and cumulative monetization. These platforms were easy to implement but difficult to customize: because user behavior and in-app data were segmented into a proprietary data format, developers had a hard time uniting that data with advertising funnel data, and so they treated advertising data and in-app data as two separate data sets.
That is changing now. Many developers, even small ones, are finding that logging in-app events, running some sort of aggregation ETL on those logs, and then storing the aggregates in Redshift or BigQuery is preferable to outsourcing all data collection and aggregation to a third party. Now, it’s common to see developers using some combination of data storage + data visualization software as their core product analytics suite, even at impressive scale. The benefit of this type of custom solution is that user data can easily be united with advertising data to form a more holistic view of the user funnel, from first ad view through to user churn.
This development, combined with the event-based bidding strategies that were described earlier, has led to a shift in thinking around mobile marketing analytics: specifically, advertisers are moving away from the prescriptive LTV paradigm (e.g., deriving a singular LTV metric for classes of traffic) that was canon until recently and towards a Return On Ad Spend (ROAS)-centric evaluation of campaign performance. This approach is conceptually similar but functionally different from the LTV / CPI standard that is used to measure traffic quality.
ROAS calculation for mobile marketing campaigns
ROAS is a time-bound measure of how much of a campaign’s spend has been recouped by the users that the campaign acquired. For instance, if I spend $100 on media today and the users I acquire with that media contribute $25 to my product over the next 3 days, then my Day 3 ROAS is 25%.
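The calculation itself is trivial; a minimal sketch of the example above (the function name and numbers are illustrative, not from any specific tool):

```python
def roas(spend, cohort_revenue):
    """Return-on-ad-spend: the share of a campaign's spend recouped
    by the cumulative revenue of the users it acquired."""
    if spend <= 0:
        raise ValueError("spend must be positive")
    return cohort_revenue / spend

# The example from the text: $100 of media spend, $25 of revenue by Day 3.
day3_roas = roas(100.0, 25.0)  # 0.25, i.e. Day 3 ROAS of 25%
```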
As detailed earlier, a ROAS-centric campaign optimization strategy sees the marketing team setting ROAS targets with their acquisition channels and optimizing their campaigns against those. In order to determine those early-stage ROAS targets, the team needs to understand how cohorts deliver ROAS over time; this process of mapping a ROAS curve to cohorts simply measures daily ROAS levels and, as needed, projects them forward. Functionally, this is similar to mapping an LTV curve onto cohort performance, but the difference is that the specific value of cumulative revenue delivered by a cohort isn’t important, but rather the relative value of that revenue to acquisition costs. Since acquisition volumes and traffic quality can be highly sensitive to traffic prices, a ROAS-centric approach provides better performance guidance to media buyers when things change than an LTV / CPI approach.
Instrumenting and tracking ROAS at the cohort level doesn’t require any additional data integration beyond what an advertiser would already do to track LTV by cohort: the fundamental inputs of revenue and cost are the same. The difference with ROAS tracking is that what is being tracked for any given cohort over time is the derived return value that is comprised of cumulative revenue divided by spend; that curve projection should then instruct the marketing team as to when the campaign will break even.
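One hedged sketch of that curve projection, assuming (purely for illustration) that cumulative ROAS grows roughly logarithmically with cohort age — real curve shapes vary by product and should be fit empirically:

```python
import numpy as np

def project_breakeven(days, roas_values, horizon=365):
    """Fit a simple log-time curve, ROAS ~ a + b*ln(day), to observed
    cohort ROAS values, then return the first projected day on which
    ROAS reaches 1.0 (breakeven), or None if the cohort isn't projected
    to recoup its acquisition cost within the horizon."""
    days = np.asarray(days, dtype=float)
    b, a = np.polyfit(np.log(days), np.asarray(roas_values, dtype=float), 1)
    for day in range(int(days[-1]) + 1, horizon + 1):
        if a + b * np.log(day) >= 1.0:
            return day
    return None

# Hypothetical cohort: 10% recouped by Day 1, 25% by Day 7, 35% by Day 14.
breakeven_day = project_breakeven([1, 7, 14], [0.10, 0.25, 0.35])
```

With the hypothetical inputs shown, the fitted curve never reaches 100% recoup within a year, which is exactly the kind of early warning this workflow is meant to surface.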
The importance of CPI for mobile marketing campaigns
Given the elevated importance of ROAS optimization in campaign optimization, the Cost Per Install metric (CPI) has become less relevant to mobile marketers: again, with the ROAS-centric approach, exact values (of per-user cost or lifetime revenue) are less meaningful than relative recoup rates. CPI is still an interesting data point to track when operating campaigns since it provides insight into deliverability and ad conversions, but from a performance standpoint, it’s more efficient to measure campaigns against a ROAS target than by LTV / CPI parity.
Also, CPIs are more tightly coupled with install volumes than underlying recoup curves: decreasing bids by 10% (and thus CPIs) could result in a much larger reduction in acquisition volumes, whereas recoup curves, even when they fundamentally change with bid changes, can be tracked more easily over time than LTV / CPI dynamics.
IAP LTV calculation for mobile marketing campaigns
The cornerstone of many app businesses is IAP revenue — as such, a huge volume of literature exists on the web around deriving IAP LTV estimates for mobile apps. One current trend across mobile app advertisers in estimating LTV is to break the LTV timeline up into segments and combine models across segments as user behaviors in those segments change. Put another way: early users behave much differently than users that have retained for a year, so advertisers have begun to treat those different life stages as separate and build LTV models specifically for them.
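The segmented approach described above can be sketched as follows. This is a deliberately simplified two-segment model — observed early revenue plus a modeled "veteran" tail — and every number in the example is hypothetical:

```python
def segmented_ltv(early_cum_revenue, early_days, late_daily_revenue,
                  late_daily_retention, horizon_days):
    """Two-segment LTV sketch: use observed cumulative revenue for the
    early life stage, then model the tail separately, with retained
    'veteran' users contributing a steady daily amount and churning at
    a constant daily rate."""
    ltv = early_cum_revenue
    survival = 1.0  # share of the Day-`early_days` survivors still active
    for _ in range(early_days + 1, horizon_days + 1):
        survival *= late_daily_retention
        ltv += survival * late_daily_revenue
    return ltv

# Hypothetical: $2.00 observed through Day 30, then $0.05/day from veterans
# who retain at 99% day-over-day, projected out to Day 365.
ltv_365 = segmented_ltv(2.00, 30, 0.05, 0.99, 365)
```

In practice each life stage would get its own fitted model rather than constants, but the structure — separate models stitched together at a boundary day — is the point.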
As stated earlier, the LTV vs. CPI paradigm for measuring campaign effectiveness has mostly given way to a ROAS-centric measurement approach, but that doesn’t mean that LTV is not still central to most advertisers’ measurement programs. The difference is that LTV may not be labeled as such: an advertiser might talk about “Day X ROAS” as a relative value that takes into account the estimated Day X LTV of a cohort as well as its cost.
Subscription LTV calculation
With both Apple and Google incentivizing developers to monetize via subscriptions by decreasing the platform fee to 15% on subscription revenue after the first year, subscription mechanics have become incredibly popular as a means of building regular, recurring revenue. Additionally, since subscriptions tend to be floated to users early on in the user experience — such as with subscription streaming apps and dating apps — the recoup periods for subscriptions tend to be relatively short.
Because it has become such a common monetization technique, subscription LTV models have generated a substantial amount of interest. Fortunately for mobile marketers, subscription LTV modeling is generally more straightforward than IAP LTV modeling, and the academic literature around subscription LTV models goes back much further, to the time of mail-in catalogs. Generally speaking, the main components of subscription LTV models are conversion probability (likelihood of any user purchasing the subscription) and renewal / churn probability (likelihood of the user ending the subscription or renewing it after it expires).
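The two components named above combine into a simple per-user estimate. The sketch below assumes a constant per-period renewal probability (a geometric model) and a flat platform fee; it ignores the first-year 30% vs. post-first-year 15% fee split for simplicity, and all the example numbers are hypothetical:

```python
def subscription_ltv(p_convert, price_per_period, p_renew, platform_fee=0.30):
    """Per-install subscription LTV sketch: conversion probability times
    net price per period times the expected number of periods, where each
    period renews independently with probability p_renew (geometric model,
    so expected periods = 1 + p + p^2 + ... = 1 / (1 - p))."""
    expected_periods = 1.0 / (1.0 - p_renew)
    return p_convert * price_per_period * (1.0 - platform_fee) * expected_periods

# Hypothetical: 5% of installs subscribe at $9.99/month, 80% monthly renewal.
ltv = subscription_ltv(0.05, 9.99, 0.80)  # ≈ $1.75 per installed user
```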
Ads LTV calculation
As with subscriptions, in-app advertising has increasingly created viable opportunities for revenue generation on mobile, meaning advertisers are searching for reliable ways to measure and project ads LTV estimates. Part of being able to do this with any sense of accuracy is getting valid, impression-level revenue data: luckily, mobile mediation platforms have started offering clients revenue data at the level of the impression, which helps advertisers aggregate ads revenue around dimensions that they can use for targeting.
Roughly speaking, ads LTV models look similar to IAP LTV models: users are segmented based on attributes that make sense for the purpose of marketing targeting, and cumulative revenue is tracked over time for those segments. With ads, though, the number of ads any given user engages with in a session has a massive impact on total lifetime revenue, so an ads LTV model might use the following inputs to make an estimate for a given segment: average sessions per day × average ad views per session × average CPM divided by 1,000 (since CPM is priced per thousand impressions), projected forward based on that segment’s retention profile.
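The formula above can be sketched directly; the segment parameters and the power-law retention curve in the example are hypothetical illustrations, not benchmarks:

```python
def ads_ltv(sessions_per_day, ads_per_session, cpm, daily_retention, horizon_days):
    """Ads LTV sketch: CPM is revenue per 1,000 impressions, so each ad
    view earns cpm / 1000; expected daily revenue is weighted by the
    probability the user is still retained on that day."""
    revenue_per_active_day = sessions_per_day * ads_per_session * (cpm / 1000.0)
    return sum(revenue_per_active_day * daily_retention(day)
               for day in range(1, horizon_days + 1))

# Hypothetical segment: 3 sessions/day, 4 ad views/session, $12 average CPM,
# with a power-law retention curve r(d) = 0.5 * d^-0.3, projected to Day 180.
ltv = ads_ltv(3, 4, 12.0, lambda d: 0.5 * d ** -0.3, 180)
```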
Mobile Marketing Attribution and Measurement
Mobile attribution and measurement is incredibly important for marketers. Many large advertisers use composite marketing campaigns that comprise many different formats, including television advertising and out-of-home (in addition to mobile direct response). In order to properly measure the effectiveness of these varied forms of advertising, as well as to ensure that all advertising spend produces incremental revenue (that is, revenue that wouldn’t otherwise have been generated from organic or word-of-mouth installs), large advertisers are building sophisticated measurement models and experimenting rigorously with the ways in which they attribute app installs.
One aspect of mobile marketing that receives a significant amount of attention is view-through attribution, and the right window to use for attributing installs after a video view. View-through attribution refers to the time period over which an app install might be attributed to viewing (but not clicking on) a video ad. View-through attribution is an important part of multi-channel mobile marketing, since video is such a pervasive and effective means of reaching users on mobile. But viewing an ad is not the same thing as engaging with an ad, especially since most video platforms consider a “video ad view” event to only require actually watching the ad for a very short period of time (in some cases, just 3 seconds).
In order to only pay for installs that can reasonably be considered the result of an ad view, many advertisers default their attribution settings to 1 day for view-through attribution: this means that an app install will be attributed to a video ad view only if the user installs the app within 24 hours of the view. Longer timelines for view-through attribution will almost certainly inflate app install counts from video ad campaigns.
Similar to view-through attribution, click attribution describes the assignment of app installs to ad clicks generated from a specific ad campaign. Most direct response mobile advertising strategies are fundamentally organized around “last click” attribution — which awards the credit for an app install to the user’s last click interaction with an ad — so click attribution is a critically important part of measurement on mobile. Without proper click attribution in place, an advertiser will have a difficult time measuring the true performance and impact of campaigns.
A common window for click attribution on mobile is 3 days — that is, credit for an install is awarded to a campaign click within three days. This timeline is fairly extended, and some advertisers have experimented with shorter windows, but there’s a risk in using a shorter window: if clicks don’t produce attributed installs, advertising platforms may de-prioritize a campaign’s ads in a waterfall, and the campaign’s delivery will subsequently suffer.
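The window logic for both attribution types reduces to a timestamp comparison. A minimal sketch using the windows mentioned above (1 day for views, 3 days for clicks — these are common defaults, not universal settings):

```python
from datetime import datetime, timedelta

# Typical windows discussed in the text; actual settings vary by advertiser.
VIEW_WINDOW = timedelta(days=1)
CLICK_WINDOW = timedelta(days=3)

def is_attributable(touch_time, install_time, touch_type):
    """Return True if an install falls inside the attribution window for
    the ad interaction ('click' or 'view') that preceded it."""
    window = CLICK_WINDOW if touch_type == "click" else VIEW_WINDOW
    return timedelta(0) <= install_time - touch_time <= window

touch = datetime(2020, 3, 1, 12, 0)
in_window = is_attributable(touch, datetime(2020, 3, 3, 12, 0), "click")   # True
expired = is_attributable(touch, datetime(2020, 3, 3, 12, 0), "view")      # False
```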
Mobile Marketing Fraud Prevention
In an attempt to help advertisers capture more yield, the largest attribution providers have rolled out advertising fraud detection suites, which advertisers can use to detect fraudulent activity in their campaigns and either reject attribution for fraudulent installs or claw back spend on fraudulent traffic.
Attribution fraud is the most common type of mobile advertising fraud, and it takes many different forms. Here is a partial list of fraudulent advertising schemes that exist on mobile:
- Click spamming: a network or publisher generates fake clicks that are sent to an attribution service with known mobile device identifiers (such as the IDFA) to take credit for any organic installs made by those actual devices;
- Ad stacking: multiple ads are rendered in the same placement, with only one actually being visible; if the user clicks on the visible ad, that click event is captured for all of the ads that were rendered;
- Auto redirecting: some target “site” (the app being advertised) is hard-coded into an auto redirect in an ad placement or, most often, on a mobile website. When a user triggers the redirect, they are taken to the app’s listing in the App Store or Google Play without having actually clicked on an ad. If that user subsequently installs the app, credit is given to the redirect;
- Creative misrepresentation: an ad placement is placed deceptively so as to cause the user to click on it without intending to do so (e.g., the ad placement is made too small to recognize and placed next to a button);
- Metric smoothing: fraudulent traffic is hidden in metrics reporting by allocating the very poor performance from one fraudulent source across multiple sources. This deception “spreads” the poor performance of fraudulent traffic across many different publishers, making it look like many publishers performed somewhat poorly rather than one publisher performed egregiously poorly.
Advertisers should consider what level of exposure they believe they have to fraudulent traffic and price that risk accordingly. For most advertisers, fraudulent traffic isn’t as large of a concern as attribution providers (which have a vested interest in scaremongering about fraud, since they sell fraud detection tools) claim. For most advertisers, keeping a close eye on performance metrics and only working with the largest, most credible advertising channels should provide sufficient protection against serious fraud.
Incrementality measurement in mobile marketing
Incrementality is the measurement of how various activities contribute to incremental, or new, revenue. Think of this as a means of measuring the impact of advertising from the perspective of what would have happened had the campaigns not been run — would those revenues have found the product anyway? In 2020, with some advertisers having already reached billions of people and commanding large volumes of intent-driven brand search, incrementality is an important consideration: advertisers want to know if the ROI they are measuring is illusory or if their ad budgets are genuinely producing new revenue.
One approach to measuring incrementality that is gaining popularity is the Ghost Ads methodology — an advertiser runs campaigns on programmatic sources as normal but keeps track, through bid logs, of where a “ghost ad” for a public service announcement would have been run as a comparison group and then tracks how many of those users ultimately install the app. The approach requires a fair amount of work in setting up, but it can provide advertisers with an ongoing, systematic measurement of revenue lift from advertising.
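The lift calculation at the heart of a ghost-ads style test is simple once the two groups exist; the hard part is the logging infrastructure. A sketch with hypothetical group sizes and install counts:

```python
def incremental_lift(exposed_installs, exposed_users, ghost_installs, ghost_users):
    """Relative lift sketch for a ghost-ads style test: compare the install
    rate among users who saw real ads to the rate among users for whom a
    'ghost ad' was logged but never shown (the counterfactual group)."""
    treated_rate = exposed_installs / exposed_users
    control_rate = ghost_installs / ghost_users
    if control_rate == 0:
        return float("inf")  # no organic baseline: all installs are incremental
    return (treated_rate - control_rate) / control_rate

# Hypothetical: 1.2% install rate among exposed users vs. 0.8% in the
# ghost group — the ads drove a 50% relative lift in installs.
lift = incremental_lift(1200, 100_000, 800, 100_000)  # 0.5
```

A lift near zero would suggest the measured ROI is largely illusory — the installs would have happened anyway.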
Some other approaches to measuring incrementality are less robust, like using holdout groups from different cities to measure revenue lift from advertising (there are serious problems with comparing activity between cities) and start-stop ad campaigns, where ads are paused and revenue decay is measured (brand awareness can decay more slowly than immediate ad-driven revenues), but in general it’s a good idea for large advertisers to be thinking about incrementality and how advertising spend impacts their product’s growth and revenue.
Media Mix Modeling for mobile marketing
Media mix modeling is another topic within mobile marketing that is heating up as large advertisers move beyond direct response and need systems and methods for measuring advertising impact. A media mix model attempts to measure the impact of a specific channel on overall marketing performance when multiple different types and formats of advertising campaigns are being run. Consider a company that is running radio ads, TV ads, and direct response performance ads on Facebook — how can they know if they’d be better off not running radio ads, or what the budget distribution should be across those three channels? A media mix model is a statistical tool to help an advertiser answer such questions.
One media mix model approach is to try to isolate the performance of each channel within the broader mix to estimate what overall performance would have been without it — that is, what the marketing picture would have looked like if that particular channel’s budget was $0. By building a picture of every possible combination of n-1 channels, the marketing team can understand what contribution any particular channel had and can use that understanding to allocate budget on a forward basis.
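A heavily simplified sketch of that idea: fit revenue as a linear function of per-channel spend, then read each channel's estimated contribution as its coefficient times its average spend — roughly the revenue that would disappear if that channel's budget went to $0 under the fitted model. Real media mix models also handle adstock/carryover, saturation, and seasonality, all of which this toy version (with simulated data) ignores:

```python
import numpy as np

def channel_contributions(spend, revenue):
    """Media mix sketch: fit revenue ~ baseline + sum(beta_i * spend_i)
    with ordinary least squares, then estimate each channel's contribution
    as its coefficient times its average spend."""
    X = np.column_stack([np.ones(len(revenue)), spend])  # intercept + channels
    betas, *_ = np.linalg.lstsq(X, revenue, rcond=None)
    return betas[1:] * spend.mean(axis=0)

# Hypothetical: 52 weeks of spend on radio, TV, and Facebook, with revenue
# simulated so that Facebook has the strongest true per-dollar effect.
rng = np.random.default_rng(0)
spend = rng.uniform(10, 100, size=(52, 3))
revenue = 50 + spend @ np.array([0.5, 1.2, 2.0]) + rng.normal(0, 5, size=52)
contribs = channel_contributions(spend, revenue)  # largest value: Facebook
```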
Last-click vs. people-based attribution models in mobile marketing
The same market forces that are driving interest in incrementality measurement and media mix modeling are pushing companies to think about “people-based” attribution: that is, thinking about advertising not on a cohort basis but at the level of the individual user and how those users came to install an app or make a purchase after being advertised to via multiple media formats.
People-based attribution starts from the same premise as media mix modeling: some mix of media formats, which may overlap and reach the same user multiple times, can be optimized to produce better results than relying on just one channel. But where media mix modeling operates on aggregates, people-based attribution flips that perspective, identifying each user and “following” them across their advertising journey until they adopt the product: which format should be exposed to the user first? Last? How many times should the user be reached with an ad to optimize conversion cost? People-based attribution helps advertising teams think through and model the best possible combination of ad views to acquire relevant users.
About Eric Seufert
Eric Benjamin Seufert is a consultant, media strategist, and quantitative marketer who has spent his career working for transformative consumer technology and media companies, including Skype and Rovio. Eric runs Mobile Dev Memo, a trade publication dedicated to advertising and freemium strategy on mobile, and QuantMar, a knowledge-sharing site for performance marketers. Additionally, Eric authored the book Freemium Economics, which was published by Elsevier in 2014. Eric holds an MA in Economics from University College London.