When faced with degrading marketing performance metrics, one common rationalization that mobile marketing teams provide is that a given channel (source of installs) has been “saturated”: install volumes drop because everyone for whom the app is relevant has already been reached, and ad impressions are being shown to the same people over and over again.
This defense is plausible on its surface, especially in the later stages of the mobile marketing lifecycle, but there are two problems with it. The first is that the largest mobile advertising channels reach hundreds of millions of people per day; all but the very largest advertisers would find it impossible to saturate these channels to the point where no one remains who could be shown an ad for the first time. So while most campaign performance does degrade over time, blaming “channel” saturation is evasive: channels can’t be saturated, but segments of relevant users within a channel’s reach can be.
If a campaign’s performance is degrading significantly, it’s more likely because either the app doesn’t have broad enough appeal to create a large total addressable market or the ad creatives being used aren’t resonating with the app’s target audience than it is because the ad channel has been thoroughly mined for users. Drawing this distinction may seem like a semantic quibble, but it’s not: drastic performance degradation has nothing to do with the channel (assuming it is a large, mainstream one), and so citing “channel saturation” is an exercise in blame-shifting that doesn’t address the real problem with the campaigns.
The second problem with the “channel saturation” defense is that it fundamentally ignores the mechanics of mobile advertising channels. No ad platform will expose a product’s ad to every single user it reaches. Ad networks know that the CPMs they can extract from advertisers depend on the “quality” (relevance, monetization potential) of the people to whom they expose ads, whether the desired result of that ad exposure is some action (click, install, in-app activity, etc.) or mere brand reach. So ad networks are incentivized to show the most relevant ads that they can, even when an advertiser is bidding on CPM.
When an advertiser is bidding on CPA (for instance, using Facebook’s Value Optimizer campaign option to target the users most likely to make in-app purchases), the ad network uses its vast trove of relevance and historical performance data to pinpoint users who will be most receptive to an ad. If that particular ad doesn’t perform (in the VO context: users don’t install the app, or they install but don’t make purchases), the ad network will de-prioritize it in favor of ads that do perform and give it very few impressions. Even a bid increase (or, again in the case of VO campaigns, a budget increase) is unlikely to resuscitate the campaign, since the budget only gets spent when the ad resonates enough to drive the action being bid against.
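The allocation logic described above can be sketched in simplified form. The following is an illustrative toy model, not any network’s actual auction algorithm; the ad names, bids, and predicted action rates are invented for the example. The core idea is that a network converts every bid into an expected-revenue-per-impression figure (an eCPM) before deciding who wins impressions:

```python
# Toy sketch of eCPM-based impression allocation (illustrative only;
# real ad auctions also weigh ad quality, pacing, frequency caps, etc.).

def ecpm(ad: dict) -> float:
    """Expected revenue per 1,000 impressions for one ad."""
    if ad["bid_type"] == "CPM":
        # Advertiser pays per impression regardless of outcome.
        return ad["bid"]
    # CPA bid: the network only earns when the action happens, so it
    # discounts the bid by the predicted action rate per impression.
    return ad["bid"] * ad["predicted_action_rate"] * 1000

# Hypothetical ads competing for the same inventory.
ads = [
    {"name": "resonant_ad", "bid_type": "CPA", "bid": 3.00, "predicted_action_rate": 0.004},
    {"name": "weak_ad",     "bid_type": "CPA", "bid": 5.00, "predicted_action_rate": 0.001},
    {"name": "brand_ad",    "bid_type": "CPM", "bid": 8.00, "predicted_action_rate": 0.0},
]

# Highest eCPM wins the impression: the resonant ad out-earns the weak
# ad despite a lower CPA bid ($12.00 vs. $5.00 eCPM), which is why a
# bid increase alone can't buy back volume for an ad that doesn't convert.
ranked = sorted(ads, key=ecpm, reverse=True)
for ad in ranked:
    print(f"{ad['name']}: eCPM = ${ecpm(ad):.2f}")
```

Under this model, the weak ad would need to bid $12 per action just to match the resonant ad’s expected value per impression, which illustrates why bid increases fail to revive a campaign whose creative doesn’t convert.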
This is the “quality vs. volume fallacy”: there is no tradeoff between the “quality” of installs driven by a network and the volume of those installs, and oftentimes the apps driving the most installs on a network are also getting the highest-quality users and paying less for them than competing apps. It’s not uncommon to hear about the developer of some broadly-appealing app with compelling ad creative spending an impressively low amount per install on a channel, at massive scale, with spend that recoups very quickly. The math is simple: if an ad drives conversions far better than a competitor’s, its advertiser gets a steep discount on the action price (e.g., CPI) because of the increased conversion probability, while also having its ads shown more. The CPM both advertisers pay might be the same, but the better-performing ad gets far more impressions, converts better on those impressions, and drives more actions at a lower cost per action.
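The arithmetic in that last sentence can be made concrete. This is a worked example with invented numbers (not real campaign data): two advertisers pay the same CPM, but one ad converts impressions to installs five times as often, so its effective cost per install is one fifth as large:

```python
# Worked example of the quality-vs-volume arithmetic (illustrative
# numbers only): same CPM price, different install rates.

def effective_cpi(cpm: float, install_rate: float) -> float:
    """Cost per install implied by a CPM price and installs-per-impression rate."""
    cost_per_impression = cpm / 1000
    return cost_per_impression / install_rate

# Both advertisers pay a (hypothetical) $10 CPM.
strong = effective_cpi(cpm=10.0, install_rate=0.005)  # 5 installs per 1,000 impressions
weak = effective_cpi(cpm=10.0, install_rate=0.001)    # 1 install per 1,000 impressions

print(f"strong ad CPI: ${strong:.2f}")  # $2.00
print(f"weak ad CPI:   ${weak:.2f}")    # $10.00
```

And since the better-converting ad also tends to win more impressions in the first place, the gap in total installs compounds: cheaper installs multiplied by a larger share of the inventory.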
And none of the above has anything to do with any specific channel; in the case of the unfortunate advertiser above, explaining that the “channel has been saturated” might appease an inquisitive manager, but it won’t make any progress against the developer that is crowding out its competitors by absorbing a disproportionate number of impressions (from the outside looking in, this seems to be what is happening in the Calm vs. Headspace conflict). The “saturation” explanation for poor campaign performance isn’t a strong one, and user acquisition managers should be hesitant to use it before thoroughly examining some of the more fundamental aspects of their marketing campaigns.