
A problem that commonly arises for mobile marketing teams is saturation: an app’s ad creatives have been seen by so many people within a particular channel that they stop converting, because few people remain unexposed and the most pertinent opportunities (the “low-hanging fruit”) have been exhausted.
When this happens, many teams begin onboarding new sources of traffic to make up for the decline: they’ll run small test campaigns with new networks over the course of a week or two and then add those networks to their traffic portfolios if the test results are satisfactory. Much of the time, these test budgets sit in the $5-10k range: enough to acquire, depending on the app, a few thousand users over the course of the test.
Often, these new networks are evaluated on the basis of the quality of the traffic provided over the test, independent of its install volume; that is, only user-level metrics are considered. User acquisition managers can be heard saying things like, “Network X provides great traffic, but at low volumes,” implying the existence of some sort of indifference curve between traffic quality (cohort LTV) and traffic volume that would leave a team indifferent between high volume at low quality and low volume at high quality.

This isn’t really true; at least, it’s not strictly true. Firstly, the relationship between traffic quality and volume in the mobile advertising marketplace isn’t rigidly inverse: quality and volume can move together in some instances.
Almost no ad network (that is, a broker of non-owned inventory) can provide the same level of targeting sophistication as Facebook, but most attempt to proxy it by measuring click probabilities (Facebook does this too, within its targeting scope). But ad networks usually get paid when a conversion happens, which means they’re unlikely to show your ad if its click-through rate is low unless the delta between your bid and the average bid makes up for the CTR shortfall and produces as much revenue for them. So for apps with very narrow appeal, price and quality move in opposite directions: my install volume increases because I bid very high to increase the number of times my ad is shown, but since my app isn’t broadly appealing, the ad network begins targeting less relevant people and spamming my ad over and over until people convert. Thus, price increases and quality decreases.
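A minimal sketch of that serving logic makes the trade-off concrete. It assumes, as a simplification, that a CPI-paid network ranks ads purely by expected revenue per impression (bid × CTR × install rate); every number is invented for illustration.

```python
# Toy model of a CPI-paid network's serving decision: since the network only
# earns revenue when an install happens, it ranks ads by expected revenue
# per impression, i.e. bid * CTR * install rate. All numbers are invented.

def ecpm(cpi_bid: float, ctr: float, install_rate: float) -> float:
    """Expected network revenue per 1,000 impressions of this ad."""
    return cpi_bid * ctr * install_rate * 1_000

# (CPI bid in $, click-through rate, installs per click)
broad_app = ecpm(cpi_bid=2.00, ctr=0.020, install_rate=0.25)   # $10.00 eCPM
narrow_app = ecpm(cpi_bid=6.00, ctr=0.004, install_rate=0.20)  # $4.80 eCPM

print(f"broad app:  ${broad_app:.2f} eCPM")
print(f"narrow app: ${narrow_app:.2f} eCPM")
```

Under these made-up numbers, the narrow app would have to bid $12.50, more than six times the broad app’s $2.00 bid, just to match its eCPM; short of that, the network only serves it against less relevant, lower-value inventory.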
But for broadly appealing apps, this doesn’t happen as easily: such apps can scale their marketing (increase bids) and see their CTRs stay flat, and once they outbid their competitors for the most lucrative traffic, the quality of their installs increases. LTV and bid price move together, and volume increases because CTRs don’t drop drastically.
But the second reason the quality versus volume fallacy doesn’t hold up in mobile marketing is that a mobile marketer’s time carries a high opportunity cost, and it takes about as much time to optimize a campaign on a network delivering high volumes as on one delivering low volumes. So if an indifference curve between quality and volume exists, it should only come into effect for a team once a network is delivering some minimum yet substantial level of installs.
Also: whenever a network is added to a mobile marketing team’s traffic portfolio, it adds complexity to the reporting and analysis processes and creates an opportunity for something in the toolchain to break. And low-volume channels simply don’t drive much revenue: user economics are important, but so is absolute revenue.
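To make the opportunity-cost point concrete, here’s a toy back-of-the-envelope comparison that charges a fixed monthly cost for a marketer’s attention against each channel’s gross margin; every figure is invented for illustration.

```python
# Toy comparison of two channels once the fixed cost of a marketer's time
# is charged against each. Every figure below is invented for illustration.

MONTHLY_MANAGEMENT_COST = 2_000.0  # assumed value of the time spent managing any one channel

def monthly_net_value(installs_per_day: float, ltv: float, cpi: float) -> float:
    """Gross margin on a month of installs, less the fixed management overhead."""
    gross_margin = installs_per_day * 30 * (ltv - cpi)
    return gross_margin - MONTHLY_MANAGEMENT_COST

# The small network has twice the per-user margin ($1.00 vs. $0.50 per install)...
big = monthly_net_value(installs_per_day=1_000, ltv=3.00, cpi=2.50)  # 13000.0
small = monthly_net_value(installs_per_day=30, ltv=3.50, cpi=2.50)   # -1100.0

# ...but it can't clear the fixed cost of the attention it consumes.
print(f"big network:   ${big:,.0f} / month")
print(f"small network: ${small:,.0f} / month")
```

Under these assumptions, the higher-quality channel destroys value at its volume level; an indifference curve between quality and volume only begins to exist above the install threshold where a channel’s gross margin clears that fixed overhead.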
The last reason quality versus volume is a deceptive dynamic is that volume is really a necessary precondition to understanding quality: without much data, teams often can’t even capably measure the quality of the traffic they’re receiving. In “How Much Data is Needed to Predict LTV?”, I walked through the difficulty of estimating LTV with even moderately sized cohorts: when a network is generating 20, or 50, or even 100 installs per day, the variance in those users’ daily spend probably doesn’t allow for an actionable LTV metric to be calculated.
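A quick simulation illustrates the problem. It assumes, purely for the sake of example, that per-user lifetime spend follows a heavy-tailed lognormal distribution; the parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume per-user lifetime spend is lognormally distributed: heavy-tailed,
# as in-app purchase revenue tends to be. The parameters are invented; the
# true mean LTV they imply is exp(mu + sigma**2 / 2), about $1.87.
MU, SIGMA = -0.5, 1.5
true_ltv = np.exp(MU + SIGMA**2 / 2)

for daily_installs in (20, 50, 100, 1_000):
    # Re-estimate LTV 10,000 times from cohorts of this size and look at
    # how widely the estimates scatter around the true value.
    estimates = rng.lognormal(MU, SIGMA, size=(10_000, daily_installs)).mean(axis=1)
    lo, hi = np.percentile(estimates, [5, 95])
    print(f"{daily_installs:>5} installs/day: 90% of LTV estimates fall in "
          f"${lo:.2f}-${hi:.2f} (true LTV: ${true_ltv:.2f})")
```

At 20 installs per day, the scatter in the estimates is wide enough to swallow the difference between a profitable channel and an unprofitable one; only at much higher volumes does the estimate tighten into something a team could act on.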

Generally speaking, a mobile marketer’s time is almost always better spent optimizing campaigns on large networks, thinking through the product’s advertising positioning, experimenting with new marketing formats, or helping the product team use acquisition data to inform product changes than it is onboarding new, lower-tier direct response ad networks. As a team works down the list of networks past the biggest, best-funded, and most widely used, the likelihood that those channels can deliver traffic at levels that make a difference for the business decreases dramatically.