One theme that I’ve seen develop over the past 18 months within the mobile marketing space is the absorption of many product-focused considerations into the marketing function (this could also be phrased the other way; marketing is being assimilated into product). The explanation of this phenomenon relates to the in-app-event-driven algorithmic evolution of the largest mobile user acquisition channels, which has been detailed extensively on this site; more background can be found here, here, here, and here.
In this article on the future of growth teams, I discussed what I believe the “mobile growth” team of the future looks like: a team composed of engineering, analytics, data science, mass-scale creative production, and media buying capabilities that owns budget deployment, measurement, and the testing of down-funnel, in-app product features that ultimately drive monetization improvements. At many mobile-first companies, that testing is done by a siloed “Growth” team charged with continuously running A/B tests designed to improve retention and engagement; this “product-led growth” team usually sits within the product organization and does not interface with the marketing team at all.
This is a mistake. One of the most common questions I was asked after writing the above article (and others that make the same claim: that the new mobile advertising paradigm requires close collaboration between product and marketing) was: what does it mean for the product and marketing teams to be aligned? In practical terms, how can these two teams work more closely together?
The most obvious form for this collaboration to take is simply for the “product-led growth” team to be cognizant of the acquisition source of users when experiments are run. In general, A/B testing is a blunt tool that is easy to abuse to great detriment; I rarely see teams using A/B testing in a thoughtful, systematic way that supports long-term product growth. But especially as the performance of mobile advertising channels is defined increasingly by down-funnel, in-product events, growth teams charged with incessant A/B testing need to (at the very least) be aware of which channels have supplied their DAU. Not understanding the composition of the DAU-base can actively hurt growth.
As an example: consider an A/B test run across a user base of 1MM DAU in which Variant B generates a 16% increase in Day-7 ARPDAU over Variant A. The test would be seen as an unqualified success — the growth team just delivered 16% higher Day-7 revenue!
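This blended arithmetic can be sanity-checked with a short sketch. The per-variant ARPDAU figures below are invented, since the scenario only fixes the 1MM DAU and the 16% lift:

```python
# Hypothetical Day-7 ARPDAU figures; only the 1MM DAU and the 16% lift
# come from the scenario above. The dollar values are invented.
DAU = 1_000_000
arpdau_a = 0.50  # assumed Day-7 ARPDAU for Variant A ($)
arpdau_b = 0.58  # assumed Day-7 ARPDAU for Variant B ($)

revenue_a = DAU * arpdau_a  # roughly $500,000
revenue_b = DAU * arpdau_b  # roughly $580,000
lift = revenue_b / revenue_a - 1

print(f"Day-7 revenue lift of B over A: {lift:.0%}")  # → 16%
```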
Now consider what happens if that DAU-base is broken out by acquisition channel, with Organic users (that is: users who came to the app organically) representing 35% of the user base and overwhelmingly preferring Variant B, versus paid users, who prefer Variant A.
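The channel breakdown can be sketched the same way. All per-segment ARPDAU values here are invented for illustration (only the 35% Organic share and a roughly 16% blended lift come from the scenario), chosen so that Organic strongly prefers Variant B while paid traffic prefers Variant A:

```python
# Invented per-segment Day-7 ARPDAU values ($); only the 35% Organic
# share and a roughly 16% blended lift are taken from the scenario.
segments = {
    # name: (share of DAU, Variant A ARPDAU, Variant B ARPDAU)
    "organic": (0.35, 0.50, 0.85),  # Organic overwhelmingly prefers B
    "paid":    (0.65, 0.60, 0.55),  # Paid prefers A
}

blended_a = sum(share * a for share, a, _ in segments.values())
blended_b = sum(share * b for share, _, b in segments.values())

print(f"Blended ARPDAU: A ${blended_a:.3f} vs. B ${blended_b:.3f}")
print(f"Blended lift of B over A: {blended_b / blended_a - 1:.1%}")
for name, (share, a, b) in segments.items():
    winner = "B" if b > a else "A"
    print(f"{name}: Variant {winner} wins ({a:.2f} vs. {b:.2f})")
```

Even though Variant B wins on the blend, the paid segment, the only one whose volume the developer controls, monetizes better under Variant A.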
While it’s true that applying Variant B to the entire user base will generate a better outcome than applying Variant A to the entire user base, there are compelling reasons one might not want to do that:
- Presumably, Organic traffic is already at its maximum potential: app developers can’t control the level of organic traffic their apps receive, so organic supply can’t be dialed up;
- How old is the Organic DAU in this picture? What if new Organic traffic has actually tapered off? Since developers can’t control Organic volumes, they have no lever for generating new Organic users.
This is just one arbitrary example, and the numbers could be changed to prove any given point. But the idea here is that by not including critical acquisition-source context in the analysis of the A/B test, the developer isn’t taking growth potential into account. What if Facebook or Google UAC traffic could be increased — doubled, or even tripled — here with better monetization and thus higher bids on those channels? By choosing Variant B over Variant A, the developer reduces the ARPDAU from paid channels — the sources of users over which it actually has control — in favor of Organic users, for whom the developer can’t increase acquisition volumes.
If the Growth team were to take acquisition channels into consideration when conducting this experiment, they might notice the vast improvement in Day-7 ARPDAU for paid users and present that information to the marketing team, letting them know that paid channels would benefit from Variant A. The two teams could then jointly make a decision: since monetization improves for paid users with Variant A, perhaps Variant A should be applied because it will allow for more paid users to be acquired. At some point, the economics of the variants might then flip: as more, better-monetizing paid users are acquired, Variant A could outperform Variant B.
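That flip can be made concrete with a hedged sketch using invented segment economics (Organic prefers B, paid prefers A; none of these figures come from the scenario, and acquisition cost is ignored for simplicity). The question is how much paid volume would need to grow for shipping Variant A to out-earn shipping Variant B at today’s traffic mix:

```python
# Invented segment economics: Organic prefers Variant B, paid prefers
# Variant A. Acquisition cost is ignored for simplicity; every figure
# below is illustrative rather than taken from the article.
ORGANIC_DAU, PAID_DAU = 350_000, 650_000   # a 35% / 65% split of 1MM DAU
ORG_A, ORG_B = 0.50, 0.85    # Organic Day-7 ARPDAU under each variant
PAID_A, PAID_B = 0.60, 0.55  # Paid Day-7 ARPDAU under each variant

# Baseline: ship Variant B to everyone at today's traffic mix.
revenue_b_today = ORGANIC_DAU * ORG_B + PAID_DAU * PAID_B

def revenue_a(paid_multiple: float) -> float:
    """Ship Variant A and scale paid DAU; Organic volume stays fixed."""
    return ORGANIC_DAU * ORG_A + PAID_DAU * paid_multiple * PAID_A

# Step up paid volume until Variant A out-earns the Variant B baseline.
m = 1.0
while revenue_a(m) <= revenue_b_today:
    m += 0.05
print(f"Variant A overtakes once paid DAU grows ~{m:.2f}x")
```

With these particular invented numbers, a modest increase in paid volume is enough for Variant A to win overall; the exact multiple obviously depends entirely on the real per-channel economics.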
Of course, the optimal solution would be to personalize the app to users based on acquisition source: Organic users would see Variant B, and paid users would see Variant A. That kind of collaboration and cooperation — using acquisition source as a dimension of product personalization — is the ultimate form of Product / Marketing (“Growth”) alignment. But even just including acquisition source into growth experiments is a productive first step, and it’s the easiest way to force collaboration between the two teams.
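A minimal sketch of that personalization, assuming the app can read each user’s attributed acquisition source at runtime (the channel names and the source-to-variant mapping are hypothetical):

```python
# Hypothetical variant assignment keyed on attributed acquisition
# source: paid-acquired users see Variant A, everyone else Variant B.
# The channel names are illustrative, not a real attribution taxonomy.
PAID_CHANNELS = {"facebook", "google_uac", "applovin"}

def assign_variant(acquisition_source: str) -> str:
    """Return the product variant to serve for a given user."""
    if acquisition_source.lower() in PAID_CHANNELS:
        return "A"  # paid users monetize better under Variant A
    return "B"      # Organic users prefer Variant B

print(assign_variant("organic"))     # → B
print(assign_variant("google_uac"))  # → A
```

In practice the attributed source would come from the app’s attribution provider at the user level; the point is simply that acquisition source becomes a dimension of product personalization.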