Defining Viral Growth for Mobile Apps

Virality is a contentious topic within the field of mobile app development because it is almost impossible to instrument precisely: due to the way most app platforms (such as the App Store) are architected, almost all viral mechanics fall out of attribution scope as soon as a user engages with them.

Users invited into an app through a viral mechanic – such as an invitation sent via Facebook – are generally indistinguishable from organic users in most analytics suites. This isn’t necessarily a shortcoming of modern analytics software or a mendacious ploy by platform operators to keep metrics opaque; rather, it’s simply a reality of all highly complex ecosystems that total quantitative cognizance is unattainable. Some uncertainty will always remain.

Of course, the difficulty in quantifying virality doesn’t negate its importance: mobile apps live and die by virality, especially in the App Store, given that recent ranking algorithm changes have put additional weight on download consistency, resulting in a “frozen App Store”.

Even virologists, for whom viruses and viral reproduction are an exclusive professional focus, don’t have a singular, all-inclusive framework for calculating viral growth. This is because viral effects are context-dependent; a viral infection in one host will spread differently than in any other host.

The same is true for mobile app penetration, especially as mobile apps creep ever further into the domain of popular culture. Measuring virality solely through conversions on in-app viral mechanics is completely misdirected as Angry Birds theme parks open across the world and Psy plays Candy Crush Saga in a music video.

In other words, as the largest mobile brands expand their paid advertising initiatives beyond direct downloads into out-of-home formats, the lines between organic, viral, and paid downloads blur – not just for the companies spending exorbitant sums on marketing, but for all app developers.

The average smartphone user has 25 apps on their phone; as the largest mobile brands reinvest their profits into mainstream marketing, selling phones and breaking down stigmas about virtual goods purchases in the process, the entire app ecosystem benefits from increased exposure and adoption. But how can that effect be integrated into a virality calculation?

The answer is that it can’t – which means that distinguishing between organic and viral downloads no longer contributes to a more informed understanding of viral growth.

Rather, all non-paid downloads can be categorized, simply, as growth: an additional windfall of users resulting from paid installs in a previous period. This vastly simplifies the models and equations that have been derived to predict viral user base growth into a simple calculation:

Total New Users(t) = Paid Users(t) + k × Paid Users(t−1)

This can be modeled simply in a spreadsheet with a method like this (source spreadsheet here):

[Screenshot: growth model spreadsheet]
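
The spreadsheet itself isn’t reproduced here, but a minimal sketch of the same period-by-period model (in Python, with an assumed k of 0.4 and illustrative paid-install figures, not values taken from the source spreadsheet) might look like this:

```python
# Period-by-period growth model: each period's non-paid ("growth") users are
# modeled as k times the previous period's paid installs.
# k and the paid-install figures below are illustrative assumptions.

k = 0.4
paid = [10_000, 12_000, 15_000, 15_000, 18_000]  # paid installs per period

total_users = 0
for t, paid_t in enumerate(paid):
    # windfall attributed to the previous period's paid installs
    growth_t = k * paid[t - 1] if t > 0 else 0
    new_t = paid_t + growth_t
    total_users += new_t
    print(f"Period {t + 1}: paid={paid_t:,}, growth={growth_t:,.0f}, "
          f"new={new_t:,.0f}, total={total_users:,.0f}")
```

Each row of the spreadsheet corresponds to one loop iteration here: a period’s new users are its paid installs plus the growth windfall from the prior period’s paid installs.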

Part of the problem with chasing virality down the precision rabbit hole is that an increased perception of transparency leads to an increased perception of accuracy; with such fuzzy inputs, that accuracy is illusory.

Virality is difficult to capture at the level of interplay between specific user behaviors and product mechanics; taking stock of the overall growth of the user base is not only a far more auditable way to measure virality, but also less prone to being misled by spurious assumptions about how various features impact product adoption.

Ultimately, a k-factor coefficient is good for two things: discounting marketing costs by augmenting the number of users acquired by a set-budget campaign, and gauging the effectiveness of product feature iterations on user base growth (“growth hacking”). Derived effective acquisition costs (eCPIs) can very quickly be tested for accuracy: either viral growth materialized as expected or it didn’t, and that becomes apparent the day after a campaign has launched.
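
As a hypothetical worked example of the first use (the budget, CPI, and k values below are illustrative, not figures from any real campaign):

```python
# Hypothetical example of discounting acquisition costs with k.
# Budget, CPI, and k are illustrative assumptions.

budget = 10_000.00  # campaign budget, USD
cpi = 2.00          # cost per paid install, USD
k = 0.4             # expected growth coefficient

paid_users = budget / cpi              # 5,000 paid installs
total_users = paid_users * (1 + k)     # 7,000 users including growth windfall
effective_cpi = budget / total_users   # ~$1.43 per user instead of $2.00

print(f"paid={paid_users:,.0f}, total={total_users:,.0f}, "
      f"effective CPI=${effective_cpi:.2f}")
```

If total installs the day after launch fall short of the implied 7,000, the assumed k was too optimistic – which is exactly the quick accuracy test described above.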

And product iterations focused on growth, using k-factor as a guide, rely on changes in k-factor to inform effectiveness; that is, a product iteration is evaluated based on the change in k-factor it instigated, not the absolute value of the k-factor resulting from the iteration.

It’s sensible, then, that a calculation of k-factor be done on a “top-down” basis of growth and not through a contrived calculation built on a number of inputs and assumptions. Not only is the top-down approach built entirely from baseline metrics that are beyond question in most organizations (total users, paid users), but it’s faster, more easily explained, and more intuitive.
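
To make the top-down calculation concrete, here is a minimal sketch (again in Python, with illustrative inputs) that derives a per-period k from just those two baseline metrics:

```python
# Top-down k-factor per period: all non-paid downloads in a period are
# treated as growth attributed to the previous period's paid installs.
# Only total new users and paid users are needed; the inputs are illustrative.

total_new = [14_000, 16_800, 21_000, 22_500]  # total new users per period
paid = [10_000, 12_000, 15_000, 15_000]       # paid users per period

for t in range(1, len(paid)):
    growth_users = total_new[t] - paid[t]  # non-paid downloads this period
    k = growth_users / paid[t - 1]         # attributed to prior paid installs
    print(f"Period {t + 1}: k = {k:.2f}")
```

A product iteration shipped between two periods would then be judged by the change in k across them, per the above: the jump or drop in k from one period to the next, not its absolute level, signals whether a growth-focused iteration worked.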