I’ll detail these errors below, but first, some idle thoughts on acquisitions. M&A is often referred to as an “art and a science.” But in my experience as an advisor to M&A transactions (read: my money isn’t at risk), the acquisition process is either an art or a science, and when one approach bleeds into the other in the process, it often spells doom for the project. Empirically, I’ve seen acquisitions be motivated either — that is, mutually exclusively — by vision or math. Vision-oriented acquisitions are often propelled by the sheer force of personality of their internal sponsors. A good example of this is probably the acquisition of Instagram by Facebook. I’m sure that spreadsheets circulated furiously in that process, but Mark Zuckerberg wanted to buy Instagram, and ultimately, the price wasn’t going to stymie the deal.
Acquisitions motivated by math unfold very differently. In these cases, the input of experts and the models built by hyper-caffeinated junior analysts are scrutinized until the relevant sponsors are either convinced by the overwhelming strength of the analysis (rare), find the models plausible and run out of time to ruminate (more common), or perceive competitive interest in the acquisition (most common). I find participating in math-motivated acquisitions enjoyable because they entail bringing an unreasonably large number of very intelligent people together to consider the commercial integrity of some union of companies.
But acquisitions tend to fall apart when one approach (vision or math) borrows tactics from the other. When vision-oriented acquisitions pivot into deep analysis around quantifying synergies, or when math-oriented acquisitions try to introduce soaring, lofty second-order benefits into the core decision model, then the thesis has been compromised and the likelihood of success (that is: the deal completing) plummets. Deals live or die by hypothesis validation.
Oftentimes I am engaged to advise on transactions by assessing the fundamental health of digital products, where one or more of the following theses about the target company motivates the investigation:
- Performance marketing can either be initiated or significantly expanded in order to unlock a considerable amount of value;
- Product and feature development, especially “live ops” type player engagement mechanics that run on a regular cadence, can improve the monetization of the product considerably;
- Bringing certain types of companies together can create synergies and economies of scale that produce efficiencies.
Each of these theses requires an understanding of whether the target company and its portfolio of products are growing. This is deceptively difficult to ascertain, and it can be obfuscated in inconspicuous ways that confound analysis. Below, I outline two errors that are easy to make when conducting valuation analysis in the due diligence process.
Not evaluating new cohorts independent of existing cohorts
Consider this DAU graph:
The product’s DAU is growing rapidly, with a nearly linear growth curve. Such a graph would certainly intrigue a potential acquirer, but the aggregated DAU metric hides some potentially disastrous trends that are only visible when the DAU is broken down by contributing cohort month:
There has clearly been a degradation of product retention that is visible in the rapid evaporation of later cohorts. DAU is growing because the size of cohorts is growing, but cohort compounding isn’t taking place to the same degree it was for monthly cohorts one and two. Cohort three (yellow) almost completely vaporizes over time in the above graph, having been present in the product for just four months, and cohort four (green) contributes fewer DAU than cohort one by the end of the timeline. The company is making up for deteriorating retention by onboarding more new users over time, either through marketing or some kind of accelerated organic discovery that is introducing qualitatively “worse” users to the product. The corresponding retention curves that deliver the DAU graphs are below:
I describe this phenomenon in Monthly user churn is a terrible metric (from which the graphs are adapted):
That’s a very vague clue to pursue. Growth models should be forward-looking on the basis of projected retention, not on top-level DAU changes that don’t consider the makeup of the user base over time. If the product team isn’t aware of what user retention profiles look like — broken out by geography, acquisition source, platform, and over time — then it can’t understand why slowdowns and accelerations of growth happen. And if growth isn’t understood, it can’t be managed.
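To make the mechanics concrete, here is a minimal Python sketch of how aggregate DAU decomposes into per-cohort contributions. The function and all of the cohort sizes and retention curves are invented for illustration; they are not the data behind the graphs above.

```python
# Illustrative sketch: aggregate DAU is the sum of each cohort's size
# multiplied by its retention at that cohort's current age. All numbers
# below are invented for demonstration.

def dau_contributions(cohort_sizes, retention_curves, month):
    """Each cohort's DAU contribution in a given calendar month.

    Cohort i onboards in month i; retention_curves[i][a] is the share of
    cohort i still active a months after onboarding (index 0 = 100%).
    """
    contributions = []
    for i, (size, curve) in enumerate(zip(cohort_sizes, retention_curves)):
        age = month - i
        if age < 0:
            contributions.append(0.0)  # cohort hasn't onboarded yet
        else:
            # hold the last observed retention rate flat beyond the curve
            contributions.append(size * curve[min(age, len(curve) - 1)])
    return contributions

# Cohorts grow in size while retention degrades for later cohorts.
sizes = [1000, 1500, 2500, 4000]
curves = [
    [1.0, 0.50, 0.40, 0.35, 0.33],  # cohort one: retention stabilizes
    [1.0, 0.45, 0.35, 0.30, 0.28],  # cohort two: similar shape
    [1.0, 0.25, 0.08, 0.02, 0.01],  # cohort three: near-total decay
    [1.0, 0.20, 0.06, 0.02, 0.01],  # cohort four: near-total decay
]

# Aggregate DAU rises every month while new cohorts are being onboarded...
totals = [sum(dau_contributions(sizes, curves, m)) for m in range(4)]
# ...even though, by the end of the timeline, the largest cohort
# contributes fewer DAU than the smallest one.
final = dau_contributions(sizes, curves, 5)
```

Under these assumptions, `totals` increases month over month while `final[3] < final[0]`: the aggregate curve looks healthy even as the newest, largest cohorts evaporate.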
In considering the growth prospects for this product, what should the valuation model take into account? The “up-and-to-the-right” DAU graph hides retention decay for later cohorts, but we know that DAU levels for earlier cohorts are roughly stable. What will new users look like? It’d be easy to take daily new user counts (because those are measurable in real-time) and apply a broad historical retention curve to them in order to project out future cohorts.
But retention rates averaged over the past 180 days will mask much of the deterioration that has happened recently and will be more heavily weighted by the older cohorts that, by definition, disproportionately contribute more later-stage retention data. The analyst needs to be able to isolate the new cohorts’ retention curves and project them forward, but again, with limited data. This is a nuanced but important case to make: cohort retention is degrading, so new cohorts won’t look like (or compound like) old cohorts.
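A small sketch of that masking effect, again with invented curves and weights: a retention curve averaged across all historical cohorts leans on the older, healthier ones, and so projects materially more activity for a new cohort than the recent cohorts' own curve does.

```python
# Illustrative sketch: a blended historical retention curve masks recent
# decay. All curves and weights are invented for demonstration.

old_curve = [1.0, 0.50, 0.40, 0.35]  # earlier cohorts: healthy retention
new_curve = [1.0, 0.25, 0.10, 0.05]  # recent cohorts: degraded retention

def blended(curves, weights):
    """Size-weighted average retention at each age across cohorts."""
    total = sum(weights)
    return [
        sum(w * curve[age] for w, curve in zip(weights, curves)) / total
        for age in range(len(curves[0]))
    ]

# Two older cohorts and one recent cohort: the naive historical average
# is dominated by the older cohorts' healthier curves.
avg_curve = blended([old_curve, old_curve, new_curve], [1000, 1000, 1500])

# Projecting a new 5,000-user cohort forward (total active user-months):
new_users = 5000
projected_naive = sum(new_users * r for r in avg_curve)
projected_isolated = sum(new_users * r for r in new_curve)
```

With these numbers, the naive projection overstates activity by roughly a third relative to projecting from the isolated recent curve — the valuation inherits the health of cohorts that no longer exist.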
Not dimensionalizing the operating model
The lingua franca of M&A analysis is Excel: the buyer’s team will construct an operating model of the target company to try to project revenues, DAUs, profit, etc. into the future based on some set of assumptions about how the acquisition unlocks value from the underlying asset being considered.
Excel models are powerful tools because they’re easy to distribute, interpret, and visualize. In Freemium Economics, I write:
Spreadsheets are corporate poetry; when constructed elegantly enough, they can be used to communicate sophisticated ideas to audiences who wouldn’t otherwise be receptive to details…The agreeability of a method’s medium may seem like a trivial benefit, especially when compared to a medium’s ability to produce granular results, but it shouldn’t be understated; information can’t be used to influence decisions unless it can be parsed and interpreted by decision-making parties. For better or worse, spreadsheets are a fundamental pillar of the modern corporate structure, and ignoring their existence doesn’t change anyone’s expectations about how data will be presented.
But Excel models limit the scope of analysis by forcing two-dimensional thinking onto a task that requires far more breadth of scrutiny: a product or company that could grow across myriad vectors driven by numerous inputs. I have seen Excel models that would make Alan Turing weep but that nonetheless simplified the output to quantified wishful thinking. Sometimes, this is the point: models are always fictional, and a model can make some course of action look mathematically defensible.
But unless a model’s granularity maps to the depth of the business, and especially to the parts of the business that are most dynamic (those that are changing or may change), the model is useless. A lack of dimensionality in a model — for instance, where DAU or retention or monetization values are quoted globally, for the entire user base — is effectively a proclamation that the business’ momentum will remain unchanged, and that the dynamics across the various components of the business that combine to produce revenue will also remain unchanged. This is rarely the case. In fact, a change in some dynamic is often the catalyst for the potential acquisition in the first place.
Ultimately, the minimum amount of dimensionality that I like to see in an operating model is captured in two categories of user characteristics that multiply together in a type of qualitative linear algebra to produce a broad set of user profiles: How was the user introduced to the product? and How does the user behave?
In the first category, a model should account for users sourced through paid acquisition and organic discovery, and, within the paid bucket, for the channels (and platforms) through which users were acquired. In considering paid acquisition, it also makes sense to group users into geographic buckets, since geography will define growth prospects. With those profiles defined, in the second category, various behavioral metrics like ARPDAU and retention can be derived and used to project outputs in the model.
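A minimal sketch of that cross-product of profiles in generic Python (not any particular library's API) — the sources, geographies, and metrics are all invented for illustration:

```python
from itertools import product

# Illustrative sketch: cross acquisition source with geography to form
# user profiles, then attach behavioral metrics to each profile. All
# names and numbers are invented for demonstration.

sources = ["paid:network_a", "paid:network_b", "organic"]
geos = ["US", "EU", "ROW"]

# Per-profile assumptions: monthly new users, ARPDAU, and a retention
# curve (index = months since onboarding). A real model would estimate
# these per profile from data; here they vary only by geography.
profiles = {
    (source, geo): {
        "new_users": 1000,
        "arpdau": {"US": 0.30, "EU": 0.18, "ROW": 0.05}[geo],
        "retention": [1.0, 0.40, 0.25, 0.15],
    }
    for source, geo in product(sources, geos)
}

def cohort_revenue(profile):
    """Revenue from one monthly cohort of a single profile: active
    user-days at each age (30-day months) times ARPDAU."""
    user_days = sum(profile["new_users"] * r * 30 for r in profile["retention"])
    return user_days * profile["arpdau"]

total_revenue = sum(cohort_revenue(p) for p in profiles.values())
us_revenue = sum(
    cohort_revenue(p) for (source, geo), p in profiles.items() if geo == "US"
)
```

Even this minimal dimensionalization surfaces structure that a single global ARPDAU would flatten away: under these assumptions, US users are a third of profiles but contribute the majority of projected revenue.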
Of course, this is difficult to accomplish in a spreadsheet: it’s more easily done in code, which is the reason I built and open-sourced Theseus, my Python library for cohort analysis. But segmenting the DAU base into these profiles is imperative, because global numbers can be deceiving, as I outline in Avoiding Simpson’s paradox in data analysis. What’s especially problematic in valuation analysis is that the peculiarities of various portions of the DAU base — if not detected via dimensionalized, granular analysis — are projected out into the future in a way that can conceal significant risks to the business.