The user onboarding funnel, or FTUE (first-time user experience), tends to receive the majority of optimization attention from product managers and data analysts relative to other segments of the user lifecycle (such as days 90-120 in the average user’s product lifetime). This makes sense for a few reasons:
1) The most precipitous drop in engagement tends to occur in the first few days of product exposure, which is what gives the retention curve its negative exponential shape for most consumer tech products, especially freemium products (see the sketch following this list);
2) The onboarding funnel is most associated with growth because it represents new users’ first exposure to the product, and so the early portion of the retention curve can seem more consequential to marketing budgets than the remainder of the curve (a notion amplified by the severe drop-off of most retention curves in the first few product sessions);
3) Onboarding optimizations produce the most immediate metrics feedback — the impact on Day 1 retention from changes to the first session flow can be observed tomorrow — and so they’re easier to quantify and less complex to analyze. Changes to mid-lifecycle product mechanics take much longer to track for new users and are more susceptible to being confounded by externalities.
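On the first point: the shape being described is a steep decline over the first few days followed by a long, flatter tail. As a rough illustration only (the power-law functional form and the parameters below are assumptions chosen for readability, not a claim about any particular product), that shape can be sketched in a few lines of Python:

```python
def retention_power_law(day: int, r1: float = 0.40, decay: float = 0.35) -> float:
    """Hypothetical Day-N retention under a simple power-law decay model.

    r1    -- Day 1 retention (the fraction of a cohort active one day after install)
    decay -- decay exponent; larger values produce a steeper early-lifecycle cliff
    """
    return r1 * day ** (-decay)

# Most of the engagement loss happens in the first few days, after which
# the curve flattens out for the surviving, engaged users.
for d in (1, 3, 7, 14, 30, 90):
    print(f"Day {d:>2}: {retention_power_law(d):.1%}")
```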
As tempting as it is for a product team to focus a disproportionate amount of its energy on optimizing the onboarding funnel and the earliest portion of the user experience, ignoring the later stages of the user lifecycle is a mistake. Strong mid- and late-stage retention is the foundation of sustainable DAU growth: a retention curve that flattens out after the typical onboarding cliff allows cohorts to accumulate and compound much more efficiently than one that continues to decline.
The “leaky bucket” metaphor is often taken to refer specifically to the onboarding funnel, but in reality user churn is more expensive in the later stages of the lifecycle than during onboarding: a user who has been in a product for a year is, by definition, more engaged than one who just joined. Losing the user with a year of tenure is more damaging to the business than losing the early-stage user precisely because of that proven affinity and the behavioral data accumulated over their tenure.
This was a lesson that the data and analytics team at Skype took to heart while I was there. Because Skype was so viral, early user churn was extremely high, and it was mostly related to a lack of contacts on the platform for new users: if a user installed Skype and didn’t know anyone who used it, the product wasn’t very useful to them as a messaging service, but users who were able to reach some number of contacts on the service tended to use Skype for a very long time (measured in years). As a free, viral product that reached a huge swath of the internet-connected population and also received an incredible amount of (undirected, low-intent) earned media, Skype faced early-stage churn that was unavoidable, but late-stage churn was unacceptable: ensuring that the retention curve stayed flat for engaged users was the only way to accumulate DAU over the long term.
To illustrate this, consider two products with very different retention curves.
Product A has a fairly high Day 1 retention value of 60%, but its retention curve deteriorates dramatically:
Product B has a Day 1 retention metric that is half of Product A’s at 30%, but its retention curve is much flatter:
Product B compounds cohorts into DAU much more efficiently than Product A as a result of its flat retention curve. If 100 daily cohorts of 1,000 users each are pushed through both products, Product B ends this 100-day period with 12,740 DAU versus Product A’s 7,745 DAU. Over this period, Product B’s DAU curve continues to grow almost linearly while Product A’s inflects and begins to approach an asymptote.
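Mechanically, those DAU totals come from cohort accumulation: DAU on any given day is the sum, across every cohort acquired so far, of that cohort’s size multiplied by its retention at its current age. The sketch below implements that accounting. The retention curves are placeholders (a Day 1 value decaying toward a long-run floor, with parameters chosen only to resemble the two products described above), so the outputs won’t exactly reproduce the 12,740 and 7,745 figures, but they exhibit the same crossover and divergence:

```python
import math

def retention(age, day1, floor, decay):
    """Hypothetical retention curve: a Day 1 value that decays toward a long-run floor.

    age   -- days since the cohort joined (age 0 = install day, counted as fully active)
    day1  -- Day 1 retention
    floor -- long-run retention level the curve flattens toward
    decay -- how quickly the curve falls from day1 to floor
    """
    if age == 0:
        return 1.0
    return floor + (day1 - floor) * math.exp(-decay * (age - 1))

def dau_on_day(day, curve, cohort_size=1_000):
    """DAU on a given day: sum the survivors of every cohort acquired on days 0..day."""
    return sum(cohort_size * curve(day - start) for start in range(day + 1))

# Placeholder parameters: Product A retains 60% on Day 1 but decays toward ~2%;
# Product B retains 30% on Day 1 but flattens out near ~10%.
product_a = lambda age: retention(age, day1=0.60, floor=0.02, decay=0.13)
product_b = lambda age: retention(age, day1=0.30, floor=0.10, decay=0.10)

for day in (10, 50, 99):
    print(f"Day {day:>2}: Product A DAU = {dau_on_day(day, product_a):>8,.0f}, "
          f"Product B DAU = {dau_on_day(day, product_b):>8,.0f}")
```

With these placeholder parameters, Product A jumps out to an early DAU lead on the strength of its Day 1 retention, but Product B overtakes it within a few weeks and keeps growing almost linearly because each new cohort settles onto a meaningful long-run floor instead of evaporating.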
Of course, these numbers were chosen arbitrarily and don’t necessarily reflect the reality of any live products. In this example, Product A might have far lower effective user acquisition costs or be much more viral than Product B, and thus actually be far more profitable on a marketing basis. There are myriad reasons why a Product A scenario might be preferable to a Product B scenario, but the point remains that the “leaky bucket” label doesn’t apply exclusively to onboarding or early-stage retention: in this example, Product A is far more of a leaky bucket than Product B despite retaining twice as many users on Day 1.
A maniacal focus on early-stage retention is important, especially as part of a paid marketing feedback loop, but because early-stage retention is usually already heavily scrutinized by the user acquisition team, the product team should invest as much energy in addressing and optimizing late-stage retention as it does in early-stage retention, if not more.