It's common when analyzing a freemium mobile product's metrics to focus heavily on retention as a proxy for the "quality" of a given cohort of users. "Quality" here is intentionally ambiguous; retention can be used to gauge the relevance of the product to a cohort of users, the degree to which the product satisfies that cohort, etc. Retention is used for this purpose because 1) like monetization, it is a key component of lifetime value, and 2) unlike monetization, it can be modeled in a fairly straightforward way from a limited, early data set.
A typical retention curve (either projected or observed) might be visualized like this, with counts of active users (or percentages) on the Y-axis and days since the cohort's inception on the X-axis (in other words: the number of users retaining at Day X from when the cohort adopted the app):
Retention is a valuable measurement of how a given cohort can be expected to perform (i.e. contribute revenue) within an app, and when utilized in an LTV model, this data can be used to make decisions around acquisition marketing: whether more or less money should be apportioned to the channel that brought the cohort into the app. Optimizations to this curve (that is, product improvements intended to retain more users) are made at various points within the user's journey within the app: improving the first-time user experience, adding various retention mechanics (notifications, emails, etc.), adding collaborative / cooperative functionality, etc.
But this type of approach -- modeling retention ex post of user acquisition and using that data to make decisions around marketing budgeting, and optimizing the retention curve to keep more people in the app -- neglects a very important fact: the mouth of the user lifetime funnel actually begins with the marketing campaign, and thus optimizations to the funnel can be made there as well as within the app.
Consider a user lifetime visualization that starts with 10,000 ad impressions for an app being shown (10 CPM), is affected by a click-through rate (CTR) of 5%, an install rate (the rate at which people install an app after visiting its platform store page) of 60%, and extends into Day 30 retention (60, 30, 15% for Day 1, 7, and 30, respectively):
Here it is obvious that the largest single decrease in the funnel takes place from ad impression to click: just 5% of people that saw an ad for the product clicked on it.
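The funnel arithmetic above can be sketched in a few lines; the numbers are the article's illustrative figures, not real campaign data:

```python
# Funnel for the first cohort: impressions -> clicks -> installs -> retained users.
impressions = 10_000
ctr = 0.05           # click-through rate on the ad
install_rate = 0.60  # store-page visit -> install

clicks = impressions * ctr        # 500 clicks
installs = clicks * install_rate  # 300 installs

# Retention profile: share of installers still active at Day 1 / 7 / 30
retention = {1: 0.60, 7: 0.30, 30: 0.15}
retained = {day: installs * rate for day, rate in retention.items()}
print(retained)  # {1: 180.0, 7: 90.0, 30: 45.0}
```

Of the 10,000 impressions, only 45 users remain at Day 30 -- each stage of the funnel compounds on the last.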
Consider another cohort, with the same retention profile as the above but which adopted the app from a higher click-through rate on the advertising campaign (6% CTR vs. 5% CTR):
It should be obvious that a 20% increase in CTR percolates throughout the entire funnel to create a 20% larger cohort at Day 30; it's intuitive that higher CTRs would expand the mouth of the advertising funnel and create larger cohorts, given that all else (install rate and the retention profile) is held constant. Of course, that's often not the case: many times, an increase in CTR is accompanied by a decrease in install rate (and possibly retention). This is the click-through rate conundrum.
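Because the funnel stages multiply, a lift at any single stage carries through to Day 30 undiluted (all else held constant). A small helper makes this concrete; `day30_users` is an illustrative function, not a standard metric name:

```python
def day30_users(impressions, ctr, install_rate, d30_retention):
    """Absolute users retained at Day 30 for a campaign cohort."""
    return impressions * ctr * install_rate * d30_retention

base = day30_users(10_000, 0.05, 0.60, 0.15)        # 45.0 users at Day 30
higher_ctr = day30_users(10_000, 0.06, 0.60, 0.15)  # 54.0 users at Day 30

print(round(higher_ctr / base - 1, 4))  # 0.2 -- the 20% CTR lift carries straight through
```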
Because of this, a larger funnel at the mouth -- that is, the advertising campaign -- might not be the ideal way to produce an optimal number of absolute users at some later-stage retention date (e.g. Day 30). Consider a third cohort, derived from a lower CTR than the first two (4% vs. 5 and 6%) but from a higher install rate (90% vs. 60% in both of the other cases). This could happen with much more specifically targeted advertising creatives: ads that are meant to very accurately reflect what, exactly, the product is and does, attracting only the most relevant users and repelling those for whom the product would not be a good fit.
While this cohort is smaller than the first two at the click stage (that is, fewer people clicked on the ads), it matches the second cohort -- and exceeds the first -- at the install stage and every stage thereafter: 4% CTR × 90% install rate means 3.6% of impressions convert to installs, versus 3.0% and 3.6% for the first and second cohorts.
Now, finally, consider a fourth cohort, derived from the same CTR and IR (4% and 90%, respectively) as the third but with a higher retention profile of 65, 33, 16% Day 1, 7, and 30 retention (all three previous cohorts were visualized using the same 60, 30, 15% retention profile). Such a case isn't hard to imagine: given a lower CTR (more specific / direct ad creatives) and a higher IR (more relevant users), it's conceivable that the users downloading the app from this campaign would have a more relevant use case for it and would thus retain better because they have a better idea of what they're getting in downloading it.
This cohort is larger at Day 30 than all three previous cohorts, yet it's smaller at the click stage (because it has a lower CTR).
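Running all four cohorts through the same funnel arithmetic makes the inversion visible; again, these are the article's illustrative figures, and the cohort labels are mine:

```python
# The four cohorts walked through above, as (CTR, install rate, Day-30 retention).
IMPRESSIONS = 10_000

cohorts = {
    "1 (baseline)":            (0.05, 0.60, 0.15),
    "2 (higher CTR)":          (0.06, 0.60, 0.15),
    "3 (targeted creatives)":  (0.04, 0.90, 0.15),
    "4 (targeted + retained)": (0.04, 0.90, 0.16),
}

day30 = {}
for name, (ctr, install_rate, d30_retention) in cohorts.items():
    clicks = IMPRESSIONS * ctr
    installs = clicks * install_rate
    day30[name] = installs * d30_retention
    print(f"Cohort {name}: {clicks:.0f} clicks, {installs:.0f} installs, "
          f"{day30[name]:.1f} users at Day 30")

# The cohort that was smallest at the click stage finishes largest at Day 30.
print(max(day30, key=day30.get))  # -> '4 (targeted + retained)'
```

Cohort 4 reaches roughly 58 users at Day 30 against 45 for the baseline, despite generating 100 fewer clicks.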
This exercise hopefully illustrates the need to consider retention within the context of the marketing funnel (and vice versa); or, put another way, to optimize on the basis of one unified user journey funnel as opposed to two perceived independent funnels (marketing and in-product).
The two über-points here are that 1) considering the marketing and in-product funnels independently delivers local optimizations that might not actually facilitate a maximum total level of late-stage retention (and, hopefully by proxy, revenue). Simply optimizing for CTR might produce low-retaining users, and simply optimizing for retention could constrict scale (and thus overall revenue).
2) Scale is critically important in freemium. For this reason, looking only at percentages (be they retention rates, click-through rates, or install rates) can blind a marketer to the harsh reality of low monetization conversion in freemium. Analyzing product metrics via percentages is important, but it's also important to consider absolute user numbers: a campaign producing both very high CTR metrics and very high retention metrics might not ultimately be the best source of users if it can't scale to produce an adequately high level of DAU.