Mobile user acquisition roles are notoriously difficult to hire for: the field is fairly young, it spans multiple disciplines, and many mobile user acquisition professionals work for large companies that pay them well. This creates a chicken-and-egg problem for smaller teams: without experienced user acquisition staff, they can't build the budgets and infrastructure that attract experienced hires, which makes it hard to poach professionals from larger companies and can create a vicious cycle of employee turnover and underinvestment in marketing.
This dynamic, coupled with the economies of scale that large companies enjoy in building proprietary analytics infrastructure and tools, means joining a small team makes less sense for mobile user acquisition professionals than it might for other specialists. A software engineer can join a start-up and be as productive there as at a large company; a user acquisition professional who leaves a large company for a small one ends up overseeing less budget with worse tools, potentially impeding their career progress.
But hiring for user acquisition roles is difficult even for large companies that can offer big budgets, cutting-edge analytics infrastructure, and third-party tools, mostly because it's hard to glean meaningful signals from a candidate's CV. An industry-standard training system, like the two-to-three-year analyst program at investment banks, doesn't exist for mobile user acquisition. These programs are valuable for two reasons: they teach a standardized set of skills to recent graduates, and they serve as a stamp of approval on a candidate's abilities. By dint of having been selected for one, the candidate is understood by future recruiters to have met some standard of ambition and intellectual capacity, so recruiters can feel confident that the candidate is capable of analytical, detail-oriented work.
Nothing like this exists in mobile user acquisition -- and worse, there's no obvious academic background that a hiring manager can assume has provided a candidate with the requisite conceptual foundation in the field. As a result, hiring managers often overvalue a candidate's prior mobile user acquisition experience without questioning how rigorous it was or how relevant it is to the role they're hiring for. Similarly, a hiring manager may presume that familiarity with various concepts -- different types of bidding, say, or LTV timelines -- means the candidate can put those concepts to work in a practical way that benefits the business. In short, it's easy for a candidate to rattle off acronyms in an interview and talk about how much budget they deployed at their last (or current) employer, but these are terrible signals of capability.
Most mobile user acquisition roles span three skill buckets: campaign management, data analysis, and reporting. While a given role may weight these buckets differently, it's important for hiring managers to assess a candidate's abilities in each. Throwing around acronyms can't communicate this: the hiring manager needs to dig into the candidate's ability to make practical, independent decisions in their work.
One of the common causes of bad hires that I've observed with my clients is plausible but unsubstantiated experience: candidates who can convincingly explain how various aspects of mobile user acquisition work without really understanding them. Usually this is because these candidates only performed campaign management in their previous roles: they had no ownership over anything beyond updating bids to reflect a change in some tool they monitored, and they didn't understand the underlying fundamentals of why the bid was changed or what happened when they changed it.
Below, I've broken the three functional mobile user acquisition skill buckets down into sets of interview questions that a hiring manager might ask candidates to gauge their familiarity with the concepts used in mobile user acquisition. Pressing candidates on these concepts, rather than simply ticking a box when a candidate mentions them, helps ensure that candidates are fully qualified to fill a mobile user acquisition role.
Data analysis

- How should we think about outliers in our LTV model? How can we make sure we don't overfit our model?
- Why might we calculate our LTV over a shorter period (90, 180 days, etc.) than one year? How should the quality and volume of monetization data impact our calculation period?
- How is daily retention calculated? What's the difference between daily retention and rolling retention, and in what situations would each of those be used?
- How should we think about virality for our app? How should we value virality and incorporate it into our LTV model? Into our campaign bidding?
- What is the VLOOKUP function used for in Excel? What is a database table join?
- I have ROI percentages for each of the four weeks in a month of campaign spend. Is the ROI for the month the simple average of those four numbers?
Campaign management

- What is a look-back period? How does the length of a look-back period impact channel spend and overall campaign performance?
- When would we use CPM bidding as opposed to CPI bidding?
- How does an attribution provider track the campaigns in which users were acquired?
- What type of auction does Facebook use to serve ads? Who wins this type of auction?
- Why might our average CPI price be less than our bid on a network?
- Why does campaign install volume on a network sometimes not double when our bid doubles?
- What should we be doing to combat the most prominent forms of ad fraud?
- What do you think will happen in the mobile advertising space in the next 12 months?
Reporting

- Based on our LTV model, campaign X is profitable, but users acquired from that campaign are generating less revenue per day than we are spending on it. How would you explain this to someone?
- How would you design a report to your boss for weekly campaign spend? Which key metrics would you include, and how would you visualize each of those metrics?
- What's a histogram? When should it be used?
- If I wanted to include both weekly campaign spend and weekly ROI on one chart, how should I design it?
- What's a good cadence for reviewing campaign spend? How should that be communicated to key stakeholders?
- How would you automate a campaign report? Which data sources would you need to access, and what tools could you yourself use to automate its generation and circulation?
- Only 3% of our users spend money in our app, 0.05% have spent more than $50, and 0.01% have spent more than $500. What's a better way than a simple average to give someone an understanding of how users monetize in our app?
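On the final question above: a quick illustration of why a simple average misleads when monetization is heavily skewed. The user counts and revenue amounts below are hypothetical, chosen only to roughly match the percentages in the question.

```python
# Hypothetical per-user lifetime revenue for 10,000 users, loosely matching
# the question: 3% pay anything, 0.05% paid more than $50, 0.01% more than $500.
revenues = [0.0] * 9700 + [2.0] * 295 + [75.0] * 4 + [600.0]

revenues.sort()
n = len(revenues)

average = sum(revenues) / n   # pulled upward by a single whale
median = revenues[n // 2]     # the typical user

paying_share = sum(1 for r in revenues if r > 0) / n

print(f"average revenue per user: ${average:.2f}")      # $0.15
print(f"median revenue per user:  ${median:.2f}")       # $0.00
print(f"share of paying users:    {paying_share:.1%}")  # 3.0%
```

The average suggests every user is worth about 15 cents, but the median shows that the typical user pays nothing. Percentiles, a histogram of spend, or a breakdown by payer segment communicates this distribution far better than a single mean.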
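Similarly, the monthly-ROI question above has a concrete numerical answer: weekly ROI percentages can only be averaged directly when spend is identical in every week; otherwise the monthly figure is a spend-weighted average of the weekly ROIs. A minimal sketch with made-up weekly spend and revenue figures:

```python
# Hypothetical (spend, revenue) pairs for the four weeks of one month.
weeks = [
    (1000.0, 1500.0),  # week 1: ROI +50%
    (1000.0, 1200.0),  # week 2: ROI +20%
    (4000.0, 3600.0),  # week 3: ROI -10%
    (4000.0, 3200.0),  # week 4: ROI -20%
]

def roi(spend: float, revenue: float) -> float:
    """Return ROI as a fraction: (revenue - spend) / spend."""
    return (revenue - spend) / spend

# Naive approach: average the four weekly ROI figures.
naive = sum(roi(s, r) for s, r in weeks) / len(weeks)

# Correct approach: compute ROI over the month's total spend and revenue,
# which is equivalent to a spend-weighted average of the weekly ROIs.
total_spend = sum(s for s, _ in weeks)
total_revenue = sum(r for _, r in weeks)
actual = roi(total_spend, total_revenue)

print(f"naive average of weekly ROIs: {naive:+.1%}")   # +10.0%
print(f"actual monthly ROI:           {actual:+.1%}")  # -5.0%
```

The naive average says the month was up 10%, but because most of the spend landed in the losing weeks, the campaign actually lost 5% on the month.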