I read a great interview with the COO of GamesAnalytics today which discussed the state of analytics in the gaming industry. This poignant observation struck me (emphasis mine):
Games development is usually based on the average player or what is commonly presumed to be the average player. In fact as the profile of gamers expands it is very difficult for developers to say what the average player is.
Oftentimes developers will build games from their own perspective and they usually are much more competent than the average player.
By letting player analysis drive design decisions you can really understand the competency levels of players, adjust the difficulty curve to be orientated around the players in your game, and thus increase the levels of engagement.
Averages in the freemium gaming industry — in which 100% of revenues are generated by 3-5% of players, and 90% of revenues by less than 1% — are meaningless. Using averages to inform the design of gameplay and monetization mechanics will produce disastrous results: revenue averaged across a game’s entire population (ARPU) will always understate what paying players actually spend, because paying players represent a tiny percentage of most games’ user bases. And even within the ranks of paying players, the vast majority of revenue is generally generated by a small subset, which invalidates the ARPPU metric in most cases.
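A quick worked example makes the distortion concrete. The player counts and revenue split below are illustrative numbers chosen to match the shape described above (a sub-1% segment producing ~90% of revenue), not real data:

```python
# Hypothetical population: 100,000 players, revenue concentrated in a tiny segment.
players = 100_000
whales = int(players * 0.01)        # <1% of players generate ~90% of revenue
other_payers = int(players * 0.04)  # the rest of the 3-5% paying segment
whale_revenue = 90_000.0
other_revenue = 10_000.0
total = whale_revenue + other_revenue

arpu = total / players                   # averaged over everyone
arppu = total / (whales + other_payers)  # averaged over payers only
whale_avg = whale_revenue / whales       # what the top segment actually spends

print(f"ARPU:  ${arpu:.2f}")             # → ARPU:  $1.00
print(f"ARPPU: ${arppu:.2f}")            # → ARPPU: $20.00
print(f"Whale average: ${whale_avg:.2f}")  # → Whale average: $90.00
```

ARPU misses the paying segment’s behavior by two orders of magnitude, and even ARPPU understates the top segment by more than 4x — which is the point: no single average describes this population.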
The only way to optimize revenue in gaming is to segment users into behavioral monetization groups and engage with them accordingly. Build a logistic model (will pay / won’t pay) based on the data you have; try to classify your users as quickly as possible; use Bayesian methods to improve upon your model and reduce the amount of time needed to classify future users. I have illustrated this process in the diagram below:
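The classify-early step can be sketched as a simple will-pay / won’t-pay logistic model fit on early behavioral signals. Everything here is a minimal illustration: the feature names (first-day session count, tutorial completion) and the toy training history are my hypothetical stand-ins for whatever a real game would actually log:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(data, epochs=500, lr=0.05):
    """Fit will-pay / won't-pay weights by stochastic gradient descent."""
    w = [0.0] * (len(data[0][0]) + 1)  # bias + one weight per feature
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
            err = p - y
            w[0] -= lr * err
            for i, xi in enumerate(x):
                w[i + 1] -= lr * err * xi
    return w

def predict(w, x):
    """Probability that a player with early-behavior features x will pay."""
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

# Toy history: ([sessions_day1, tutorial_done], ever_paid)
random.seed(0)
history = [([random.random() * 5, round(random.random())], 0) for _ in range(40)]
history += [([3 + random.random() * 5, 1], 1) for _ in range(10)]

w = train_logistic(history)
# A heavily engaged new player should score higher than a barely engaged one.
print(predict(w, [7.0, 1]) > predict(w, [0.5, 0]))
```

In production this classification would run as soon as enough early signals exist, routing each player into a monetization segment rather than treating everyone as the average.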
Notice that there’s no averaging involved in this process; each player is classified and given a curated experience based on that classification. One link I left off this diagram is the relationship between the player’s end state and the LCV associated with that player and the channel through which she was acquired. A more nuanced approach is needed for this than can be accommodated by a diagram.
My “trigger” experiment is designed to indicate whether a player has a high predilection for buying in-game items. For players to whom my game’s products hold great appeal, every missed opportunity to present purchasable items makes the experience progressively less enjoyable: when players want to buy things, not letting them buy things spoils the game for them. The player’s state upon leaving the game is fed back to improve the experiment’s accuracy.
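One simple way to fold each player’s end state back into the trigger — my assumption here, since the post doesn’t specify the mechanism — is a Beta-Bernoulli update on the segment’s purchase rate, where each observed outcome (bought / didn’t buy after the trigger) shifts the posterior:

```python
class TriggerBelief:
    """Bayesian belief about a segment's purchase rate after the trigger."""

    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) is a uniform prior over the rate; a real system
        # would seed this from historical data for the segment.
        self.alpha = alpha
        self.beta = beta

    def update(self, bought: bool):
        # Each player's end state updates the posterior counts.
        if bought:
            self.alpha += 1
        else:
            self.beta += 1

    def expected_rate(self):
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)

belief = TriggerBelief()
for outcome in [True, False, True, True, False]:  # end states from five players
    belief.update(outcome)
print(round(belief.expected_rate(), 3))  # → 0.571 (posterior mean after 3 buys, 2 passes)
```

The posterior tightens with every observed end state, which is exactly what shortens the time needed to classify future players with the same early profile.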
Averages are dangerously simplistic when attempting to optimize gaming experiences and will almost certainly result in serious underinvestment in acquisition campaigns. The freemium business model is not suited for superficial analysis; its viability requires productive use of the massive volumes of data it generates. Orienting acquisition campaigns or gameplay features around the average player will only result in the 3-5% of players that buy in-game items being offered an underwhelming experience.