Some interesting insight into how Zynga operates emerged in the recent Reddit thread started by one of the 520 employees laid off last week. Two posts in particular stood out to me:
Post 1: On the games side of things I think their whole concept of pulling data on everything players do is amazing. However their over reliance on that is not so amazing. It made the development very analytical, and less intuitive. It’s easy to tell when a game is fun. It’s hard to pull data on that though.
Post 2: At some point, the company seemed to switch over from genuine attempts at innovation to attempts to ‘do what worked before.’ I’m not sure how many games in the past year really had true A-B tests to try to discover new fun AND popular ground. The company excelled at data gathering. Data interpretation and understanding of gaming psychology could have used improvement.
Zynga appears to be incurring the hidden costs of A/B testing.
The explicit costs of A/B testing are obvious: tests take time to design, implement, and measure, and that time carries an opportunity cost, since it could have been spent on other work. These explicit costs are often cited as a justification for not testing comprehensively, but I believe they are, generally speaking, vastly overestimated.
If more than about 30 minutes is required to design and implement an A/B test, the organization has not dedicated sufficient resources to analytics infrastructure to A/B test using internal tools. This, in itself, is not a problem: products like Swrve exist to remove this burden from the developer through the provision of “analytics as a service”.
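For concreteness, here is a minimal sketch of what “internal tools” can amount to: a deterministic, hash-based assignment of users to variants. The experiment name, variant labels, and user ID below are hypothetical examples, not taken from any particular product.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple[str, ...] = ("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant for a named experiment."""
    # Salt the hash with the experiment name so the same user can land in
    # different buckets across different experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical usage: assign the variant once, log it alongside the metric
# being measured, and compare the groups at the end of the test.
print(assign_variant("user_12345", "paywall_copy_v2"))
```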
A truly data-driven organization is in a perpetual state of A/B testing and optimization. And in such an organization, the opportunity cost of A/B testing is trivial.
The hidden costs of A/B testing, however, are of greater consequence. The hidden costs of A/B testing are borne of an intellectual over-reliance on iterative, incremental improvements; this over-reliance blinds the organization to fundamental, existential shortcomings in the product.
Incremental improvements are not wholly immaterial: they often take the form of single-digit percentage increases to a process — be it landing page conversion, aggregate revenue, session length, etc. — and they can compound over time to deliver real, appreciable benefits. This is a graph of a process that is experiencing 10% period-to-period growth:
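The graph isn’t reproduced here, but the arithmetic behind it is easy to sketch (assuming an arbitrary starting value of 100 and a horizon of 12 periods, neither of which is from the original):

```python
# 10% period-to-period growth compounding on an assumed starting value of 100:
# after 12 periods the process has roughly tripled (1.10 ** 12 ≈ 3.14).
value = 100.0
for period in range(1, 13):
    value *= 1.10  # 10% growth each period
    print(f"period {period:2d}: {value:7.1f}")
```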
But an obsession with instrumenting incremental improvements in a process can distract a team from real problems in their product. This isn’t opportunity cost, i.e., “Should we choose between further A/B testing and fixing the structural weaknesses in our product?”. This is a failure to recognize or consider the existence of core weaknesses in the product because incremental improvements are considered a cure-all. This is a graph of a process that is experiencing 10% period-to-period growth concomitant with 20% end-of-period decline:
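Again, the graph isn’t shown here, but a minimal sketch with the end-of-period decline layered on (same assumed starting value and horizon) makes the point: each period the process grows 10% and then gives back 20%, for a net multiplier of 1.10 * 0.80 = 0.88, i.e., a roughly 12% per-period decline despite every incremental “win”.

```python
# 10% per-period growth followed by a 20% end-of-period decline:
# the net multiplier is 1.10 * 0.80 = 0.88, so the process shrinks
# ~12% per period even though every micro-level metric is "improving".
value = 100.0
for period in range(1, 13):
    value *= 1.10 * 0.80
    print(f"period {period:2d}: {value:7.1f}")
```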
The hidden costs of A/B testing don’t materialize in the form of misapplied resources, they materialize in the form of the belief that data science can supplant product development. A/B testing a failing product is like going to the gym with a gunshot wound: your aesthetics may improve, but your life expectancy won’t.
Organizations with this data problem sacrifice true macro-level growth for incremental improvements in micro-level processes that either don’t attenuate the product’s fundamental problems or actually accelerate the rate at which those problems metastasize.
A/B testing is a tool, not a product development strategy; it should be used to bring further delight to users who already enjoy a product, not to convince them that their intuition about the product is wrong.