Pyrrhic privacy

At least since Apple introduced the Intelligent Tracking Prevention (ITP) framework for its Safari browser back in 2017, it has been apparent that the practice of ‘tracking’ in digital advertising comes with an expiration date. Understandably so: the degree to which consumer data drifts freely throughout the ether of the internet, hanging like a thick mist of behavioral artifacts, is untenable. As I write in The IDFA is the hydrocarbon of the mobile advertising ecosystem, the general deterioration of consumer trust that accompanies the ever-more-egregious hoarding of consumer behavioral data throughout the digital advertising ecosystem is deeply problematic.

But disenchantment with the wholesale commercialization of behavioral data by the digital advertising industry has been distilled and congealed into a cannonball, aimed squarely at the consumer tech skyline, that will do more harm to consumers than good. The current rhetorical, legislative, and regulatory posturing being adopted to rein in the excesses of digital advertising — and really, the entirety of consumer technology — can best be described as Pyrrhic privacy. The concept of privacy has been positioned and activated as the direct, ineluctable consequence of stymieing “Big Tech”: the corrupted, gangrenous limb of targeted digital advertising can be amputated, the wound can be cauterized, and what’s left of the patient is the personification of privacy.

Pyrrhic privacy goes far beyond what is necessary to fortify consumers’ data, and it compromises the foundation of the open internet. Ridding the world of targeted, personalized advertising would impose enormous costs on society. The open internet is financed by advertising, and while various commentators gleefully cheer on the stock market depredations of Apple’s App Tracking Transparency (ATT) privacy policy, every billion dollars in direct response digital ad spend that evaporates represents less relevant ads being served to consumers.

The advertising economy starts with the consumer, not with ad platforms. If consumers spend less money because the digital ads filling their monitors, laptop screens, and — primarily — mobile devices are not interesting to them, then those consumers are less commercially engaged and purchase fewer things that bring them joy, as I describe in The CPM math doesn’t work. Specialist DTC advertising agency Common Thread measured a 33% period-over-period decrease in ROAS (return on ad spend) for Meta advertisers since ATT reached majority adoption across iOS devices. These dollars don’t shift elsewhere for direct response performance advertisers. They vaporize.
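The economics here can be made concrete with a quick sketch. The figures below are hypothetical, not Common Thread’s data; they simply show how a 33% ROAS decline can push a previously profitable campaign below break-even, at which point the rational response is to stop spending, not to move the budget elsewhere:

```python
# Illustrative only: hypothetical figures showing how a 33% ROAS decline
# can turn a profitable direct response campaign unprofitable.

def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: revenue generated per dollar of ad spend."""
    return revenue / spend

spend = 100_000.0
revenue_before = 140_000.0                    # ROAS of 1.4: profitable
revenue_after = revenue_before * (1 - 0.33)   # a 33% ROAS decline at equal spend

print(roas(revenue_before, spend))  # 1.4   -> each ad dollar returns $1.40
print(roas(revenue_after, spend))   # 0.938 -> each ad dollar returns < $1.00

# Below a ROAS of 1.0 the campaign loses money, so the budget isn't
# reallocated to another channel; it simply stops being spent.
```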

Proposed legislation like the Banning Surveillance Advertising Act and parts of the European Commission’s Digital Services Act, as well as platform policies like Apple’s ATT (which I have covered extensively), adopt a Pyrrhic, brute force, privacy-at-any-cost model. These legal and commercial constructions of privacy sublimate the obstruction of ads personalization into a sort of moral emancipation that very purposefully ignores the benefits of personalized advertising to consumers. But draconian measures like these aren’t necessary to safeguard consumer privacy: the gangrene can be treated, and the limb can be saved.

An excellent example of this can be found in the juxtaposition of the treatments to consumer privacy being undertaken by Google and Apple. When Apple announced ATT, it also announced the release of version 2.0 of SKAdNetwork, its advertising attribution and measurement framework for iOS. I speculated when the initial version of SKAdNetwork was released in 2018 that it would redefine mobile advertising, and that has certainly proved true: SKAdNetwork was built to replace the user-level flow of data (what I call the “events stream”) in the hub-and-spoke model of digital advertising with a coarse, aggregated, campaign-level view of ad engagements and conversions. But for a number of reasons, which I outline here, SKAdNetwork is simply dysfunctional. The framework employs so many restrictions around instrumentation and granularity that the compounded effect is a diluted, delayed, obscured, inexact accounting of app installs that can’t be used for ad campaign optimization. Snap said as much in its Q3 earnings call last October, precipitating a ferocious 25% drop in the company’s stock price.
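To see what this coarsening does to the events stream, consider a toy sketch (my own illustration, not Apple’s implementation, and the revenue-bucketing scheme is hypothetical): user-level install events collapse into campaign-level tallies, with post-install behavior squeezed into SKAdNetwork’s small conversion-value space (a 6-bit integer, 0–63) and no user identifier in the postback at all:

```python
from collections import Counter

# Toy sketch (not Apple's implementation): how SKAdNetwork-style aggregation
# collapses a user-level events stream into coarse campaign-level counts.
# Real postbacks are also delayed and carry no user identifier.

user_level_events = [
    # (user_id, campaign_id, first-day revenue) -- what an MMP once saw
    ("u1", 42, 0.00),
    ("u2", 42, 4.99),
    ("u3", 42, 49.99),
    ("u4", 7, 0.00),
]

def conversion_value(revenue: float) -> int:
    """Squeeze post-install behavior into a 6-bit value (0-63).
    The whole-dollar bucketing here is a hypothetical scheme."""
    return min(63, int(revenue))

# What the ad network actually receives: campaign-level tallies only.
# User IDs are dropped; per-user revenue becomes a coarse bucket.
postbacks = Counter(
    (campaign, conversion_value(revenue))
    for _user, campaign, revenue in user_level_events
)
print(postbacks)
```

Everything an optimizer would want — which user converted, exact revenue, timing — is gone; only bucketed counts per campaign survive, which is why campaign optimization against SKAdNetwork data is so difficult.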

Compare SKAdNetwork with Google’s recently-announced Attribution Reporting API for its Privacy Sandbox for Android. Granted, this feature has not yet been rolled out, so it’s impossible to gauge its effectiveness, but it’s noteworthy that the solution appears designed to functionally protect privacy by obfuscating user-level data while also giving advertisers the critical, event-level data needed to improve ad campaigns. Rather than using brute force to limit the amount of context available to advertisers, Google achieves privacy through multiple attribution windows as well as a technique called differential privacy, which I explain here, that adds noise to event-level data to prevent attribution to individual users. My co-authors and I explore the differences in these approaches in our soon-to-be-published paper, ‘Privacy-Centric Digital Advertising: Implications for Research’.
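The core mechanic of differential privacy fits in a few lines. The sketch below is my own toy example, not Google’s API: a count query has sensitivity 1 (adding or removing one user changes it by at most 1), so adding Laplace noise with scale 1/ε yields an ε-differentially-private count — statistically masking any individual’s presence while leaving the aggregate usable for optimization:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Epsilon-differentially-private count. A count query has
    sensitivity 1, so Laplace noise with scale 1/epsilon suffices."""
    return true_count + laplace_noise(1.0 / epsilon)

# A campaign's true conversion count, reported with noise: close enough
# to the truth to optimize against in aggregate, but no individual
# user's participation can be inferred from the released number.
random.seed(0)
print(private_count(1_000, epsilon=0.5))
```

The privacy parameter ε sets the trade-off the article describes: smaller ε means more noise and stronger privacy; larger ε means more measurement fidelity — a spectrum, not an on/off switch.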

This is an important distinction: Pyrrhic privacy simply curtails context to enable privacy. This reduces the scope of what can be measured and used to assess the performance of advertising campaigns. But it’s not necessary: statistical techniques and tools exist that render privacy-preserving advertising measurement possible.

The debate around personalized advertising has become so polarized, anchored in loaded and preposterous terms like ‘surveillance advertising’, that consumers would be forgiven for believing that no trade-off exists between data collection and utility: that they should not have agency in determining how their data is used because personalization is exclusively an attack vector against their sense of personal identity. My principal complaint about ATT has always been that Apple doesn’t allow for genuine consumer choice with its consent prompt: it’s packed with intimidating language that doesn’t convey the benefits of ads personalization. Almost every free app in the App Store is supported by ads, either through ads-based monetization or ads-based user acquisition, both of which are threatened by a slash-and-burn privacy model like ATT. I am not the only person to take issue with the ATT prompt’s design: notably, the UK’s Competition and Markets Authority concludes in a report on ATT that the word ‘tracking’ has a loaded connotation and is not widely understood by the general public (page 32 of this document), and that the consent prompt uses ‘dark patterns’ to encourage opt-out.

Would ATT opt-in rates be higher if its consent prompt asked users if they would prefer that the ads to which they are exposed are personalized? This is impossible to know because Apple won’t allow developers to ask that question. And the Banning Surveillance Advertising Act doesn’t contain a single instance of the word ‘consent’.

Pyrrhic privacy is unnecessary, it’s wasteful, it’s retrograde, and it’s unfair to consumers. Consumers should be circumspect when the concept of privacy is invoked in the abstract: platforms can weaponize the notion as a competitive cudgel, as I believe Apple’s ATT policy does.

In his seminal work on the necessity of legal safeguards for individual privacy, future Supreme Court justice Louis Brandeis invokes the legal principle of damnum absque injuria — damage or loss inflicted in a way that violates no legal right and so affords no remedy — to draw the distinction between material harms and the ‘spiritual’ harms inflicted by invasions of privacy*. These harms are real, and they exist in the digital realm. A poignant example is the case of a Catholic priest who was publicly identified as a user of the Grindr app, almost certainly through his phone’s IDFA via bid scraping as described in this article. But solving for the type of privacy abuse exemplified in this case can be accomplished with a scalpel and doesn’t require the carpet bomb of Pyrrhic privacy. For one, data emanating from the Grindr app exists in a class of sensitive information that, for instance, both GDPR and Google’s proposed Topics API prevent from being utilized for aggregation or ads targeting.

Pyrrhic privacy is not only profligate and reckless, it’s also paternalistic. Consumers are capable of making informed choices about how their own data should be used to their benefit. Consumers are also capable of comprehending that privacy is not an on/off, binary choice but rather a spectrum that involves trade-offs. Scare tactics and heavy-handed press campaigns are effective at driving rote opt-outs, but they do nothing to inform consumers’ decisions around privacy. Does a set of model coefficients related to eCommerce and in-app purchase history pose a ‘spiritual’ threat to a person’s sense of identity? And is that a choice that consumers should be able to make for themselves? Pyrrhic privacy prevents that question from even being raised.

*Brandeis’ article was written in the context of the proliferation of candid, unflattering photographs in newspapers in the nascent era of tabloid journalism.