Cambridge Analytica was a false panic. It’s time to move on.

In March 2018, the New York Times and the Guardian jointly introduced the world to a hitherto obscure advertising analytics consultancy called Cambridge Analytica. Cambridge Analytica had worked with both the 2016 Trump presidential campaign and the Leave.EU campaign that advocated for the United Kingdom’s exit from the European Union (“Brexit”) ahead of that country’s 2016 referendum. Cambridge Analytica developed a model to build what are known as “psychographic profiles” of social media users, which it claimed could be used to target advertising to audiences based on, among other things, receptiveness to specific political messaging. The scandal unearthed in March 2018 concerned Cambridge Analytica’s use of Facebook data that it had sourced without permission from tens of millions of users (the original reporting cites 50MM, but Facebook later clarified that the number was up to 87MM).

I explain how Cambridge Analytica sourced this data in a piece that I wrote shortly after the scandal was exposed, titled Parsing fact from fiction in the Cambridge Analytica fiasco. But to recap briefly: Cambridge Analytica acquired the data set of Facebook user profiles from a researcher at Cambridge University who had harvested it through a Facebook quiz app. When users consented to exposing their profile data to the quiz app, they also unwittingly granted access to the profile data of all of their friends, gleaned through the Facebook API without the need for any additional approval. Only 270,000 users ultimately interacted with the quiz, but through this one degree of separation, the scope of harvested profiles ballooned well beyond that group.
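The amplification at work here is easy to model. The sketch below (a toy social graph in Python; all sizes are illustrative assumptions, not Facebook’s real figures) shows how an API that grants an app access to each consenting user’s friends inflates the harvested set far beyond the group that actually opted in:

```python
import random

def harvested_profiles(app_users, friend_lists):
    """Return the set of profile IDs exposed to the app: the consenting
    users themselves plus all of their friends, which the pre-2015
    Graph API behavior described above exposed without any further approval."""
    exposed = set(app_users)
    for user in app_users:
        exposed.update(friend_lists.get(user, ()))
    return exposed

# Toy graph: these numbers are illustrative only. In the real episode,
# ~270,000 quiz takers exposed data from tens of millions of profiles.
random.seed(1)
population = range(20_000)
friend_lists = {u: random.sample(population, 100) for u in population}

app_users = random.sample(population, 50)  # only 50 people take the quiz
reach = harvested_profiles(app_users, friend_lists)

print(f"{len(app_users)} consenting users exposed {len(reach)} profiles")
```

Even in this small simulation, the exposed set is roughly two orders of magnitude larger than the consenting group, mirroring the dynamic that carried 270,000 quiz takers to tens of millions of harvested profiles.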

The fact that Steve Bannon, the former executive chairman of Breitbart News and CEO of Trump’s 2016 presidential campaign, sat on the board of Cambridge Analytica cast a sinister pall over any interpretation of the company’s work. The whistleblower who brought Cambridge Analytica to the world’s attention, former employee Christopher Wylie, certainly sold this hair-raising and scandalous narrative to the press in a deliberate and calculated manner. “I made Steve Bannon’s psychological warfare tool” is the title of a Guardian piece that profiles the programmer and in which he declares that he created “Steve Bannon’s psychological warfare mindfuck tool.”

Ultimately, the “Cambridge Analytica scandal” relates to two separate narrative threads that are often conflated. The first is that Facebook’s Graph API allowed apps to access the profile data of users’ friends without consent. A group of researchers from KU Leuven published a paper in 2017 that revealed the privacy deficiencies of Facebook’s Graph API, before Cambridge Analytica’s transgressions were made public. From the paper:

In other words, consent may be provided by the actual user of the application and not by the data subject, whose data are going to be processed in the end. On Facebook Apps settings, users allow by default their data to be used by the applications used by their friends without their consent under the title “Apps others use” unless they manually unclick the relevant boxes. One could claim that consent has been theoretically given; however, it should not be considered as valid as it is not informed.

Facebook announced sweeping changes to its Graph API and privacy settings following the Cambridge Analytica scandal, but the damage was done. Facebook took inadequate care with user privacy, and its insouciance ultimately resulted in the company paying a $5BN fine to the FTC to settle a probe that the regulator had opened in response to the Cambridge Analytica episode. The Tow Center at Columbia University describes the method by which Cambridge Analytica sourced Facebook user data as a “breach,” despite the fact that it was consistent with Facebook’s Graph API terms at the time. Others disagree that the access constitutes a breach, although Facebook did request written confirmation from Cambridge Analytica that it had destroyed the data harvested by the quiz app in 2015, when it learned of the transfer. The specific term isn’t important; whatever semantics apply, the episode is no less ignominious.

What is generally conjured when the Cambridge Analytica scandal is invoked is the second narrative thread: that a shadowy political consultancy used psychological sophistry to get Donald Trump elected and to engineer Brexit. This narrative is complete fiction. Cambridge Analytica never sold anything but snake oil — and in the case of the Brexit campaign, the company gave it away on a pro bono basis. The notion that “psychographic profiles” could be used to precision-target people with political messaging was widely met with ridicule and derision by marketers, psychologists, and data scientists alike.

But ultimately, the proof is in the pudding. After a three-year investigation, the UK’s Information Commissioner’s Office determined that Cambridge Analytica’s black magic was wholly banal: that the company’s marketing had dramatically overstated its actual targeting capabilities. From the report:

On examination, the methods that SCL were using were, in the main, well recognised processes using commonly available technology…We understand this procedure is well established within the wider data science community, and in our view does not show any proprietary technology, or processes, within SCL’s work…the investigation identified there was a degree of scepticism within SCL as to the accuracy or reliability of the processing being undertaken. There appeared to be concern internally about the external messaging when set against the reality of their processing.

While no such official repudiation of Cambridge Analytica’s marketing assertions exists from the US government, it’s important to remember that Cambridge Analytica worked with Ted Cruz’s 2016 campaign during the Republican primary race — which Donald Trump won, assisted by very little professional apparatus or marketing technology at the time. As Martin Robbins put it in his blog, Little Atoms: “the story of the Republican primaries is actually that Cambridge Analytica’s flashy data science team got beaten by a dude with a thousand-dollar website.”

This interpretation is widely shared. In On Digital Disinformation and Democratic Myths, political communications scholar David Karpf writes:

An investigation from Nature magazine documented that the evidence of Cambridge Analytica’s independent impact on voter behavior is basically nonexistent (Gibney 2018). There is no evidence that psychographic targeting actually works at the scale of the American electorate, and there is also no evidence that Cambridge Analytica in fact deployed psychographic models while working for the Trump campaign. The company clearly broke Facebook’s terms of service in acquiring its massive Facebook dataset. But it is not clear that the massive dataset made much of a difference.

Cambridge Analytica oversold the potency and efficacy of its services. Nonetheless, the scandal is mostly invoked when the danger that Facebook and other social media platforms pose to democracy is being articulated: that well-heeled political actors can manipulate elections and subvert voter intentions by hijacking the social media algorithms that shape our sensibilities. The Cambridge Analytica scandal is fundamentally a story about non-consented data access and misuse, but it is deployed as a catalyst for moral panic every time a social media platform is criticized for anything.

Tristan Harris, appearing on Real Time with Bill Maher to discuss the Wall Street Journal’s “Facebook Files” reporting, called those revelations a “Cambridge Analytica-sized moment.” Yes, the Cambridge Analytica story was colossal, but only because it was misunderstood. Cambridge Analytica was a cleverly-marketed yet thoroughly mundane advertising agency that worked with populist political campaigns by dint of its financing. Using the case of Cambridge Analytica to indict social media for anything beyond being a product class that is susceptible to data misappropriation is either misguided or disingenuous.

Photo by Nick Fewings on Unsplash