In late October, representatives from Facebook, Google, and Twitter testified before the Senate Judiciary Committee about Russian meddling in the 2016 US presidential election. The day did not go well: everyone present acknowledged that an agency within the Russian government had influenced the US election, but neither side presented a comprehensive, coherent line of thinking about the problem.
Of course, the problem is obvious and severe. In the months before and after the 2016 US presidential election, Russian state-sponsored political ads reached an estimated 10MM people on Facebook, and Facebook has estimated that, between 2015 and 2017, up to 146MM accounts may have been exposed to organic posts created by Russia’s Internet Research Agency.
Likewise, Twitter admitted in a letter to the Senate Judiciary Committee in January that, in the weeks leading up to the 2016 election, about 50,000 “bot” accounts linked to Russia had retweeted Donald Trump almost half a million times and Wikileaks about 200,000 times. In the aftermath of the leak of John Podesta’s emails, these bots were responsible for about 5% of all tweets containing the #PodestaEmails hashtag, and the company recently revealed that about 1.4MM accounts interacted with tweet content from bots of Russian origin during the 2016 US presidential election cycle.
Clearly these platforms hadn’t been safeguarded against exploitation; the ability of Russian governmental agencies to reach staggering numbers of Americans with modern agitprop exposes not only weaknesses in these companies’ defenses but also a collective lack of imagination around how social media could be subverted in the first place. But the public discourse around safeguarding social media shifted quickly from an acknowledgement of its vulnerabilities to an outcry for stringent regulation. Given the value and power of social media, this seems hasty and misguided.
At a previous employer, I assembled a team to build an algorithmic advertising system that resembled the kind the biggest social media networks use to prioritize and serve content. Via their clicks and swipes and taps, our players told us what they liked about our mobile games, and we used that data to give them more of exactly that.
It’s easy to see how a system like this could serve nefarious purposes. What if my team had evaluated users not on the way they obliterated their enemies in our games, but rather on their receptiveness to politically charged conspiracies: that Obama was born outside of the United States, or that the Clintons are murderers, or that climate change is a hoax engineered by the Chinese government?
The problem is that both of these use cases are legitimate applications of this type of content-surfacing algorithm: the harmless use of data to steer users toward games they’ll likely enjoy, and the sinister animation of bias, fear, and prejudice. It is this flexibility of purpose that confounded the Senate Judiciary Committee: a disconnect between a group of venerable lawmakers wary of technology and a group of enterprises operating as if in a technocracy.
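To make that dual-use point concrete, here is a minimal, hypothetical sketch of the kind of engagement-driven scoring loop described above. The function names, signal weights, and topics are illustrative assumptions of mine, not a description of any platform’s (or my former employer’s) actual system.

```python
# A minimal, hypothetical sketch of an engagement-driven content scorer.
# Signal names and weights are illustrative assumptions only.
from collections import defaultdict

# How strongly each kind of interaction counts as a signal of interest.
SIGNAL_WEIGHTS = {"click": 1.0, "swipe": 0.5, "tap": 0.5, "share": 2.0}


def build_affinity(interactions):
    """Aggregate a user's past interactions into per-topic affinity scores."""
    affinity = defaultdict(float)
    for topic, signal in interactions:
        affinity[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return affinity


def rank_content(affinity, candidates):
    """Order candidate items by how well their topics match the user's affinities."""
    return sorted(candidates, key=lambda item: affinity.get(item["topic"], 0.0), reverse=True)


if __name__ == "__main__":
    # The same loop works whether the "topics" are game genres...
    game_history = [("tower-defense", "click"), ("tower-defense", "share"), ("puzzle", "tap")]
    games = [{"title": "Puzzle Pop", "topic": "puzzle"}, {"title": "Siege!", "topic": "tower-defense"}]
    print(rank_content(build_affinity(game_history), games))

    # ...or politically charged conspiracies. The scoring loop is indifferent.
    outrage_history = [("birther", "click"), ("birther", "share"), ("climate-hoax", "tap")]
    posts = [{"title": "Hoax exposed?", "topic": "climate-hoax"}, {"title": "Where was he born?", "topic": "birther"}]
    print(rank_content(build_affinity(outrage_history), posts))
```

The code itself contains nothing malicious or benign; the intent lives entirely in what gets fed into it, which is exactly the disconnect the hearing never resolved.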
Machine learning and the use of algorithms to surface content have made the technology products we use more personal, more relevant, and more engaging. That they can be abused is not an indictment of the algorithms’ general utility; we are in the early stages of exploring human-machine design principles, and the exploitability of these algorithms is a growing pain, not a structural flaw that necessitates cumbersome regulation.
We know what reactionary, self-defeating privacy regulation looks like. The EU’s General Data Protection Regulation (GDPR), a sweeping set of digital privacy rules that takes effect this coming May, is overbearing, imperious, and simultaneously anti-consumer and anti-commerce: while it does introduce some commendable concepts, the GDPR levies punishing, potentially existential sanctions against companies that don’t comply with its vague and wide-ranging demands.
The GDPR is a governmental overreach that platform operators and their users alike should regard as a worst-case scenario: a well-intentioned yet misguided and retrograde set of regulations that will diminish speech and mute the voices that social media has given to the voiceless the world over. The GDPR has been likened to mandatory seat belt laws: a wholly beneficial safety measure that legislates against bad behavior. But the GDPR will degrade the utility of apps and websites for EU users; it’s more akin to forcing car manufacturers to cap their cars’ top speed at 25mph to prevent fatalities in high-speed crashes.
To deter the looming spectre of GDPR-style regulation in the US, the biggest social media platforms must educate users about how their data is being used. Fred Wilson, the distinguished New York-based venture capitalist, has proposed a “Why?” button on social media: when clicked, it would explain how past likes, swipes, searches, and clicks led the user to the content they’re currently viewing. This kind of transparency would give users a greater sense of agency over their experiences and also help them steer away from the darkest frontiers of social media.
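As a rough illustration of what a “Why?” button might surface, the sketch below simply keeps the signals that produced a recommendation alongside the recommendation itself, so an explanation can be rendered on demand. The record structure and field names are hypothetical; this is not a proposal from Wilson or a description of any platform’s design.

```python
# A hypothetical sketch of the bookkeeping a "Why?" button would need:
# store the signals that produced a recommendation next to the item,
# so the explanation can be shown on demand. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    item: str
    score: float
    # The past likes, swipes, searches, and clicks that contributed to the score.
    because_of: list = field(default_factory=list)

    def explain(self):
        reasons = ", ".join(self.because_of) or "no recorded activity"
        return f"You're seeing '{self.item}' because of: {reasons}."


if __name__ == "__main__":
    rec = Recommendation(
        item="Siege! (tower-defense)",
        score=3.0,
        because_of=["clicked the Siege! trailer", "shared a tower-defense post", "searched 'strategy games'"],
    )
    print(rec.explain())  # What a user might see after pressing "Why?"
```

The point is that the data needed to answer “why am I seeing this?” already flows through these systems; exposing it is a product decision, not a technical moonshot.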
If Facebook, Google, Twitter, and their peers can’t prove that they are making appreciable progress toward strengthening their platforms against abuse, or if, worse yet, another election is compromised by a concerted, state-sponsored disinformation campaign, the public demand for regulation may become irresistible. The social media giants have an opportunity to avoid a GDPR-esque future in the United States, but they must take very seriously their responsibilities as the stewards of their content consumption algorithms. So far, these algorithms appear abandoned, and the wicked custodians are circling.