Late last month, representatives from Facebook, Twitter, and Google appeared before a Senate Judiciary subcommittee to discuss their platforms’ exploitation as propaganda channels in the run-up to the 2016 presidential election. Much of the proceedings focused on how these companies failed to detect the foreign provenance of Russian-sponsored political ads, which are thought to have reached up to 10MM people on Facebook in the months before and after the election. Additionally, Facebook estimates that up to 146MM people (126MM on Facebook and 20MM on Instagram) may have been exposed to organic posts created by Russia’s Internet Research Agency between 2015 and 2017.
One interesting part of the testimony came in an exchange between Senator Al Franken and Facebook’s General Counsel, Colin Stretch, around Facebook’s provision of certain outrageous and reprehensible interest-based targeting parameters via its self-serve advertising platform. The existence of these targeting parameters was first reported by ProPublica; Facebook claims that the parameters were generated algorithmically from common self-reported “interests” among users. Senator Franken confronted Colin Stretch about the story during the hearing.
To my mind, the crux of the Senate Judiciary subcommittee hearing (and the House Intelligence Committee hearing that all three company representatives attended the next day) can be broken down into two sets of issues: 1) are Facebook, Google, Twitter, et al. responsible for policing the content that is created and distributed on their platforms, and 2) are these companies (and other advertising platforms) similarly responsible for policing how their advertising algorithms are used to target users with ads?
The first set of issues seems far more penetrable and manageable than the second. In many cases, these companies own the real estate that their members use to promote ideas: for instance, the Facebook pages created by Russian operatives in connection with the US election. All of the large social networks enjoy broad rights over the use of users’ content once their Terms of Service agreements have been consented to; it makes no sense that the accompanying responsibilities and liability for that content can simply be abdicated. The largest social networks are not mere conduits of thought; the internet legislated for in the 1996 Communications Decency Act is not the same internet in which they reside (and which they control), and its safe harbor provision (Section 230) shouldn’t be applied to these networks.
But the second set of issues — around the use of algorithms to create targeting parameters based on behaviors and self-reported interests — is more intractable. First, anyone who has ever worked with these types of algorithmic tools understands how opaque they are: for instance, the system that categorizes articles submitted to and crawled by this website relies on a table of more than 1 million rows that do nothing more than map category relevancy coefficients to keyword set identifiers. Looking at this table reveals nothing: it’s a collection of decimals and whole numbers. A more comprehensive analysis could reveal the actual keywords and category connections, but very few of those, aside from the obscene, would be obviously and unquestionably irrelevant, offensive, or improper. “Hate” could very easily be germane following the release of a new phone model or free-to-play game; “kill” could describe an important decision by a company to stop supporting a streaming service.
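To make the opacity concrete, here is a minimal sketch of the kind of relevancy table described above. Every identifier, keyword set, and coefficient here is invented for illustration — the point is that the raw rows are unreadable on their own, and even a resolved mapping is rarely self-evidently improper.

```python
# Hypothetical rows of a relevancy table: (keyword_set_id, category_id,
# relevancy_coefficient). On their own, these reveal nothing about what
# is being categorized -- just decimals and whole numbers.
relevancy_rows = [
    (48213, 907, 0.8134),
    (48213, 112, 0.0421),
    (51077, 907, 0.6650),
]

# Only after joining against keyword-set and category lookups does the
# mapping become legible -- and even then, a word like "kill" can be
# perfectly germane to a category like "streaming services".
keyword_sets = {
    48213: {"kill", "shut down", "discontinue"},
    51077: {"hate", "disappointed", "refund"},
}
categories = {907: "streaming services", 112: "gardening"}

def explain(rows, min_coeff=0.5):
    """Resolve opaque table rows into readable (keywords, category) pairs."""
    for set_id, cat_id, coeff in rows:
        if coeff >= min_coeff:
            yield sorted(keyword_sets[set_id]), categories[cat_id], coeff

for keywords, category, coeff in explain(relevancy_rows):
    print(keywords, "->", category, f"(relevancy {coeff:.2f})")
```

Even this toy version shows the problem: nothing in the resolved output flags “kill” or “hate” as offensive or benign — that judgment lives entirely outside the table.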
But more important than these technical challenges, which can be mitigated with enough engineering support, is the question of whether “ethics” can capably be applied to an advertising algorithm. A recent New York Times article about the work of a handful of volunteer analysts, which partly led to the Congressional hearings involving the social media giants, addresses the relatively straightforward part of the problem: demonstrably, objectively “fake” information shouldn’t be allowed to be created and distributed on these services. On this front, Facebook has acted with impressive earnestness: it will double the size of its staff working on sensitive security and community issues over the course of 2018.
But the unwieldy part of the problem, which was not addressed in the New York Times article, pertains to the kind of preference- and behavior-based targeting that the algorithms utilized by the large social media companies were specifically designed to accommodate.
The example used in the article was about how Facebook surfaced information around conspiracies to an account that had originally “liked” a page that promotes a link between vaccines and autism. But is this a bug rather than a feature? Had the initial “page like” been given for a page about the next Deadpool movie, and the resultant surfaced information been news articles about Justice League or Avengers: Infinity War, the algorithm would be operating as expected. These algorithms are designed to increase engagement on the part of users: can the algorithm be expected to operate against a second dimension that relates to ethics? And if so, whose ethics?
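The topic-agnosticism of such a system can be sketched in a few lines. Everything below — the page names, the co-engagement data — is invented for illustration; the point is that the same code path produces a comic-book echo chamber and a conspiracy echo chamber, because the ranking logic has no notion of which interests are benign.

```python
from collections import Counter

# Hypothetical co-engagement data: users who liked page X also engaged
# with these items. Both entries are invented for illustration.
co_engagement = {
    "deadpool": ["justice-league", "avengers-infinity-war", "justice-league"],
    "vaccines-cause-autism": ["chemtrails", "flat-earth", "chemtrails"],
}

def recommend(liked_page, k=2):
    """Rank candidate items purely by co-engagement frequency.

    Note what is absent: any second dimension that scores the *content*
    of the recommendation. The algorithm optimizes engagement alone.
    """
    counts = Counter(co_engagement.get(liked_page, []))
    return [item for item, _ in counts.most_common(k)]

# Identical logic, very different consequences:
print(recommend("deadpool"))
print(recommend("vaccines-cause-autism"))
```

Adding an ethical constraint would mean scoring candidates against a second objective — and, as the next paragraph asks, someone has to decide whose ethics that objective encodes.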
Again, some cases are clear-cut: a social network shouldn’t allow users to distribute objectively false information. Social networks should also protect their services from being used by foreign actors to distribute political ads related to a US election (which is illegal). But when do free speech laws collide with the expectation that an algorithm will surface information to users that they’ve shown an affinity for, creating an echo chamber for that person? And which echo chambers should these algorithms permit? If comic book movies are OK, should conspiracy theorists be allowed to live in their own delusional digital enclaves — and if not, where is that line drawn? This seems to me to be perhaps the most towering and significant question that the consumer technology industry at large will face in the years to come.