Algorithms and Safe Harbor: What will regulation for Facebook look like?

In 1996, in an attempt to regulate pornographic material on the internet, the US Congress passed the Communications Decency Act. At the time, the internet was a nascent communications system and wasn’t covered by the existing regulations that applied to television and radio. Free speech activists lobbied to have the act’s provisions on indecent or obscene content struck down, and in 1997 those provisions were ruled unconstitutional by the US Supreme Court in Reno v. American Civil Liberties Union.

But while the principal purpose of the CDA was to regulate pornographic content on the internet, the legislation included one provision that would have a defining impact on the internet at that formative stage. Section 230 of the CDA states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This provision essentially frees the provider of an interactive computer service from liability for user-generated content that it publishes but does not itself create. At the time, this allowed communities to form around user-generated content without fear of legal reprisal; Section 230 was inserted into the CDA to future-proof the internet after Republican Congressman Chris Cox read about a libel lawsuit filed against Prodigy over an anonymous comment posted to one of its message boards. Section 230 provided the incipient embers of the internet with the oxygen needed to erupt into an open flame.

Section 230’s limits have been tested recently. Airbnb, for instance, was forced by a US District Court judge to verify the listings submitted to its platform; the city of San Francisco’s Board of Supervisors had demanded that home-sharing platforms only allow hosts who had registered with the city to advertise their listings, with a punishment of up to $1,000 per rented night for unregistered homes. Airbnb argued that such a requirement would violate Section 230, so ultimately the city changed tack: Airbnb could freely publish listings from unregistered hosts, but if it did business with them (i.e., allowed them to rent out their homes), it would be fined. For an overview of other such cases that have shaped the interpretation of Section 230, see this paper.

Facebook recently wielded Section 230 as a shield in Cohen et al. v. Facebook, a class action lawsuit organized by families of American victims of Hamas attacks in Israel. The complaint argues that Facebook’s use of algorithms to promote certain content amplifies its effectiveness, blurring the line between publisher and content producer. From the Cohen complaint:

“Facebook’s algorithms suggest friends based on such factors as friends of friends, group membership, geographic location, event attendance, language, etc. Thus, Facebook purposefully provides users who have expressed an interest in the “Knife Intifada” or stabbing Jews, by joining groups or attending events with those themes, with friend suggestions of other like-minded people, and thereby helps to build large groups of people sharing and cultivating similar bigoted hatreds and murderous inclination.”
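To make the complaint’s point concrete, here is a minimal, hypothetical sketch of content-agnostic friend-suggestion scoring. The feature names and weights are my own illustrative assumptions, not a description of Facebook’s actual system; the point is that nothing in the score depends on what the shared groups or events are about.

```python
# Hypothetical sketch of content-agnostic friend-suggestion scoring.
# Feature names and weights are illustrative assumptions, not Facebook's system.
from dataclasses import dataclass, field

@dataclass
class User:
    id: str
    friends: set = field(default_factory=set)  # ids of current friends
    groups: set = field(default_factory=set)   # ids of joined groups
    events: set = field(default_factory=set)   # ids of attended events
    city: str = ""

def suggestion_score(a: User, b: User) -> float:
    """Score a candidate purely on shared structure: mutual friends, shared
    groups and events, and location. The score never inspects what any of
    those groups or events are actually about."""
    mutual_friends = len(a.friends & b.friends)
    shared_groups = len(a.groups & b.groups)
    shared_events = len(a.events & b.events)
    same_city = 1.0 if a.city and a.city == b.city else 0.0
    return 2.0 * mutual_friends + 1.5 * shared_groups + 1.0 * shared_events + 0.5 * same_city

def suggest_friends(user: User, candidates: list, k: int = 5) -> list:
    """Return the top-k candidates by score."""
    return sorted(candidates, key=lambda c: suggestion_score(user, c), reverse=True)[:k]
```

A group organized around a hobby and a group organized around hatred are indistinguishable to a function like this; it only sees overlap, which is exactly the behavior the complaint seizes on.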

Ultimately, Cohen (as well as a separate but related case, Force v. Facebook, filed in the same court) was dismissed. But the point about algorithmic influence on content in Cohen seems relevant and likely to surface again, especially as Facebook continues to deal with the fallout from both Russian interference in the 2016 Presidential election and the Cambridge Analytica fiasco. Just today, the FTC announced that it would investigate Facebook’s handling of personal data related to the Cambridge Analytica situation; while that investigation concerns privacy rather than free speech, it may make this look like an opportune moment to pursue litigation against the company.

This is because Section 230 may actually be revised in the near future. Inspired by a lawsuit against Backpage.com, an internet classifieds site alleged to have hosted ads tied to child sex trafficking rings, the Stop Enabling Sex Traffickers Act passed the Senate in a 97-to-2 vote last Wednesday and was sent to President Trump, who has endorsed it, to be signed into law. Although the bill applies specifically to sex trafficking, some worry that it will lead to over- or under-moderation of content, as the bill creates an exception in Section 230 for “knowingly” assisting, supporting, or facilitating sex trafficking.

The “knowingly” qualifier makes algorithmic content surfacing problematic. In a piece I published a few weeks ago called What do Data Scientists know?, I wrote:

“This is where the tri-party disconnect becomes apparent. What the data scientists know is how an algorithm functions and how distant actual outputs are from some predicted value. This information is more or less completely detached from the underlying content that the algorithms are functioning on, which is clicks, swipes, views, etc. And for good reason: a feed is meant to be personalized based on the content a user enjoys consuming. From the algorithm’s perspective — and also from the perspective of the algorithm’s authors, the data scientists — there’s no real difference between a vicious cycle and a virtuous cycle with respect to the qualitative nature of the content being consumed by a user. If a user pursues a sinister path, the algorithm merely lit the way.”
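As a rough illustration of that detachment, consider a hypothetical engagement-only feed ranker. The signals and weights below are assumptions of mine, not any company’s actual model, but they show how a ranking can be entirely blind to what the content says.

```python
# Hypothetical engagement-only feed ranking; signals and weights are
# illustrative assumptions, not any real production system.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    clicks: int
    watch_seconds: float
    shares: int

def engagement_score(post: Post) -> float:
    """Rank on behavioral signals alone; the score never sees the post's text."""
    return 1.0 * post.clicks + 0.01 * post.watch_seconds + 3.0 * post.shares

def rank_feed(posts: list) -> list:
    # Whatever a user keeps engaging with, benign or sinister, rises to the
    # top: a vicious cycle and a virtuous cycle look identical to this function.
    return sorted(posts, key=engagement_score, reverse=True)
```

This is the sense in which “the algorithm merely lit the way”: the optimization target is engagement, and the qualitative nature of the content never enters the objective.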

And thus the dilemma (which Eric Goldman, a law professor at Santa Clara University, has called the Moderator’s Dilemma): Facebook’s (and other social media companies’) algorithms don’t actually know much about the content they promote, yet they control their users’ feeds. When an algorithm surfaces content to a person, should Facebook be held liable for it? If so, the nature of algorithmic content serving may fundamentally change.
