The US Supreme Court Could Change the Internet as We Know It

Chantal Joris / Feb 21, 2023

Chantal Joris is a Legal Officer at ARTICLE 19, an international human rights organization which works around the world to protect and promote the right to freedom of expression and information. ARTICLE 19 and the International Justice Clinic at the University of California, Irvine School of Law filed an amicus brief in Gonzalez v. Google.

This week, the Supreme Court will hear arguments in two cases – Gonzalez v. Google and Twitter v. Taamneh – that have the potential to fundamentally change how social media companies recommend and moderate content. Both cases will impact the scope of Section 230 of the Communications Decency Act, which shields internet platforms from liability for content posted on their platforms. Section 230 is viewed by many as one of the foundations of internet freedom.

The cases centre on the question of whether social media companies can be held liable for terrorist content they host and recommend. Should the Supreme Court narrow the protection from liability for illegal user content granted to internet platforms under Section 230, it could jeopardize the free expression rights of billions of internet users in the United States and around the world.

Different legal questions with similar consequences for free speech

Both Gonzalez and Taamneh were brought by plaintiffs who are seeking to hold platforms liable under the Anti-Terrorism Act (ATA), as amended by the Justice Against Sponsors of Terrorism Act, for the killing of their family members in ISIS attacks in Paris and Istanbul. The questions before the Supreme Court in each case are distinct but intimately related.

In Gonzalez, the plaintiffs claim that Google should be liable because YouTube recommended ISIS videos. This argument was rejected by the Ninth Circuit. The Supreme Court will have to decide whether recommendation systems are covered by Section 230 and thus immunized from liability under the ATA. In Taamneh, the question is whether – as the Ninth Circuit accepted – social media companies may be liable for aiding and abetting an act of international terrorism under the ATA if they have generalized awareness that a terrorist organization used their services. Whether Section 230 applies is not part of the Taamneh appeal, since the district court – and thus the Ninth Circuit – did not consider this point. Holding platforms liable under the ATA for their users’ speech would, however, undermine the immunity from liability for user-generated content granted to intermediaries by Section 230.

To assess the potential implications of the Supreme Court’s decisions, it is important to understand in detail the allegations presented by the plaintiffs. In both Gonzalez and Taamneh, the plaintiffs take a similar approach to assigning culpability to the companies. In essence, they argue that the companies should be liable for supporting ISIS by operating communications platforms that are widely available to the general public and that can be accessed and used by terrorist organizations to disseminate propaganda, radicalize individuals and recruit members.

According to the Ninth Circuit, neither set of plaintiffs asserted that the platforms’ algorithms specifically targeted or encouraged ISIS content or treated ISIS-related content differently from any other third-party content. Nor was it alleged that Twitter or YouTube (and its parent company, Google) intended to support ISIS, or that these companies necessarily had actual knowledge of specific pieces of ISIS-related content. Finally, it was undisputed that the platforms had policies prohibiting content that promotes terrorist activity, that they regularly removed ISIS-affiliated accounts and content, and that they had no connection to the terrorist attacks in question.

While the plaintiffs in Gonzalez argue that they seek to hold Google liable for its recommendation systems – that is, for organizing and displaying third-party content – the core of the allegations in both cases is that Google, Twitter, and other companies failed to take sufficiently aggressive measures to prevent ISIS from using their services and to remove its content. Their claim for liability effectively turns on the illegality of the content rather than on the recommendation systems. A decision in favour of the plaintiffs in both Gonzalez and Taamneh would therefore entirely undermine the purpose of Section 230. To avoid liability – including for the type of content-neutral recommendation systems at issue in Gonzalez – intermediaries would effectively be forced to screen all information published on their platforms and make complex decisions, at an enormous scale, about whether their users’ speech is illegal and should thus be removed. Herein lies the threat to free expression.

Shielding platforms from liability for user-generated content protects free speech

The purpose of shielding platforms from liability for the content generated by their users is not to protect the companies that operate these platforms. It is to protect freedom of expression. This has been confirmed by United States courts, the European Court of Justice and the special mandates on freedom of expression alike.

In the face of potential liability, platforms will restrict speech whenever doing so limits their exposure. The risk of over-removal is particularly high when it comes to complex categories of speech such as ‘extremist’ or ‘terrorist’ speech. Importantly, freedom of expression standards also protect speech that is perceived as controversial, shocking or offensive. This is exactly the type of speech most likely to fall victim to over-removal by platforms, even though it enjoys protection under international freedom of expression standards. It should not be for private entities that lack basic guarantees of independence and accountability to decide whether their users’ speech is illegal. Under international human rights standards, such decisions should be made by independent public institutions.

The dangers of over-reliance on automated content moderation tools

The Supreme Court’s potential acceptance of the plaintiffs’ arguments could effectively impose an obligation on platforms to proactively monitor all content generated daily by their hundreds of millions of users worldwide. This would be a massive interference with users’ privacy rights – and it would only be possible with extensive reliance on automated content moderation tools.

Some, including the Ninth Circuit, have suggested that automated tools have advanced to the point where they can accurately identify unlawful speech. This is not the case. Online intermediaries already use tools such as digital hash technology, image recognition, and natural language processing (NLP) to enforce their moderation policies. Human rights observers know all too well that these tools come with serious risks to free speech, given the number of false positives they produce.

Automated tools can be successful in identifying certain narrow types of illegal content, namely child sexual abuse material (CSAM). In the case of CSAM, there is international consensus on its illegality, which holds regardless of context and tone. There are thus clear parameters for which content should be flagged and removed.
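To illustrate why this narrow category lends itself to automation, consider a minimal, purely illustrative sketch in Python of matching uploads against a database of fingerprints of previously verified material; the function names and data here are hypothetical and not drawn from any platform’s actual systems. Deployed systems rely on perceptual hashes that tolerate re-encoding and cropping rather than the exact cryptographic hash used below, but the basic logic is the same: the decision reduces to a lookup and requires no judgment about context, tone or intent.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database of fingerprints of previously verified material.
known_fingerprints = {fingerprint(b"<bytes of a previously verified file>")}

def matches_known_material(upload: bytes) -> bool:
    # The decision is a set lookup: no assessment of context, tone, or
    # intent is needed, which is why this narrow category is tractable
    # for automation while contested speech categories are not.
    return fingerprint(upload) in known_fingerprints

print(matches_known_material(b"<bytes of a previously verified file>"))  # True
print(matches_known_material(b"<some unrelated upload>"))                # False
```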

But assessing complex categories of speech such as “hate speech” or “extremist” and “terrorist” content is a different matter. Definitions of extremist or terrorist content are notoriously vague or nonexistent. The identification and correct assessment of such content require a level of sensitivity to context and nuance that can only be provided by human reviewers with the necessary language skills and an understanding of the political, social, historical and cultural context.

Using automated tools can result in unintended censorship, as they are often unable to grasp the tonal and contextual elements of speech or to identify whether speech is satire or published for reporting purposes. For instance, the Syrian Archive, a project that aims to preserve evidence of human rights violations and other crimes committed during the conflict in Syria, has documented that videos of war crimes were removed from YouTube, leading to the sometimes permanent loss of what might be crucial evidence of atrocities committed on the ground. In May 2021, the ACLU reported that blunt automated detection systems led to the deactivation of dozens of Facebook accounts of Tunisian, Syrian, and Palestinian activists and journalists covering human rights abuses and civilian airstrikes.

False positives are particularly prevalent when it comes to speech in languages other than English. Indeed, most automated content moderation tools display lower accuracy when applied to speech from other languages and cultural contexts. The presence of bias in automated tools also risks further marginalizing and censoring groups that already face disproportionate prejudice and discrimination.

Lack of transparency and accountability further complicates the issue

The lack of transparency and accountability that comes with the application of automated tools raises further concerns. It is unclear how datasets are compiled, how accurate automated content moderation tools are, and how much content they remove both correctly and incorrectly. Transparency about the effectiveness and accuracy of these tools is, however, fundamental to assessing whether their deployment is necessary and proportionate.

In particular, users need to understand the reasoning behind restrictions on their speech to enable accountability and redress. While some platforms have started providing more transparency about their enforcement practices and greater access to appeal mechanisms, they often fail to provide sufficient information to enable users to exercise their rights, including information on exactly which policies were breached or whether the decision to remove content was made by an automated process. This limits the effectiveness of appeals processes.

In sum, the use of automated content moderation tools already endangers protected speech. The wrong outcome in Gonzalez and Taamneh would only worsen the problem: platforms would be encouraged to rely even more heavily on these tools, suppressing all sorts of lawful speech in order to limit their exposure to an over-expansive liability regime.

Accountability: Yes, but how?

There is no doubt that some of the largest social media companies operate on business models that threaten freedom of expression and other human rights, that are not conducive to healthy public debate, and that often silence minority voices. Regulatory solutions that focus on greater transparency and user choice with respect to recommender algorithms, a ban on targeting techniques that use observed and inferred personal data, and mandatory human rights due diligence are therefore overdue in many jurisdictions. Social media companies could and should do more to prevent their platforms from becoming breeding grounds for radicalization. Nor should we categorically exclude liability for any type of recommendation system – each case requires careful consideration on its merits. But if platforms are found liable for content-neutral algorithms that recommend third-party speech, this could dramatically curtail freedom of expression online.

This does not take away from the fact that litigation is becoming an increasingly popular tool for seeking accountability from social media companies for the impacts of their services, including their role in fueling tensions and exacerbating violence during conflicts. Lawsuits have been brought in the US and the UK over Facebook’s role in the genocide of Rohingya Muslims in Myanmar, and more recently in Kenya over the company’s alleged role in promoting speech that led to ethnic violence and killings in Ethiopia. Access to remedy for affected communities is essential when companies fail to respect human rights. Discussions are set to continue on how victims can seek redress in a manner that effectively addresses platforms’ harmful business models without asking them to exercise censorious powers over our speech.

The Supreme Court hears arguments in Gonzalez v. Google on 21 February 2023 and in Twitter v. Taamneh on 22 February 2023.
