Gonzalez v. Google: A Perspective from ADL

Ben Lennett / Feb 19, 2023

Ben Lennett is a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy.

Ahead of oral arguments in Gonzalez v. Google, LLC at the Supreme Court next week, I sent a short questionnaire to gather perspectives and legal opinions from different organizations that filed briefs with the Court. It asked each organization for its perspective on the Gonzalez case and on the arguments by the Petitioner and the U.S. government urging the Court to narrow Section 230 protections. I also asked for opinions on the Zeran v. AOL decision that largely shaped U.S. courts’ interpretation of Section 230’s immunity protections.

Below are responses provided by Steven Freeman, Vice President, Civil Rights at the Anti-Defamation League (ADL). Read ADL’s full amicus brief here.

Why does this case matter to the organization you represent?

ADL’s mission is to stop the defamation of the Jewish people and to secure justice and fair treatment to all. ADL’s Center for Technology and Society (CTS) works across four key areas—policy, research, advocacy, and incident response—to generate advocacy-focused solutions to make digital spaces safer and more equitable. For years, CTS has researched how platforms amplify hate and extremism through their user interfaces, recommendation engines, and algorithms. The past several years have witnessed a major shift in the proliferation of hate online and in the destabilizing and violent consequences of that proliferation, both online and off. Social media platforms are pushing volume and virality in service of their bottom lines, endangering vulnerable communities who are most at risk of online harassment and related offline violence. The spread of hateful and extremist content has systemic effects and also impacts individuals on a daily basis.

What is your position generally on the merits of the Gonzalez case? Is Google liable if its algorithms recommend terrorist videos to users? Is it liable if it monetizes those same videos with ads?

We filed in support of neither party in this case and did not specifically address Google’s liability. In our view, when there is a legitimate claim that platforms played a role in enabling hate crimes, civil rights violations, or acts of terror, victims deserve their day in court. To date, the overly broad interpretation of Section 230 has barred plaintiffs from seeking accountability through the courts. ADL believes the family members of victims murdered in cases like this should have their day in court.

Social media companies like the defendants here should not be automatically immunized from responsibility for targeted recommendations of terrorist content or for allowing organizations like ISIS to use their platforms to promote terrorism and obtain recruits. The provision of Section 230 that provides platforms near-blanket immunity from liability has been interpreted too broadly by the courts and needs to be updated. At the same time, the provision of Section 230 that empowers platforms to moderate hateful and harmful online content is crucial and should not change.

Does Section 230 immunize Google and other social media companies from liability more generally when they recommend third-party content to users?

The short answer is: not automatically. In passing Section 230, Congress intended to immunize internet providers acting simply as a “go-between” for posted communications, in the interest of facilitating the free flow of ideas. Congress also made it possible for platforms to moderate and remove dangerous and offensive content.

The issue in this case is not liability for publishing the content posted by the original author. The focus is instead on whether Google is protected from being held responsible when it affirmatively takes action to recommend certain content to its users, or when it targets or directs certain content to be viewed by others visiting its site. That is no longer merely being a “go-between.” Rather, it is affirmatively taking action that increases the likelihood of a harmful result arising from the content Google decides to recommend or suggest to thousands of its users.

Do you agree with the Zeran v. AOL decision that strongly shaped how courts interpreted Section 230?

The issue we have, as reflected in the answers above, is the broad interpretation of Section 230 that the Fourth Circuit applied in Zeran and the precedent that ruling set.

If the Court relies on the arguments in your brief to make its decision, how will it impact social media and the internet more broadly?

Section 230 has been interpreted far too broadly by the courts, in ways that are not in line with how the law is written and that were never originally intended. The Supreme Court has the opportunity to make this clear. There may be reasons why, at the end of the day, the platforms should not be held liable in these cases. But wielding Section 230 to preclude any inquiry at all into platform involvement and to deny the plaintiffs their day in court is not acceptable; it is mandated neither by the wording of the law nor by its original intent. It is the result of extraordinarily overbroad interpretations by the lower courts.

At the same time, the Court must be careful not to gut Section 230 in a way that creates dangerous and perverse incentives against private platforms moderating online hate, disinformation, and harassment, or that makes it prohibitively expensive for anyone other than Big Tech to exist and flourish.
