
Gonzalez v. Google: A Perspective from the Lawyers' Committee for Civil Rights Under Law

Ben Lennett / Feb 20, 2023

Ben Lennett is a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy.

Ahead of oral arguments in Gonzalez v. Google, LLC at the Supreme Court next week, I sent a short questionnaire to gather the perspectives and legal opinions of different organizations that filed briefs with the Court. It asked organizations for their perspective on the Gonzalez case and on the arguments by the Petitioner and the U.S. government urging the Court to narrow Section 230 protections. I also asked for opinions on the Zeran v. AOL decision that largely shaped the U.S. courts’ interpretation of Section 230’s immunity protections.

Below are responses provided by David Brody, Managing Attorney for the Lawyers' Committee for Civil Rights Under Law. Read the full Lawyers' Committee amicus brief here.

Why does this case matter to the organization you represent?

Section 230 affects civil rights in two directions. On the one hand, if immunity for platforms is too expansive, Section 230 can impair the ability to enforce civil rights statutes when violations occur online--such as algorithms that discriminate in credit or employment opportunities, or online voter intimidation or harassment. On the other hand, if immunity under Section 230 is undercut, it would likely lead to greater censorship of people of color and other marginalized groups. The internet is essential to activists organizing racial justice movements and to creators of color circumventing traditional gatekeepers.

What is your position generally on the merits of the Gonzalez case? Is Google liable if its algorithms recommend terrorist videos to users? Is it liable if it monetizes those same videos with ads?

We do not have a position on the merits of this specific case. The key question is: Would Google have done something illegal if the videos it was recommending were something innocuous, like cat videos? If yes, then Section 230 does not immunize it. Or is the liability premised on the harmfulness of the underlying content? If yes, Section 230 does immunize it. I am not sure that monetizing the videos with ads makes a difference; what matters is whether the claim seeks to hold Google liable for its own actions or for the underlying content it published. As discussed in the next answer, this does not mean a platform always gets a blank check to recommend content however it wants. There could be illegal ways of recommending content--such as discrimination in targeting or delivery.

Does Section 230 immunize Google and other social media companies from liability more generally when they recommend third-party content to users?

Section 230 says that a platform is immune when a plaintiff's claim is based on the content of the information being published. But a platform is not immune when the claim is based on the conduct or content of the platform itself. Consistent with the Sixth Circuit's decision in Jones v. Dirty World Entertainment, we think that a platform loses its Section 230 immunity when it is "responsible for what makes the displayed content allegedly unlawful." Think of it like a wrapped present. The platform has immunity for whatever is inside the box, but can be liable for the wrapping paper it puts around the box. If the wrapping paper is poisonous, they're liable. If the present is poisonous, they're not. This means that if there is something unlawful about the *recommendation* itself, irrespective of the content, then the platform can be liable--such as if the platform delivers housing ads on the basis of race even if those ads themselves are non-discriminatory.

Section 230 applies equally regardless of whether a human or an algorithm is the publisher or speaker. So imagine this hypothetical: A realtor sends an email to a client saying, "Hey, I think you would like these houses because you're Black, and this is a neighborhood where Black people would like to live." And the realtor includes some links to listings. The realtor gets Section 230 immunity for the content of the links themselves. But no one would say that the email content--which the realtor wrote themselves--should be immune. It's a violation of the Fair Housing Act and other anti-discrimination laws. However, what the human realtor is doing here is no different from what an algorithmic recommendation system does. The complexity changes, but the legal analysis does not. The law makes no distinction between manual and automated recommendations. So if there's no immunity when a human says it, then there should not be immunity if an algorithm functionally does the same thing.

If SCOTUS says that recommendations of third-party content always receive immunity, then almost anything that ever happens online could get immunity. And in particular, civil rights laws would almost never apply to online activity. A bad actor could say and do whatever they want, so long as they also link to some third-party content in their post or email and couch their language as a recommendation, and get immunity for their own unlawful conduct. And platforms could get immunity for engaging in online segregation and discrimination.

Do you agree with the Zeran v. AOL decision that strongly shaped how courts interpreted Section 230?

We don't have a position on Zeran. I don't think it is as important as many later cases, such as Roommates, Jones v. Dirty World, HomeAway v. Santa Monica, Lemmon v. Snap, and Erie Insurance v. Amazon. Most recently, there was a major decision from the Fourth Circuit that has, I think, the best and clearest explanation of how Section 230 is supposed to work: Henderson v. The Source for Public Data.

In Henderson, a plaintiff claimed that the defendant was a credit reporting agency under the Fair Credit Reporting Act, that it was failing to comply with FCRA's requirements, and that the plaintiff lost job opportunities due to the defendant's inaccurate background check reports. The defendant asserted that because it operated online and republished third-party information (public records), it had Section 230 immunity. The Court disagreed and said Section 230 did not apply. First, for some claims, the Court said that Section 230 did not apply because the plaintiff did not seek to treat the defendant as a publisher; those claims concerned the defendant's FCRA obligations, like providing access and correction rights to consumers, which are not publishing activities. Second, the Court said that Section 230 did not apply to other claims because, even though they did seek to treat the defendant as a publisher, the defendant materially contributed to the illegality. In this case, it allegedly edited the background reports and made them inaccurate.

If the court relies on the arguments in your brief to make its decision, how will it impact social media and the internet more broadly?

We argue that the Court should take a balanced approach that neither broadens nor decimates Section 230 immunity. The Court should adopt the three-part test widely used by the lower courts: (1) Is the defendant a provider or user of an interactive computer service? (2) Does the claim seek to treat the defendant as the publisher or speaker of someone else's content? (3) If yes, did the defendant materially contribute to what makes the content unlawful? In essence, is the defendant "responsible for what makes the displayed content allegedly unlawful?" (Jones v. Dirty World Entertainment)

But we are also clear that the Court needs to apply this test strictly and carefully, because Section 230 should only extend as far as Congress intended, and no further. So, for example, algorithms that unlawfully discriminate in mortgage approvals, job applicant filtering, or facial recognition matching should not get immunity, because those claims would not turn on publishing, even if they may involve third-party content. Similarly, if a platform takes benign content and transforms it into something illegal (like delivering housing ads based on race), the platform has materially contributed to the illegality. Finally, we think the Court needs to say that it does not matter if an algorithm is facially "neutral"--there is no "neutral tools" test in the statutory text. No algorithm is truly neutral; its outcomes are imbued with values based on its design and training data. If an algorithm is procedurally indiscriminate but produces substantively discriminatory results, the platform should be accountable for that.

If the Court adopts our view, there may not be a massive change from the status quo. That is OK. We should not treat Section 230 as the central nexus for fixing every problem on the internet. But we want to make sure it does not block other solutions or exacerbate problems further. Our position would clarify, however, that civil rights laws apply online just as they do offline, and that Section 230 immunity is robust but not absolute.
