
Gonzalez v. Google: A Perspective from Free Press Action

Ben Lennett / Feb 19, 2023

Ben Lennett is a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy.

Ahead of oral arguments in Gonzalez v. Google, LLC at the Supreme Court next week, I sent a short questionnaire to gather the perspectives and legal opinions of organizations that filed briefs with the Court. It asked for their views on the Gonzalez case and on the arguments by the Petitioner and the U.S. government urging the Court to narrow Section 230 protections. I also asked for opinions on the Zeran v. AOL decision that largely shaped U.S. courts' interpretation of Section 230's immunity protections.

Below are responses provided by Matt Wood, Vice President of Policy and General Counsel at Free Press Action. Read Free Press Action's full amicus brief here.

Why does this case matter to the organization you represent?

Free Press Action is a technology advocacy group, but I long ago stopped adding the word "digital" to the phrase "civil rights" when describing what we do. That's because what happens in online spaces is just as real these days as what happens offline. We care deeply about racial justice and social justice. Protecting free expression online is a huge part of advancing our mission.

Section 230 lets platforms serve different communities. It empowers platforms to moderate, while giving them protection against liability in the first instance for what their users say. We have to get this balance right. The law must ensure that people can speak online without intermediaries and gatekeepers policing their every utterance. Yet it also must ensure that platforms take some responsibility — and have some agency — in preventing the spread of disinformation, bullying, and hate so often targeted at people of color, women, LGBTQIA+ individuals, and other impacted communities.

What is your position generally on the merits of the Gonzalez case? Is Google liable if its algorithms recommend terrorist videos to users? Is it liable if it monetizes those same videos with ads?

Our amicus brief in support of neither party studiously avoided taking a position on the merits of the case. The petitioners' claim ultimately turns not just on Section 230 but on the application of the Anti-Terrorism Act (ATA), and we don't claim any expertise in that area of the law.

We suggested that Google might indeed be liable, even with Section 230 firmly in place, if the petitioners can successfully plead the ATA elements. We just don't take a position on whether the petitioners can do that.

What's more, we took a different tack from the petitioners in articulating our own theory for that potential liability. As I describe more fully in answer to your next question: We don't think that the use of algorithms or recommendations is the key to the question. Neither is whether or not the platform is monetizing the content.

All of those behaviors can be indicative of a platform's interactions with — and thus knowledge of — the content it is distributing. But even when a platform is recommending content, there may be Section 230 protections in place for that curating and editing function if the platform is unaware of the unlawful nature of what it's recommending.

Put more simply, we don't want a reading of Section 230 that bans algorithms or makes their use impractical. Yet we also think that a platform could be liable as a distributor of user-generated information, as discussed below, even if it is not recommending, promoting, or monetizing that content but still has knowledge of the harmful character of what it is hosting.

Does Section 230 immunize Google and other social media companies from liability more generally when they recommend third-party content to users?

It may much of the time, but we landed more in the middle and suggested that platforms could indeed be liable for unlawful content they distribute. In our view, this turns on the platforms’ knowledge of the unlawful nature of the third-party content they host, not on whether they recommend or promote it somehow.

We argued that Google's or other platforms' liability would not turn in such a case on their actions to recommend content, algorithmically or otherwise. We explained instead that Section 230(c)(1)'s ban on treating an "interactive computer service" as the "publisher or speaker" of user-generated content should not protect platforms from potentially being held liable as the distributor of such content when they knowingly distribute harmful material.

We cited and (somewhat surprisingly, perhaps) agreed with some of Justice Thomas's analysis. He has written that there was at common law, and still ought to be under Section 230, a distinction between the liability that publishers face and the higher — but not impossibly high — standard for holding a downstream distributor liable for disseminating unlawful content.

I'd never pretend that these are easy distinctions to draw at internet scale. Analogies to pre-internet defamation law, or to traditional publication and distribution channels, are not perfect. But our point in the brief was that courts can and should examine this question, and then draw these distinctions.

Platforms should not be expected to review in advance every one of the potentially hundreds of millions of new pieces of user-generated content they host every day. But neither should they be able to claim Section 230 as an absolute defense if they know about the unlawful nature of the content they're distributing yet fail to act on that knowledge. Letting platforms wash their hands of any responsibility simply by pointing to 230 and saying "we couldn't have known about the harms" — or worse yet, "in fact we did know, but there's nothing you can do about it" — is a bad outcome.

Do you agree with the Zeran v. AOL decision that strongly shaped how courts interpreted Section 230?

Both our amicus brief and the congressional testimony we submitted a year ago explained the value we see in revisiting Zeran on a narrow but far-reaching ground. We'd prefer a legislative fix to a court decision, but since the Court may decide this question here, we opted to file.

Zeran suggested there is no meaningful distinction in the law — online, and possibly offline too — between liability standards for publishing third-party content and those for distributing it. The court found that imposing distributor liability on interactive computer services would treat them as publishers, and viewed distribution as merely a subset of publication. Any distributor liability thus depended on conceiving of distribution as a subsequent publication of the material.

That ruling paved the way for the broadest possible interpretation of the protections in Section 230(c)(1). It's not just Justice Thomas who has questioned that decision to collapse the two formerly separate liability standards together. Preeminent Section 230 scholar Professor Jeff Kosseff has also written that Zeran's reading of Section 230, and its restructuring of so much tort law that pre-dated the statute, was not the only plausible path.

We spent the largest part of our brief filling in the details of another path. We explained where Zeran went wrong in holding that there was no way to distinguish between the traditional "publisher" liability that Section 230 effectively prohibits for third-party content and the higher standard for knowing distribution of that content.

Earlier cases that prompted the passage of Section 230 erred in assuming that if a platform moderates any content, it must engage in intensive review of all content, as a publishing house does. That wouldn't be feasible or desirable in the internet context. We want platforms to be more open to user-generated content, and that openness is a huge benefit.

But as we argued in our brief, Congress didn’t believe “there was something special about distributing content over the Internet that required relieving [] providers of the far more modest obligations of content distributors.” We acknowledged the “real differences between traditional booksellers and online video-hosting platforms,” but explained that “there is nothing in the text or history of Section 230 indicating that those differences led Congress to provide [platforms] carte blanche to knowingly distribute unlawful content that inflicts real, substantial harm on the public.”

If the Court relies on the arguments in your brief to make its decision, how will it impact social media and the internet more broadly?

This is a great but daunting question. A decision relying on our arguments could have a big impact. I obviously think that would be the right result, or else we'd not have written the brief we did, but it's still a cause for anxiety.

Adopting our view would see the Court clarify that platforms cannot be liable as publishers, yet still could potentially be liable under a higher standard for knowingly distributing unlawful or tortious user-generated content. Of course, this does not mean that platforms necessarily would be held liable in any specific instance. Plaintiffs would still need to plead their cases, and show that platforms actually played a role in causing alleged harm. Section 230 and the First Amendment would still protect platforms from many causes of action and theories of liability. But adopting our view would allow more of those suits to proceed instead of getting tossed out of court at the outset.

Free Press Action supported neither side in this case. We still staunchly support Section 230, but people on both sides of this issue may see us as opposed to them. That's not a comfortable position, since we have terrific allies on both sides. But we're convinced that the position we've taken as a bridge between them is a good and necessary one.

For a long time, defending the internet has been cast as requiring a defense of every last inch staked out in Zeran. There is reason for concern about frivolous or malicious lawsuits against platforms and websites that host user content, suits that force them to spend more time and money in court.

Yet standing up for the internet should mean standing up for users too. As we explained in Congress in 2021, Section 230 encourages the open exchange of ideas as well as takedowns of hateful and harmful material. Without those paired protections, we’d risk losing moderation and risk chilling expression too. That risk is especially high for Black and Brown folks, LGBTQIA+ people, immigrants, religious minorities, dissidents, and all ideas targeted for suppression by powerful people willing to sue just to silence statements they don’t like.

But members of those communities can suffer catastrophic harms from platform inaction too. It’s not just in the courtroom that speakers must fear being silenced, harassed, and harmed — it’s in the chat room too, in social media, comment sections, and other interactive apps. Re-balancing Section 230 would provide more potential relief for those kinds of harms.
