
Gonzalez v. Google: A Perspective from the Center for Democracy and Technology

Ben Lennett / Feb 20, 2023

Ben Lennett is a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy.

Ahead of oral arguments in Gonzalez v. Google, LLC at the Supreme Court next week, I sent a short questionnaire to gather the perspectives and legal opinions of different organizations that filed briefs with the Court. It asked each organization for its perspective on the Gonzalez case and on the arguments by the Petitioners and the U.S. government urging the Court to narrow Section 230 protections. I also asked for opinions on the Zeran v. AOL decision that largely shaped U.S. courts’ interpretation of Section 230’s immunity protections.

Below are responses provided by Caitlin Vogus, Deputy Director of the Free Expression Project at the Center for Democracy and Technology (CDT). Read the Center’s full amicus brief here.

Why does this case matter to the organization you represent?

Gonzalez will be the first time the Supreme Court interprets Section 230, a vital law for protecting speech on the Internet. For decades, Section 230 has allowed users to speak freely online by shielding online service providers from liability for what their users say. Without it, many providers would over-remove users’ speech for fear of liability, making people less able to speak and receive information online. Section 230 also encourages providers to voluntarily moderate content by protecting them from liability for their decisions to leave up or remove content. Petitioners in Gonzalez threaten to erode these protections by arguing that Section 230 does not apply to claims based on the recommendation of content, which they contend are distinct from claims based on the display of content. This distinction is technologically arbitrary and unworkable. If the Court adopts Petitioners’ interpretation of the law, it will result in the suppression of online speech.

What is your position generally on the merits of the Gonzalez case? Is Google liable if its algorithms recommend terrorist videos to users? Is it liable if it monetizes those same videos with ads?

Section 230 should shield Google from liability for Petitioners’ claims. Every provider must choose what content to display and how to display it. A news aggregator may display links to recent news stories before older ones. A search engine may customize search results to show those it believes a user will find most relevant first. An online bookstore may display books on its homepage by the same author as books a user has previously bought.

Petitioners agree that Section 230 protects some display choices. But they argue that YouTube’s video recommendations aren’t shielded because they don’t provide users with information they are explicitly seeking (like a search engine) or because they are “targeted.” The Court should reject both arguments.

There is no technical basis to distinguish the ranking of search results from other forms of recommendation. Many search engines pick and order search results based on more than just the current query, using information like what a user previously clicked on or the user’s current location. Conversely, social media often ranks content based on a variety of explicit user signals, including “likes” and decisions to follow certain topics.

There is also no standard technical definition of a “targeted” recommendation. Providers use a variety of signals, alone and in combination, to choose what content to display and how to display it, ranging from who a user has “friended” and what content they’ve previously viewed to what device they’re using. Because it is not clear which signals and practices result in “targeted” recommendations, a holding based on “targeting” would leave providers uncertain about what Section 230 protects and cause them to restrict user speech.

Does Section 230 immunize Google and other social media companies from liability more generally when they recommend third-party content to users?

In general, Section 230(c)(1) bars most claims against providers based on their recommendation of third-party content to users. At their heart, these claims are about the publication of user speech, and, as a result, they fall squarely within Section 230(c)(1)’s liability shield. As described above, they are about providers’ choices about what content to publish and how to publish it. Because there is so much user-generated content posted online, it would be impossible for providers to publish it without putting it in some order to display to users. That ordering inherently results in the provider’s “recommendations” as to what content a user should view.

At the same time, Section 230 does not give providers blanket immunity. Some amici, like the ACLU, argued that Section 230 does not shield providers from claims based on their own discriminatory targeting of ads for housing or employment, for example. In those cases, the claims are based on the provider’s own choice to discriminatorily target ads (and not the result of targeting criteria provided by the advertiser). The claim is not based on the content of the ad or publication of third-party content, but rather on the provider’s own conduct in targeting the ad in a discriminatory way. As CDT and other amici explained, Section 230(c)(1) also does not protect providers from claims based on content that they at least in part created or developed. It can be difficult to determine when a provider’s involvement in third-party content is sufficiently material such that Section 230(c)(1) does not apply, and that issue is not squarely presented in Gonzalez.

In short, even if Section 230 applies to claims like the Petitioners’ (as CDT argues it should), it may not apply in other cases raising other kinds of claims about the recommendation of third-party content. The exact delineation of when Section 230 shields providers from liability for the recommendation of third-party content will depend on many factors. This issue should be further developed in lower courts.

Do you agree with the Zeran v. AOL decision that strongly shaped how courts interpreted Section 230?

Yes. Zeran correctly recognized Section 230’s dual purposes: to encourage free expression and voluntary content moderation online.

As explained in Zeran, imposing tort liability on providers would chill users’ speech, because providers would respond by severely restricting user speech. Section 230 ensures providers don’t prohibit or aggressively remove user speech out of fear of liability.

As interpreted by Zeran, Section 230 also strongly protects voluntary content moderation. Zeran addressed the distinction between “publishers” and “distributors” that existed at common law. Before Section 230, providers who edited the third-party content they published were at greater risk of liability than those who did not. Section 230 changed the common law rule to encourage providers to voluntarily moderate content by prohibiting claims against providers based on their publication of speech, which includes the distribution of speech. Zeran also interpreted Section 230(c)(1) to shield providers from lawsuits based on providers’ decisions both to leave content up and to take it down, i.e., to moderate content.

Zeran’s holding that Section 230 bars distributor liability also protects user speech. At common law, a distributor could be held liable for distributing speech it had been notified was illegal. Online, a notice-and-liability regime would mean that many providers would remove user speech upon notification that it was illegal, whether it was actually illegal or not. For example, a provider may remove a user’s truthful #MeToo post about being sexually harassed if her harasser falsely tells the provider that the post was defamatory. Section 230, as interpreted by Zeran, removes this threat of a “heckler’s veto.”

Some criticize Section 230’s strong protection against intermediary liability, as interpreted by Zeran and other courts, as leaving victims of online harms without legal recourse. However, in addition to the limits on immunity discussed earlier and the ability to sue the actual originator of the content, Section 230(e) provides important carve-outs, such as preserving liability for violations of federal criminal law. In addition, providers’ business practices are governed by many areas of law unaffected by Section 230. Laws that neither directly restrict speech nor create incentives for providers to do so—such as comprehensive privacy laws and antitrust laws—are better suited to address specific concerns about providers’ actions.

If the court relies on the arguments in your brief to make its decision, how will it impact social media and the internet more broadly?

CDT urges the Court to consider users’ online free expression and access to information. Section 230 has allowed a diversity of online services for user speech to develop and thrive and ensured that people can speak freely online. The Court should not rewrite Section 230 and radically transform online speech governance.

Strong intermediary liability protections are critical for people from marginalized groups, who traditionally lacked forums for making their voices heard. Without Section 230, many online spaces like chatrooms, social networking sites, and blogging platforms would never have developed. People of color, LGBTQ+ people, women, people with disabilities, and others use these services to speak to each other and the world. Maintaining Section 230’s strong protections is vital to these services’ continued existence and the development of competitors.

Strong intermediary liability protections also decrease the risk that providers will remove users’ content for fear of liability. Again, this is especially important for people in marginalized groups. For example, without Section 230, Glassdoor may be more likely to remove posts that accuse an employer of discrimination. Meta may remove posts like Darnella Frazier’s video of police officers murdering George Floyd and other users’ discussion of his murder.

Shielding providers from lawsuits over their recommendations is necessary to protect free expression. Algorithmic recommendation of content facilitates access to information by helping users sort through the massive amounts of online content to find what’s relevant to them. And, as discussed previously, because almost every choice about how to display content could be thought of as a “recommendation,” users’ online speech will be under threat unless Section 230 applies to claims based on recommendations.

CDT’s amicus brief urges the Court to follow existing interpretations of Section 230. While algorithmic recommendations can contribute to problems such as the spread of disinformation, the dynamics underlying these concerns are best addressed in other ways—such as comprehensive privacy legislation and competition law—and in other forums, including Congress. Section 230 has promoted the development of a multitude of online services through which ordinary people can make their voices heard and find information. The Court should ensure that it continues to protect users’ vibrant and free online expression.
