Can ‘we the people’ keep AI in check?

Technologist and researcher Aviv Ovadya isn’t sure that generative AI can be governed, but he thinks the most plausible means of keeping it in check might just be entrusting those who will be impacted by it to collectively decide on the ways to curb it.

That means you; it means me. It’s the power of large networks of individuals to problem-solve faster and more equitably than a small group might do alone (including, say, in Washington). This isn’t a naive reliance on the wisdom of crowds, which has been shown to be problematic, but rather so-called deliberative democracy: an approach that selects people through sortition to be representative (such that everyone in the impacted population has an equal chance of being chosen) and provides them with an environment in which to deliberate effectively and make wise decisions. That means compensation for their time, access to experts and stakeholders, and neutral facilitation.
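The mechanics of sortition are simple enough to sketch in code. What follows is a minimal, hypothetical illustration in Python: a toy population stratified on a single demographic attribute, with panel seats allocated in proportion to each group’s share of the population. The numbers and the single-attribute stratification are invented for the example; real-world processes balance many attributes at once.

```python
import random

def sortition_panel(population, strata_key, panel_size, seed=None):
    """Draw a representative panel via stratified random sampling."""
    rng = random.Random(seed)

    # Group the population by the chosen demographic attribute.
    strata = {}
    for person in population:
        strata.setdefault(person[strata_key], []).append(person)

    panel = []
    for members in strata.values():
        # Give each stratum a share of seats proportional to its share
        # of the population; within a stratum, every member has an
        # equal chance of being drawn.
        seats = round(panel_size * len(members) / len(population))
        panel.extend(rng.sample(members, min(seats, len(members))))
    return panel

# Invented toy population: 70% urban, 30% rural. Real processes
# stratify on several attributes at once (age, gender, geography).
population = [{"id": i, "region": "urban" if i < 700 else "rural"}
              for i in range(1000)]

panel = sortition_panel(population, "region", panel_size=20, seed=1)
print(sum(p["region"] == "urban" for p in panel), "urban of", len(panel))
```

The design point is the proportional allocation: the panel mirrors the population’s composition while every individual retains an equal chance of selection within their group.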

It’s already happening in many fields, including scientific research, business, politics and social movements. In Taiwan, for example, civic-minded hackers in 2015 formed a platform, “virtual Taiwan,” that “brings together representatives from the public, private and social sectors to debate policy solutions to problems primarily related to the digital economy,” as Taiwan’s digital minister, Audrey Tang, explained in The New York Times in 2019. Since then, vTaiwan, as it’s known, has tackled dozens of issues by “relying on a mix of online debate and face-to-face discussions with stakeholders,” Tang wrote at the time.

A similar initiative is Oregon’s Citizens’ Initiative Review, which was signed into law in 2011 and informs the state’s voting population about ballot measures through a citizen-driven deliberative process. Roughly 20 to 25 citizens who are representative of the entire Oregon electorate are brought together to debate the merits of an initiative; they then collectively write a statement about it that’s sent to the state’s other voters so they can make better-informed decisions come Election Day.

These deliberative processes have also successfully helped address issues in Australia (water policy), Canada (electoral reform), Chile (pensions and healthcare) and Argentina (housing, land ownership), among other places.

“There are obstacles to making this work” as it relates to AI, acknowledges Ovadya, who is affiliated with Harvard’s Berkman Klein Center and whose work increasingly centers on the impacts of AI on society and democracy. “But empirically, this has been done on every continent around the world, at every scale,” he says, and “the faster we can get some of this stuff in place, the better.”

Letting large cross sections of people decide on acceptable guidelines for AI may sound outlandish to some, perhaps even impossible.

Yet Ovadya isn’t alone in thinking the solution is largely rooted in society. Mira Murati, the chief technology officer of the prominent AI startup OpenAI, told Time magazine in a recent interview, “[W]e’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else.”

Murati isn’t worried that government involvement will slow innovation, or that it’s too early for policymakers and regulators to get involved, she told the outlet when asked about these things. On the contrary, in her view, the time for action is now, not later. “It’s very important for everyone to start getting involved given the impact these technologies are going to have,” she said.

For now, OpenAI is taking a self-governing approach, instituting and revisiting guidelines for the safe use of its tech and pushing out new iterations in dribs and drabs.

The European Union has meanwhile been drafting a regulatory framework, the AI Act, that’s making its way through the European Parliament and aims to become a global standard. The law would sort AI applications into three risk categories: systems that pose an “unacceptable risk” and would be banned; “high-risk” applications that would be subject to specific legal requirements; and applications neither banned nor listed as high-risk, which would largely be left unregulated.

The U.S. Department of Commerce has also drafted a voluntary framework meant as guidance for companies, yet, amazingly, there remains no binding regulation, zilcho, even though one is sorely needed. (In addition to OpenAI, tech behemoths like Microsoft and Google, burned by earlier releases of their own AI that backfired, are very publicly racing to roll out AI-infused products and applications. Like OpenAI, they are also trying to figure out their own tweaks and guardrails.)

Something akin to the World Wide Web Consortium (W3C), the international organization created in 1994 to set standards for the web, would seemingly make sense. Indeed, Murati told Time that “different voices, like philosophers, social scientists, artists, and people from the humanities” should be brought together to answer the many “ethical and philosophical questions that we need to consider.”

Newer tools that help people vote on issues could also potentially help. OpenAI CEO Sam Altman is also a co-founder, for example, of Worldcoin, a Berlin-based iris-scanning company that wants to make it easy to verify a person’s identity. Questions have been raised about the privacy and security implications of Worldcoin’s biometric approach, but its potential applications include distributing a global universal basic income, as well as empowering new forms of digital democracy.

Either way, Ovadya is busily trying to persuade all the major AI players that collective intelligence is the way to quickly create boundaries around AI while also giving them much-needed credibility. Take OpenAI, says Ovadya. “It’s getting some flak right now from everyone,” including over its perceived liberal bias. “It would be helpful [for the company] to have a really concrete answer” about how it establishes its future policies.

Ovadya similarly points to Stability AI, the open-source AI company whose CEO, Emad Mostaque, has repeatedly suggested that Stability is more democratic than OpenAI because its technology is available everywhere, whereas OpenAI is currently available only in countries where it can provide “safe access.”

Says Ovadya, “Emad at Stability says he’s ‘democratizing AI.’ Well, wouldn’t it be nice to actually be using democratic processes to figure out what people really want?”