Danielle Allen and Ezra Klein on A.I. and Deliberative Democracy
By Wade Lee Hudson
In her April 14, 2023, interview on the Ezra Klein Show, Danielle Allen (whose new book is Justice by Means of Democracy) addresses how society might use modern technology to develop and strengthen “deliberative democracy structures that we have not yet set up.” Klein calls voting “a pretty thin level of participation” and envisions methods to enable people to “really be part of steering the ship of state.”
Klein argues, “You could have things like citizens assemblies and meetings, and in other ways, you could have a thicker kind of participation and advisory role for the public than you currently do.” Modern deliberative digital tools can enhance democracy, which Allen defines as “equal empowerment across a body of free and equal citizens.” She believes, “One of the greatest values of democracy is that together we can be much smarter than we can be as individuals.”
As I wrote in “Promoting Holistic, Systemic Transformation: A Scenario”:
Many deliberative democracy experiments have demonstrated that randomly selected lay citizens can engage in real deliberation to make wise decisions concerning public policy. Randomly chosen juries of peers in the criminal justice system exemplify faith in deliberative democratic decision-making. In some jurisdictions, civil grand juries composed of randomly selected ordinary citizens investigate the operations of local government officers, departments, and agencies.
Numerous experiments have shown the "wisdom of the crowd" — group averages are usually more accurate than individual estimates. Open betting markets that reflect the crowd's probability estimates are generally more accurate than opinion polls.
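The averaging effect behind the “wisdom of the crowd” is easy to simulate. The sketch below is illustrative only (the true value, noise level, and group size are assumptions, not figures from the interview): it generates many noisy but unbiased individual estimates of a hidden quantity and compares a typical individual’s error with the error of the group average.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # the hidden quantity being estimated (e.g., beans in a jar)
N_PEOPLE = 1000

# Each guess is noisy but unbiased: centered on the true value.
guesses = [random.gauss(TRUE_VALUE, 25.0) for _ in range(N_PEOPLE)]

# Compare a typical individual's error to the error of the crowd's average.
individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)
crowd_error = abs(statistics.mean(guesses) - TRUE_VALUE)

print(f"average individual error: {individual_error:.2f}")
print(f"error of crowd average:   {crowd_error:.2f}")
```

Because the individual errors are independent, they largely cancel in the average, so the crowd’s collective estimate lands far closer to the truth than a typical individual’s guess.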
A real-world example is the Irish Citizens' Assembly, in which randomly selected, broadly representative citizens deliberated on abortion; their recommendations led to its legalization in a heavily Catholic country. Popular democracy can be a powerful tool for advancing compassion and justice.
Allen reported that she’s had “a soft space in my heart” for deliberative democracy since her early study of ancient Greece, the original example of deliberative democracy. All free male citizens could participate “with orators standing on this big stone stage and somehow managing to project across space to the 6,000 people there.”
She sees deliberative tools and techniques as a way to “improve our existing structures of representation — the integration of these tools into existing structures of representation rather than their replacement.”
An example Allen cites is the vTaiwan project, “a four-stage online-offline consultation process for moving from issue to legislative enactment while building consensus among diverse stakeholders” in Taiwan. According to the Computational Democracy Project, vTaiwan uses Polis, which is a digital platform
for a conversation. Participants submit short text statements or comments (<140 characters), which are then sent out semi-randomly to other participants to vote on by clicking agree, disagree, or pass. Polis allows owners to create conversations that can seamlessly engage (currently) up to hundreds of thousands or (conceivably) millions of participants.
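To make that mechanism concrete, here is a minimal sketch of a Polis-style conversation. The class, method names, and routing rule are hypothetical illustrations, not the actual Polis codebase or API: statements are capped at 140 characters, routed semi-randomly to participants (here, preferring the least-voted statements so every comment accumulates votes), and voted on with agree, disagree, or pass.

```python
import random
from collections import defaultdict

class PolisStyleConversation:
    """Minimal sketch of a Polis-style conversation (hypothetical API).
    Short statements are routed semi-randomly to participants, who vote
    agree / disagree / pass on each one."""

    MAX_LEN = 140
    VOTES = ("agree", "disagree", "pass")

    def __init__(self):
        self.statements = []  # submitted comment texts, indexed by statement id
        self.tallies = defaultdict(lambda: {v: 0 for v in self.VOTES})

    def submit(self, text):
        if len(text) > self.MAX_LEN:
            raise ValueError("statements are limited to 140 characters")
        self.statements.append(text)
        return len(self.statements) - 1  # statement id

    def next_statement(self, rng=random):
        # Semi-random routing: choose randomly among the least-voted
        # statements, so votes spread across the whole conversation.
        fewest = min(sum(self.tallies[i].values())
                     for i in range(len(self.statements)))
        candidates = [i for i in range(len(self.statements))
                      if sum(self.tallies[i].values()) == fewest]
        return rng.choice(candidates)

    def vote(self, statement_id, choice):
        if choice not in self.VOTES:
            raise ValueError(f"vote must be one of {self.VOTES}")
        self.tallies[statement_id][choice] += 1

convo = PolisStyleConversation()
sid = convo.submit("Ride-sharing drivers should carry commercial insurance.")
convo.vote(sid, "agree")
convo.vote(sid, "pass")
print(convo.tallies[sid])
```

The real Polis platform goes much further, clustering participants by their voting patterns to surface points of consensus; this sketch only captures the submit-route-vote loop described above.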
Allen served as co-chair of an American Academy of Arts and Sciences bipartisan commission that in June 2020 released its report, “Our Common Purpose: Reinventing American Democracy for the 21st Century.” The commission proposed using elements of deliberation, such as participatory budgeting at the municipal level and “having members of Congress have access to deliberative assemblies and deliberative tools to improve their learning about what the issues are in our society.”
In the Klein interview, Allen addressed “improving that process of social discovery where we learn together about the shape of the hardships that we’re facing.” She said:
We see solutions that can emerge into visibility because of that collective work when we can’t get to them from any specific point of expertise or any single isolated position. So that’s really the picture…
We see a lot of evidence that people want it. Participatory budgeting is used around the world. People love it. People enjoy it. They are glad to help steer the direction of their municipal budget or identify their needs.
Nevertheless, there are barriers.
Right now, it’s really hard to participate. And it will always be hard to take time. But that’s where I think we need an economy that supports that. We need a workplace where we’re not talking just about work-life balance but also about work-life-civic balance so that people have the time to participate. And then we need non-opaque structures to participate in. We have a huge opacity problem, a real tax on people’s experience of the creative joy of participation.
Klein commented that the discussion about the lack of innovation “around small-d democratic governance” can be a “little airy” and asked Allen for her “best example for grounding this theory.” Allen cited the 2007–08 Obama campaign, which produced DIY kits that empowered local campaign workers to “run their community groups — their community organizing spaces.”
She said:
People learned how to organize. They learned how to recruit other people to participate. They were empowered to use the time and space to name the issues that mattered most to them… And those conversations transformed people’s lives. The number of people who experienced that, who participated in that, and continue to be politically active today because of that is quite remarkable. So that body of people — what they experienced with tools put at their hands that were empowering and that then yielded a process of social discovery that yielded a major social transformation [especially, the Affordable Care Act]...
There are experiments all over the country of people trying to do similar things. And now the resources are amazing. They’re much better…
Human intelligence is plural. And human societies are pluralistic. And human flourishing is robust when that pluralism and plurality of forms of intelligence are supported. As a species, we maximize our capacity by activating and supporting our plurality, not by resolving ourselves into a singularity, a homogenizing force.
A different paradigm for technology development, Allen argued, could support and activate the many kinds of human intelligence and cultivate healthy forms of human pluralism. And she sees people who are experimenting on this issue. One example she referred to is the Plurality Institute, which
develops and experiments with plural technologies by convening researchers and collaborators from communities of practice. (They) create robust engagement directly with and between visionary researchers from a range of fields including computer science, political ethics, sociology, and government, who are building and experimenting with plural technologies.
Allen is encouraged by people who are designing
technologies that help people bridge lines of division. So instead of rewarding people for ever more outrageous things that reinforce division, you can structure different kinds of incentives into social media platforms and the like — so that those kinds of experiments are going on right now.
With the development of Artificial Intelligence language models such as GPT-4,
we are at a phase shift in terms of the capacity for these tools. And so things are moving very fast. There’s just a real imperative for all of us to figure out what the combination of societal norms, legal guardrails, and the like should be for the use of these technologies.
Referring to the “Our Common Purpose” report, Klein commented on its
very profound point that intelligence is collective. It is relational. It is situational. And then I felt like you could also make an argument that particularly large language models being trained on the corpus of internet text — that their vision of intelligence is very much collective, relational, pluralistic, in the sense that for good or for bad, whatever they know is this huge inhaling of everything that — not everybody but — huge numbers of people have written, have put out.
And their averaging is too simplistic of a way to put it. But they’re drawing connections and correlations and trying to predict what this collective would say next. So in some ways, aren’t these systems exactly what you’re calling for?
Allen replied:
Yes, the large language models have a pluralistic construction. And not only that, but they can do things. They can embody forms of intelligence that are not accessible to us. So in that regard, they are themselves already another yet different kind of intelligence. So we have in that regard again another meaning of the concept of plural intelligences.
So the question, in some sense, is, given that they have that capacity, can we help steer the development of technologies in ways that support this good feature of human existence? Or will the capacities to accelerate the creation of misinformation, accelerate fraud, destabilize institutions, and things like that be what leads with these technologies?
A game-changing technology, such as gunpowder or nuclear power, Allen noted,
can do bad, and it can do good. And so the question is, how long does it take us as a world at this point — as an entire globe — to achieve the kind of conventions, structure, and regulation that ensure that this technology operates primarily in the direction of the good?
Klein asked Allen about her colleague Divya Siddarth, now a co-leader of the Collective Intelligence Project, “an incubator for new governance models for transformative technology” that defines collective intelligence as “effective, decentralized, and agentic decision-making across individuals and communities to produce best-case decisions for the collective.”
Klein reported:
I spoke to her the other day, and they’re standing up these alignment assemblies. They’re trying to build a deliberative democracy platform on which people are going to be able to come together using the tools that have been designed for participatory budgeting and climate deliberative democracy, and so on, to try to develop senses of what people want in A.I. governance, in A.I. systems.
One of the concerns about deliberative democracy is it doesn’t scale that well… But it seems like a very promising space where you can have citizen assemblies that do have a little bit more of a role in trying to create a somewhat more legitimate sense of what people who go through a process on this would want. I’m curious to hear your reflections as somebody supporting (and advising) that project.
Allen replied that their work
building the alignment assemblies is one of the most important things happening now. So they are using digital tools to build citizen assemblies. They are seeking to build a structure that will provide meaningful public input to key steering decisions that the A.I. labs are making.
Of course, the challenge is figuring out what meaningful public impact means in this kind of context. You’ve got to make sure that you’re capturing perspectives from many different parts of the world. So there’s a necessary element of scale in it. But, of course, there’s also a real imperative for speed.
We’re all at this point, I guess, expecting ChatGPT-5 by the end of this calendar year. And so things are moving really fast. And we just don’t know what the impacts of these capabilities are likely to be. So the pressure is how do you design for speed and reasonable scale to support legitimacy in the near term? But, yes, I mean, they can get meaningful public input in a way that governments at this moment in time can’t do — can’t move fast enough to do.
Klein then raised the concern that democracy relies on having people without technical expertise weigh in on, and control, highly technical things. He asked:

If the assembly says it wants to do something that the administrators who implement policies say is impossible, who’s right and who’s wrong there? I think there’s always a suspicion that to have deliberative democracy means you’ll have people who don’t have enough expertise to have well-founded opinions gumming up the works or maybe coming up with really bad ideas. How do you think about that, or how do you answer that?
Allen replied:
That is a misconception of what deliberation can and should do. The first job of deliberation is to establish the steering direction with regard to values. What do we care about here? What matters to us and why? That is something you cannot use technocrats to do.
That is exactly a human question over and over again. It is the first human question. It is the human question that roils politics, of course. But that’s the thing that you need deliberative assemblies to address. And so if assemblies say, we want to know what’s going on, and then technologists say, well, you can’t if we have systems of this kind, then there’s a thing to negotiate.
Then that’s the work to do. What’s the relationship between that desire to know — that need to know — and the fact that technology can’t deliver it? Does that mean we try to call a halt to it? Does that mean we develop a different kind of bridge, et cetera?
The interview ended with this assertion: “The fact that there’s a discrepancy between the values people may articulate and what technologists think they can deliver is the beginning of the work, not the end.”
This fascinating interview highlights how humanity is at a crossroads, perhaps the most critical turning point in its history, for the species' survival may be at stake.