MICHEL MARTIN, HOST:
It's an election year in the United States. You knew that. But what you may not know is that this year, more than 4 billion people will be eligible to vote in more than 50 elections - an extraordinary number. And this all comes as artificial intelligence is more available than ever. And that means there is increasing concern about the ways AI can make it easier to create and spread false information, and even more worrying, malicious disinformation.
Throughout the year, NPR's international correspondents are tracking election issues and asking what these elections tell us about the future of democracy. As part of that effort, I spoke with Alondra Nelson, a scholar and policy expert who's been thinking about the social impacts of technology for some time now. She is also the U.S. representative to the United Nations high-level advisory body on artificial intelligence. Good morning, Professor Nelson.
ALONDRA NELSON: Good morning, Michel. Thanks for having me.
MARTIN: Well, thank you for coming. So, you know, the fact that there even is a U.N. advisory body on AI suggests that this is a global concern. So what are one or two of the most critical concerns?
NELSON: Well, the critical concerns are that these are going to be tools that are used all over the world, and that they are going to need an international governance forum of some capacity. The issues that they raise around misinformation, disinformation, cybersecurity and other myriad issues are not issues that are problems or concerns for a single country.
MARTIN: Up until now, people have used search engines like Google to find information about elections. But now people can use chatbots like ChatGPT. Just give us a sense of what could be the impact when you search that way.
NELSON: We talk a lot about the deepfakes, and the implication being that the chatbots that use text-based outputs - that we don't have to worry about them in the same way. I think what our work is showing is that they present their own risks to campaigns and to elections as well, because they are putting out partial information, information that's not entirely true.
So there's almost a kind of potential death by a thousand cuts by a lot of wrong information that can lead people to be discouraged from going to the polls, that tell them incorrect information about where they should be going. And these are, I think, harms that are somewhat underappreciated, that we also need to be thinking about.
MARTIN: What one person sees as a harm, another person sees as a help. For example, in Pakistan, the party of the jailed former prime minister Imran Khan used deepfake technology to have him campaign and speak to supporters as a way to get around what his supporters see as military suppression. It's just, how do you think about that?
NELSON: The stakes here are just sort of confusing and really jeopardizing any kind of information integrity, sort of putting us in a place where we never trust anything. And I think to a certain extent that that's OK. Part of the obligation falls with us as citizens, as consumers, as users of these products, to be more skeptical of what we see and of what we're reading. But it's also the case that even the most kind of diligent and scrutinizing, you know, user of these tools can't anticipate the myriad ways, as you're suggesting, that they might be used. You know, the technical term, Michel, is that it's a mess.
MARTIN: (Laughter).
NELSON: You know, a lot of different parts of society really have to step up.
MARTIN: I mean, it suggests that governments and citizens are just not well-prepared at the moment to handle the impact of this technology.
NELSON: So I think we are all right to be concerned and in some instances, scared about the threat of new AI tools, whether they're text-based or visual or audio-based, the threat to election integrity - like, that is real. It's also the case that a couple of weeks ago, a group of technology companies, about 20, released what they called an accord around deepfakes and sort of asserted their commitment to working together and working harder to mitigate deepfakes.
On the government side, you know, there's been both at the federal level in the United States and elsewhere - Singapore, for example, is moving forward with legislation to outlaw deepfakes. But these are still very much in the early days. And whether or not any of these bills will be made law by the time we get through this critical election year in which we have, you know, a great majority of the world's population voting, remains to be seen.
MARTIN: Do you think there need to be stronger regulations?
NELSON: Absolutely. I mean, elections are the thing that is the cornerstone, it's the bedrock of the United States, of our democratic society and of democratic societies around the world. If we don't get this right, a lot of other things in society don't go right and fall apart.
MARTIN: That's Alondra Nelson. She's the U.S. representative to the United Nations high-level advisory body on artificial intelligence. Professor Nelson, thanks so much for joining us and sharing these insights with us.
NELSON: Thank you for having me. Transcript provided by NPR, Copyright NPR.