AI deepfakes could advance misinformation in the run up to the 2024 election

MILES PARKS, HOST:

For the past few weeks, the tech world has been buzzing, focused on the new frontier of artificial intelligence. Developments, as you may have heard, are coming fast and furious. OpenAI debuted the latest version of its ChatGPT chatbot, and Google released its own competitor, Bard. But at the same time, some high-profile fake videos known as deepfakes have been spreading online. Here to wade through all of this is NPR's Shannon Bond. Hi, Shannon.

SHANNON BOND, BYLINE: Hi, Miles.

PARKS: So I've reported extensively on voting and misinformation over the last few election cycles. And every year, experts say deepfakes are the thing we have to worry about, and it hasn't really turned out to be a big deal. Why is this time different?

BOND: Well, I think, first of all, these tools have just improved a lot, so the technology can create much more realistic fake content. And crucially, these are now apps that are available to the public. So it's now in the hands of everyday internet users to create very plausible, realistic text, videos, audio, pictures. I spoke with Ethan Mollick, who's a professor at the University of Pennsylvania's Wharton School. He's really excited about AI, but he's also trying to figure out where the boundaries are, and he's a bit nervous about the potential for misuse. And so he decided he wanted to see how easy it would be to fake himself. He used an app that can clone audio, so he was able to make an imitation of his own voice. Then he used another app where you upload a picture, and it basically turns that picture into a video. And in eight minutes, at a cost of just $11, he was able to make a deepfake video of himself. So, you know, this is raising alarm bells about how these tools could be misused in the wrong hands.
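
To give a sense of how low the barrier has gotten, here is a minimal sketch of that kind of voice-cloning step using the open-source Coqui TTS library. This is an illustration under assumptions, not the actual app Mollick used; the model name, reference clip, and file paths are all placeholders.

```python
# pip install TTS  -- Coqui TTS, an open-source text-to-speech toolkit.
# Illustrative stand-in for the commercial app described above; the model
# name and file paths are assumptions, not what Mollick actually used.
from TTS.api import TTS

# XTTS v2 supports zero-shot voice cloning from a short reference recording.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Hi, this is a cloned voice saying words I never recorded.",
    speaker_wav="reference_clip.wav",  # a few seconds of the target speaker
    language="en",
    file_path="cloned_voice.wav",
)
```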

PARKS: Are we seeing that already play out online right now?

BOND: We are. One of the really striking examples in the past couple weeks was a video that was made by the right-wing internet influencer Jack Posobiec. He's probably best known for promoting the Pizzagate conspiracy theory. And he and his team made a fake video purporting to show President Joe Biden announcing a draft to send American soldiers to Ukraine.

(SOUNDBITE OF AI-GENERATED RECORDING)

COMPUTER-GENERATED VOICE: (Imitating Joe Biden) The illegal Russian offensive has been swift, callous and brutal.

BOND: Now, Posobiec was clear when he presented this that it was AI and not real, but he also framed it in a way that plays to his audience's expectations. So he said, you know, this was AI, but it was a preview of something that hadn't happened yet but could happen. Many people then went on to share that video without any kind of disclaimer that it's not really Biden. It's not real.

And just this past week, you know, many people were waiting to see if former President Donald Trump was, in fact, going to be indicted. One of them was an open-source researcher who, while he was waiting around to see if this was going to happen, turned to an image generator - one of these apps where you put in a couple of lines of text, and it'll create a realistic photo or image of what you're asking for. He used it to imagine Donald Trump getting arrested and was able to create very plausible photos showing Trump surrounded by police. He posted these online, saying that he had created them. But again, very quickly, they were spread much more widely without any reference to the fact that they were not real. And I think that really shows how this could be used to manipulate or mislead in breaking news environments.
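
The text-to-image step described there is now a few lines of code with open tools. Here is a minimal sketch using Hugging Face's diffusers library; the model ID and prompt are assumptions for illustration, not what the researcher actually used.

```python
# pip install diffusers transformers accelerate torch
# Minimal text-to-image sketch with Hugging Face diffusers; the model ID
# and prompt are illustrative, not what the researcher actually used.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # drop this line and the dtype to run on CPU instead

prompt = "press photo of a politician surrounded by police officers"
image = pipe(prompt).images[0]  # returns a PIL image
image.save("generated.png")
```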

PARKS: Well, I'm already playing out a lot of the worst-case scenarios in my head, but what are the people you're talking to worried about when it comes to the 2024 election?

BOND: I mean, there's a real concern that these kinds of tools - whether it's video, audio, text - could really drive down the cost of creating propaganda and conducting influence campaigns. I spoke with Gary Marcus, who's a cognitive scientist at NYU and studies AI, and here's how he put it.

GARY MARCUS: Anybody who wants to do this stuff, either to influence an election or because they want to sell stuff or whatever the reason they want to do it - they can make more of it at very little cost, and that's going to change their dynamic. Anytime you make something that was expensive cheaper, that has a huge effect on the world.

BOND: And so you can imagine that to conduct an influence operation, you don't have to have the resources of, say, a state-sponsored troll farm. It comes within reach of many more people. There are concerns about how this is going to affect 2024. I think certainly we should be prepared to see lots of deepfake videos of figures like President Biden, Donald Trump, Ron DeSantis, whoever else is in the mix. There may be even greater risk for less well-known figures - people in down-ballot races, people running for local offices like school board or city council, who may not have as many resources to push back against fake or manipulated content. And it's not just elections that are at risk. There are lots of ways that generated text in particular could be used to manipulate public conversation - by spamming public comments at an agency, for instance, or writing constituent mail to members of Congress.

PARKS: Have the companies themselves shown any interest in trying to mitigate some of these problems?

BOND: Many of these apps, at least from the big companies, do have some guardrails and some limits, but people are quite good at getting around them. And you also have this question about the social media companies and how they're treating this. Most of them do have policies on manipulated or synthetic media, but there are questions about how they enforce them. There's no agreement on how generated synthetic content should be labeled or watermarked, and there are certainly no policy-level resolutions in sight here. So I think it's going to be left very much up to us, the public, and certainly us as journalists to try to figure this out.

PARKS: NPR's Shannon Bond, thank you so much.

BOND: Thanks, Miles.

Transcript provided by NPR, Copyright NPR.

Miles Parks is a reporter on NPR's Washington Desk. He covers voting and elections, and also reports on breaking news.
Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.