Can You Believe Your Own Ears? With New 'Fake News' Tech, Not Necessarily

[Image: New technologies for creating faked audio are evolving quickly in the era of active information campaigns and their use of "fake news." Credit: Stuart Kinlough / Ikon Images/Getty Images]

Soon, we might not be able to believe our own ears.

New technologies for creating faked audio are evolving quickly in the era of active information campaigns and their use of "fake news."

This has serious repercussions for politics: Influence-mongers could create fake clips of politicians to undermine them — or politicians could deny they said things they were really recorded saying, calling it fake audio.

A Montreal startup called Lyrebird has released a product that lets users create an audio clip of anyone saying anything. Here's the company using a fake clip of former President Barack Obama to market its technology.

It's not perfect — it sounds like Obama coming in over a bad phone connection. But it does sound like Obama.

"They want to use this technology to change the life of everyone that lost their voice to a disease by helping them recover this part of their identities. Let's help them achieve this goal," the Lyrebird-created version of Obama says.

Lyrebird also created a series of clips of President Trump reading aloud some of his tweets.

Lyrebird isn't the only company that has developed voice-editing technology.

Adobe is in the early stages of developing its own program, called Project VoCo, which would be able to synthesize speech in a person's voice from typed text. The company pitched it as Photoshop for audio.

"It remains an early-stage research project for Adobe and is not available to the public today. In fact, one of the reasons we provide an early look at technologies like Project VoCo is so that we can engage our community on how best it can be developed, what its potential uses might be, and what safeguards might be put in place to discourage its misuse," the company said.

There are only a limited number of distinct sounds in human speech. These technologies use machine learning to model those sounds and the particular way an individual pronounces them. The implications are huge, observers say.
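
As a rough illustration of that idea, here is a minimal Python sketch: a tiny, made-up phoneme inventory plus per-speaker pitch and duration parameters stand in for what a real system would learn from hours of recordings. Every name and number below is hypothetical, and the sine-tone "synthesis" is far simpler than anything Lyrebird or Adobe actually does; the point is only the division of labor between a shared sound inventory and one person's learned way of producing it.

```python
# Illustrative sketch only — not any company's actual method.
import numpy as np

# A tiny inventory standing in for the ~40 phonemes of English.
PHONEMES = ["h", "eh", "l", "ow"]

rng = np.random.default_rng(0)

# Pretend we analyzed recordings of one speaker and fit, per phoneme,
# a mean pitch (Hz) and duration (s) — the "how this individual
# pronounces them" part that machine learning would estimate.
speaker_profile = {p: {"pitch": 100 + 20 * rng.random(),
                       "duration": 0.08 + 0.05 * rng.random()}
                   for p in PHONEMES}

def synthesize(phonemes, profile, sample_rate=16_000):
    """Concatenate sine tones shaped by the learned per-phoneme parameters."""
    chunks = []
    for p in phonemes:
        params = profile[p]
        n_samples = int(sample_rate * params["duration"])
        t = np.linspace(0, params["duration"], n_samples, endpoint=False)
        chunks.append(0.3 * np.sin(2 * np.pi * params["pitch"] * t))
    return np.concatenate(chunks)

audio = synthesize(["h", "eh", "l", "ow"], speaker_profile)
print(f"{audio.size} samples of a toy 'hello' in this speaker's voice")
```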

"I don't think it's an overstatement to say that it is a potential threat to democracy," said Hany Farid, the chair of computer science at Dartmouth College.

Tale of the tape

The ordinary course of a political campaign could provide plenty of grist for the creation of fake audio.

In 2008, for example, rumors circulated about a tape in which Michelle Obama used a derogatory term for white people. There's no evidence that it existed. But using these technologies, a fake could be made.

The growing awareness of influence campaigns and fake recordings could also give public figures a chance to call real audio a forgery. Farid recalled the Access Hollywood tape from the 2016 campaign, on which Trump was recorded boasting about forcing himself on women.

"Eighteen months ago when that audio recording of President Trump came out ... if that was today, you can guarantee that he would have said it's fake and he would have had some reasonable credibility in saying that as well," Farid said.

Yoshua Bengio, an adviser to Lyrebird, touts such positive uses as restoring voices to those who have lost them to illness.

"I think it's better if companies ... to do it in a way that's going to be beneficial for society," he said. "Try to put as much as possible of the safeguards that I think are necessary and also raise the awareness rather than doing these things in secret."

The Pentagon considers the potential danger from fake audio and video to be urgent enough that it has invested time and resources into coming up with a fix.

The Defense Advanced Research Projects Agency has a "media forensics" project aimed at developing the capability to automatically check images, video and audio for authenticity.
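
As a toy illustration of what automated audio forensics can look like (and emphatically not DARPA's method), the Python sketch below flags clips whose frame-to-frame energy varies suspiciously little — one simple cue among the many that a real system would combine. The threshold and the test signals are invented for the demo.

```python
# Illustrative sketch only — a made-up heuristic, not a real forensic tool.
import numpy as np

def energy_variation(audio, frame=400):  # 400 samples = 25 ms at 16 kHz
    """Coefficient of variation of short-time frame energy."""
    n = len(audio) // frame
    frames = audio[: n * frame].reshape(n, frame)
    energies = (frames ** 2).mean(axis=1)
    return energies.std() / (energies.mean() + 1e-12)

def looks_synthetic(audio, threshold=0.1):  # threshold invented for the demo
    """Flag clips whose energy is unnaturally uniform across frames."""
    return energy_variation(audio) < threshold

rng = np.random.default_rng(0)
# Contrived test signals: noise with a moving amplitude envelope vs. a flat tone.
natural = rng.normal(0, 1, 16_000) * np.abs(np.sin(np.linspace(0, 8, 16_000)))
uniform = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16_000))
print(looks_synthetic(natural), looks_synthetic(uniform))  # False, True
```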

David Doermann, who runs the program, imagined a disaster scenario: a collage of fake audio, video and photos that come together in a mass misinformation campaign — and create the impression of a major event that never even occurred.

"That might lead to political unrest, or riots, or at worst some nations acting all based on this bad information," he told NPR.

Ultimately, these technologies mean the public must be far more skeptical in cases of disputed reports, like those that may become more common in the 2018 elections, observers say.

"It wasn't that long ago that you could easily assume that if you have photographic evidence of something that can be used as evidence and no one's going to question it," said Mark Kozak, an engineer who works for PAR Government Systems, a company that creates falsified media that Doermann's team uses to develop their platform.

"I think people have to learn to be questioning everything that you hear and see."

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Tim Mak is NPR's Washington Investigative Correspondent, focused on political enterprise journalism.