Grok, X's AI chatbot, is under scrutiny after it made antisemitic and bigoted remarks

AYESHA RASCOE, HOST:

They spread election misinformation, make deeply inappropriate, sometimes bigoted remarks and give dangerous advice. We're not talking about your eccentric uncle or that high school friend who has drifted into the manosphere. We're talking about AI chatbots and tools. For instance, Grok, a chatbot on X - formerly Twitter - last week made antisemitic remarks, calling itself MechaHitler. But this is only the latest in a series of unsettling issues with AI models on X and elsewhere. We're joined now by Reece Rogers, a reporter at Wired magazine. Welcome to the program.

REECE ROGERS: Thank you for having me today, Ayesha.

RASCOE: OK, so we just mentioned Grok on X, but you've done a lot of reporting on other AI models, like OpenAI's text-to-video model Sora and Google's Veo 3. What problems did you find with these AI models?

ROGERS: Yeah, I think at their core, these tools are pattern machines. They're taking in tons and tons of data and predicting what might be a reasonable output. Well, if you're taking in all the information from the internet, you're going to be ingesting all of these stereotypes that people have about other people. One of our big investigations last year looked at Sora, Midjourney and other tools people use to generate images and videos, and we found that they amplified a lot of different stereotypes. And these stereotypes are introduced either in how the data appears online or in how people label that data as they build the models.
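To make the "pattern machine" idea concrete, here is a deliberately tiny sketch in Python. It is an illustration only, not any real model's code: a predictor that merely counts which word follows each two-word context will reproduce whatever associations, including stereotyped ones, dominate its training text.

```python
from collections import Counter, defaultdict

# Toy next-word predictor. It counts which word follows each two-word
# context in the training text, then always emits the most frequent
# continuation. Real models are vastly larger and probabilistic, but
# the principle is the same: the associations that dominate the
# training data dominate the output.

training_text = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the engineer said he was busy ."
)

words = training_text.split()
follows = defaultdict(Counter)
for w1, w2, w3 in zip(words, words[1:], words[2:]):
    follows[(w1, w2)][w3] += 1

def generate(w1, w2, steps=4):
    out = [w1, w2]
    for _ in range(steps):
        out.append(follows[(out[-2], out[-1])].most_common(1)[0][0])
    return " ".join(out)

# The gendered association baked into the tiny corpus comes right back:
print(generate("the", "nurse"))     # -> the nurse said she was tired
print(generate("the", "engineer"))  # -> the engineer said he was busy
```

Production systems predict over probability distributions rather than raw counts, but the failure mode scales with them: skew in, skew out.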

RASCOE: So is this basically an input problem?

ROGERS: When we're thinking about Grok and the recent antisemitic remarks, those go to a level beyond just replicating the patterns it's finding. Musk blamed the users for those outputs, and it's hard to know exactly how it happened. But oftentimes, for these tools, you're working with what is called a system prompt, or some other kind of reinforcement learning, where after a company trains and builds an AI tool, there are extra steps that can guide its output and give it parameters. So while it's unclear exactly why Grok went off the rails and made these racist posts last week, it's not the kind of mistake that happens just by accident.

RASCOE: Would programmers go in and say, Hitler's, you know, an OK guy or something? Like, that - it wouldn't be that blunt, right? Like, how...

ROGERS: No.

RASCOE: ...Would something like that happen?

ROGERS: But maybe they might have said, Grok, don't be afraid to take contrarian positions, or, Grok, don't be afraid to take positions that are outside of the mainstream. And if you give a tool a direction like that, it doesn't exactly think like a human does. It doesn't really think at all. It just follows directions, and it might take them much further than the programmers initially intended.
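To picture where a directive like that lives, consider the system prompt: a hidden instruction most chat deployments prepend to every conversation. The sketch below is hypothetical throughout; the directive wording paraphrases the example above, and the message format mirrors common chat APIs rather than Grok's actual configuration.

```python
# Hypothetical sketch of how a system prompt steers a chatbot. The
# directive text paraphrases the example above, and the message format
# mirrors common chat APIs; none of this is Grok's actual configuration.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Don't be afraid to take positions "
    "that are outside of the mainstream."
)

def build_request(user_message: str) -> list[dict]:
    # The system prompt is silently prepended to every conversation.
    # The user never sees it, but the model treats it as standing
    # orders and follows them literally, with no human judgment about
    # where "outside the mainstream" should stop.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    for msg in build_request("What really happened in that election?"):
        print(f"{msg['role']}: {msg['content']}")
```

Because the directive is applied to every exchange with no sense of context, an instruction that sounds harmless to its authors can produce outputs they never anticipated.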

RASCOE: If that's the case, then to what extent can the companies behind these tools rein this in? How can they set parameters without introducing more bias?

ROGERS: That's the million-dollar question right now, and something companies are working diligently on. There are safety measures they're taking, but stereotypes and bias are so ingrained that they're finding it quite hard to strike the right nuanced approach. And these are companies based in the United States that are trying to build global tools. When you think about perspective, the perspective in Italy or in Ghana is going to be different from the perspective in San Francisco. But anytime you interact with these tools, whether you're asking ChatGPT a question or making a fun image for a birthday card, keep in mind how prevalent these stereotypes are in the models. That's the consumer advice I would give to anyone who wants to keep using these tools in their daily life.

RASCOE: How skeptical have you found people to be of the information they get from AI?

ROGERS: That is one question I often get from readers, and I think people are not skeptical enough. One thing I always tell people is, click on those links. If a chatbot cites sources in its outputs, don't be afraid to click through to the website and double-check them. So I think people should be skeptical. For day-to-day questions, like figuring out which episode of your favorite series to watch, that's a perfect question for a chatbot, and the answer doesn't need to be perfect. But for higher-stakes scenarios, I would always check your answers elsewhere.

RASCOE: That's Wired reporter Reece Rogers. Thank you so much for speaking with us today.

ROGERS: Thank you so much for having me.

RASCOE: Since we recorded this interview, X has issued an apology via Grok for its, quote, "horrific behavior." It said there was a coding issue and that it has been fixed.

(SOUNDBITE OF CORBIN ROE, MAYNE AND NICXIX'S "DRIP")

Transcript provided by NPR, Copyright NPR.

