Instagram To Flag Hateful Comments Before You Send Them

Instagram is rolling out a feature that will notify users when their comments may contain harmful content before others see them. (Chandan Khanna / AFP/Getty Images)

Instagram is rolling out a feature that will urge users to think twice before posting hateful comments, in an effort to minimize cyberbullying on the massive social media platform.

The new feature uses artificial intelligence to screen comments and notify users if what they have written may be harmful or offensive. Users will see a message: "Are you sure you want to post this?" They will then have the option to remove or change the comment before anyone else is able to see it.
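Mechanically, the flow amounts to a classifier gate sitting between "submit" and "publish." The Python sketch below is purely illustrative: the keyword scorer and the confirm prompt are hypothetical stand-ins, since Instagram has not disclosed its actual model or code.

```python
# Illustrative sketch only; NOT Instagram's actual system.
# The keyword "classifier" below is a toy stand-in for the trained
# text-classification model a real service would use.

HARMFUL_TERMS = {"idiot", "loser", "ugly"}  # hypothetical flag list
THRESHOLD = 0.5

def harm_score(comment: str) -> float:
    """Toy score in [0, 1]: density of flagged words in the comment."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in HARMFUL_TERMS)
    return min(1.0, 5 * hits / len(words))

def submit_comment(comment: str, confirm) -> bool:
    """Publish a comment, pausing to ask the user if it looks harmful.

    `confirm` stands in for the UI prompt ("Are you sure you want to
    post this?") and returns True only if the user insists on posting.
    """
    if harm_score(comment) >= THRESHOLD and not confirm(comment):
        return False  # user withdrew or went back to edit the comment
    print(f"published: {comment!r}")
    return True

# A user who reconsiders after the nudge, and one whose comment passes:
submit_comment("you are such a loser", confirm=lambda c: False)  # held back
submit_comment("great photo!", confirm=lambda c: False)          # published
```

The key design point, per Instagram's description, is that the model never blocks anything outright; it only interrupts the flow and leaves the final decision with the user.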

Early tests of the feature found that some users were less likely to post harmful comments once they had a chance to reflect on their post, Instagram chief Adam Mosseri wrote in a blog post.

Gmail offers a similar safeguard, giving users up to 30 seconds to cancel an email after pressing send.

Other social media companies have tried to police the content allowed on their platforms. Twitter has started to flag hateful or offensive tweets from politicians, and Facebook has banned some white supremacists and other accounts over hateful or offensive posts. But there is no hard-and-fast rule for what these platforms are expected to restrict.

Monitoring harmful content on social media is challenging. Justin Patchin, co-director of the Cyberbullying Research Center, says he works with several platforms that are trying to solve the problem.

With massive amounts of content created every second, Instagram is just one of the companies attempting to use AI to monitor posts. Both Facebook and Twitter have tried the technology in the past. But AI moderation comes with challenges: algorithms often struggle to interpret slang and the nuances of different languages.

Instagram's latest feature differs from previous attempts by big social platforms to prevent cyberbullying: it uses AI to warn users but ultimately lets them decide what to post.

"The transparency here is helpful to those who have wondered why these big social media companies aren't doing more technologically to address bullying," Patchin said.

Instagram is the first big platform to try this method of keeping hateful content from circulating in its app, but the concept is not new. In 2013, then-13-year-old Trisha Prabhu created ReThink, an app that likewise alerts users when a message may be offensive. ReThink was praised for its innovation, but Patchin says such solutions are most effective when incorporated into already widely trafficked platforms.

Patchin says these big social companies are moving in the right direction and are getting closer to finding a method for monitoring harmful content and cyberbullying.

"Companies have devoted a lot of energy to refining these systems, and they're getting better every year," he said. "They do have a responsibility and obligation to lead the way and at least experiment with these kinds of technologies."

Instagram plans to keep beefing up its safety features and will soon introduce a "restrict" option, which lets users filter content from specific accounts without blocking them. Mosseri wrote in the blog post that the company decided to add that feature after users said they worried that blocking an account that posted offensive comments on their page could invite retaliation.

Copyright 2021 NPR. To see more, visit https://www.npr.org.
