A year ago, Facebook started using artificial intelligence to scan people's accounts for danger signs of imminent self-harm.
Facebook Global Head of Safety Antigone Davis is pleased with the results so far.
"In the very first month when we started it, we had about 100 imminent-response cases," which resulted in Facebook contacting local emergency responders to check on someone. But that rate quickly increased.
"To just give you a sense of how well the technology is working and rapidly improving ... in the last year we've had 3,500 reports," she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn't include Europe, where the system hasn't been deployed. (That number also doesn't include wellness checks that originate from people who report suspected suicidal behavior online.)
Davis says the AI works by monitoring not just what a person writes online, but also how his or her friends respond. For instance, if someone starts streaming a live video, the AI might pick up on the tone of people's replies.
"Maybe like, 'Please don't do this,' 'We really care about you.' There are different types of signals like that that will give us a strong sense that someone may be posting of self-harm content," Davis says.
When the software flags someone, Facebook staffers decide whether to call the local police, and AI comes into play there, too.
"We also are able to use AI to coordinate a bunch of information on location to try to identify the location of that individual so that we can reach out to the right emergency response team," she says.
In the U.S., Facebook's call usually goes to a local 911 center, as illustrated in its promotional video.
Mason Marks isn't surprised that Facebook is employing AI this way. He's a medical doctor and research fellow at Yale and NYU law schools, and recently wrote about Facebook's system.
"Ever since they've introduced livestreaming on their platform, they've had a real problem with people livestreaming suicides," Marks says. "Facebook has a real interest in stopping that."
He isn't sure this AI system is the right solution, in part because Facebook has refused to share key data, such as the AI's accuracy rate. How many of those 3,500 "wellness checks" turned out to be actual emergencies? The company isn't saying.
He says scrutiny of the system is especially important because this "black box of algorithms," as he calls it, has the power to trigger a visit from the police.
"It needs to be done very methodically, very cautiously, transparently, and really looking at the evidence," Marks says.
For instance, Marks says, the outcomes need to be checked for unintended consequences — such as a potential squelching of frank conversations about suicide on Facebook's various platforms.
"People ... might fear a visit from police, so they might pull back and not engage in an open and honest dialogue," he says. "And I'm not sure that's a good thing."
But Facebook's Davis says releasing too many details about how the AI works might be counterproductive.
"That information could could allow people to play games with the system," Davis says. "So I think what we are very focused on is working very closely with people who are experts in mental health, people who are experts in suicide prevention to ensure that we do this in a responsible, ethical, sensitive and thoughtful way."
The ethics of using an AI to alert police to people's online behavior may soon go beyond suicide prevention. Davis says Facebook has also experimented with AI to detect "inappropriate interactions" between minors and adults.
Law professor Ryan Calo, co-director of the University of Washington's Tech Policy Lab, says AI-based monitoring of social media may follow a predictable pattern for how new technologies gradually work their way into law enforcement.
"The way it would happen would be we would take something that everybody agrees is terrible — something like suicide, which is epidemic, something like child pornography, something like terrorism — so these early things, and then if they show promise in these sectors, we broaden them to more and more things. And that's a concern."
There may soon be a temptation to use this kind of AI to analyze social media chatter for signs of imminent crimes — especially retaliatory violence. Some police departments have already tried watching social media for early warnings of violence between suspected gang members, but an AI run by Facebook might do the same job more effectively.
Calo says society may soon have to ask important questions about whether to allow that kind of monitoring.
"If you can truly get an up-or-down yes or no, and it's reliable, if intervention is not likely to cause additional harm, and is this something that we think it is important enough to prevent, that this is justified?" Calo says. "That's a difficult calculus, and I think it's one we're going to have to be making more and more."
If you or someone you know may be considering suicide, contact the National Suicide Prevention Lifeline at 1-800-273-8255 (En Español: 1-888-628-9454; Deaf and Hard of Hearing: 1-800-799-4889) or the Crisis Text Line by texting 741741.
LAKSHMI SINGH, HOST:
For the last year, Facebook has been running a new system that automatically scans people's accounts for signs of suicide risk and alerts the police. As NPR's Martin Kaste reports, it raises new questions about social media companies intervening in the real-world lives of their customers.
MARTIN KASTE, BYLINE: Facebook's using artificial intelligence to find cases of people who seem about to harm themselves. The AI is learning which kinds of online chatter it should take seriously. For instance, if a person is streaming a live video, and the replies to that video start to sound ominous...
ANTIGONE DAVIS: Maybe, like, please don't do this. We really care about you. There are different types of signals like that that will give us a strong sense that someone may be posting self-harm content.
KASTE: That's Antigone Davis, Facebook's global head of safety. When the software flags someone, she says Facebook staffers decide whether to call the local police. And AI comes into play there, too.
DAVIS: We also are able to use AI to coordinate a bunch of information on location to try to identify the location of that individual so that we can reach out to the right emergency response team.
KASTE: In this first year of the system's operation, that's happened now about 3,500 times, Facebook says. In other words, about 10 times a day, Facebook is calling police or first responders somewhere in the world to check on someone based on an initial alert produced by the monitoring software. This is a Facebook promotional video with testimonials from police in upstate New York talking about getting one of those alerts.
(SOUNDBITE OF ARCHIVED RECORDING)
JAMES GRICE: We did find her. She admitted to taking medication, and we were able to get her to a local hospital.
JOSEPH A. GERACE: There's no doubt in my mind that this saved her life.
KASTE: The new system has been welcomed by suicide prevention advocates, especially given the rising suicide numbers of recent years. But Mason Marks is more cautious.
MASON MARKS: I don't know if Facebook should be doing this.
KASTE: Marks studies the intersection between medicine, privacy and artificial intelligence. He says he gets why Facebook is doing this. The company has been under pressure, especially after some people used the livestream video to broadcast suicides and self-harm. But he wonders whether using an AI to flag cases for police attention is the right solution.
MARKS: It needs to be done very methodically, very cautiously, transparently and really looking at the evidence.
KASTE: Marks doesn't like the fact that Facebook is holding back some key details. For instance, how accurate is this? How many of those 3,500 calls actually turned out to be real emergencies? He says outsiders have to be able to evaluate this system and its potential side effects.
MARKS: People may also learn that if they do talk about suicide openly that they might fear a visit from police, so they might pull back and not engage in an open, honest dialogue. And I'm not sure that's a good thing.
KASTE: And this kind of AI-based monitoring may soon go beyond suicide prevention. Again, Facebook's Antigone Davis.
DAVIS: I think more and more we will see AI used in the context of safety and in the context of potentially preventing harm.
KASTE: For instance, using AI to detect inappropriate interactions online between adults and minors. She says that's also something Facebook is experimenting with. Law professor Ryan Calo says this would be the typical pattern for how a new monitoring technology would expand into law enforcement.
RYAN CALO: The way it would happen would be we would take something that everybody agrees is terrible. It would be something like suicide, which is epidemic, something like child pornography, something like terrorism - so these early things. And then, if they showed promise in those sectors, we broaden them to more and more things. And that - you know, that's a concern.
KASTE: Calo was co-director of the tech policy lab at the University of Washington, and he specializes in technology and privacy. He says we need to think about the possibility that this kind of AI will be used more broadly - say, to monitor social media chatter for signs of impending violence between people. Would that be desirable?
CALO: If you can truly get an up or down, yes or no, and that's reliable, if intervention is not likely to cause additional harm. And then this is something that we think is important enough to prevent that this is justified. And so that's a difficult calculus, and it's one that I think we're going to be making more and more.
KASTE: Especially if tech companies continue to show a willingness to call the police because of something an AI spotted online.
Martin Kaste, NPR News.