Facebook Uses Artificial Intelligence to Help Prevent Suicide But Cannot Tell If It Works

Cecile Janssens
Invisible Illness


Photo: Suicide by Nick Youngson CC BY-SA 3.0 ImageCreator

Since 2017, Facebook has been using artificial intelligence to scan posts, comments, and likes for signs that someone might take their own life. Alarming posts are reviewed by ‘a team of thousands of people around the world’ who can respond within minutes and call the police when ‘it’s determined that there may be imminent danger of self-harm.’
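Facebook has not published how this works, so any concrete description is speculative. Still, screening systems of this general kind tend to share one shape: score a piece of text automatically, and route anything above a threshold to human reviewers. The sketch below illustrates only that shape; the terms, weights, and threshold are invented for the example and bear no relation to Facebook’s actual system.

```python
# A hypothetical sketch of text-based risk screening with human review.
# Facebook does not disclose its method: the terms, weights, and
# threshold below are invented for illustration only.

RISK_TERMS = {"goodbye": 0.4, "burden": 0.3, "hopeless": 0.5, "end it all": 0.9}
REVIEW_THRESHOLD = 0.8  # assumed cutoff for escalating to reviewers

def risk_score(post: str) -> float:
    """Sum the weights of the risk terms that appear in the post."""
    text = post.lower()
    return sum(weight for term, weight in RISK_TERMS.items() if term in text)

def triage(posts):
    """Return the posts whose score reaches the review threshold."""
    return [p for p in posts if risk_score(p) >= REVIEW_THRESHOLD]

posts = [
    "Great hike this weekend!",
    "I feel hopeless and like a burden to everyone. Goodbye.",
]
for flagged in triage(posts):
    print("Escalate to human review:", flagged)
```

A real deployment would use a far more sophisticated model than keyword weights, but the structure — an automated score, a threshold, then human review — is the only part of the design Facebook has described publicly.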

Facebook does not disclose how the screening works, why posts are flagged as alarming, or how reviewers decide that the police should be informed. The company reports that it made 3,500 calls to the police in the last year but does not know how many suicides were prevented. The company told the New York Times that it doesn’t track the outcomes of its calls for ‘privacy reasons.’

I study the genetic prediction of common diseases and regularly review the algorithms behind direct-to-consumer genetic tests. I find it a red flag when, in the absence of scientific evidence, companies don’t disclose how well their tests work or how useful the test results are.
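That concern applies with particular force to rare outcomes. As a rough illustration — with every number below assumed purely for the sake of the example, not drawn from any published figure about Facebook’s classifier — even an accurate classifier produces mostly false alarms when the event it screens for is rare:

```python
# Back-of-the-envelope positive predictive value for a rare outcome.
# Every number here is an assumption made for illustration, not a
# published figure about Facebook's classifier.

sensitivity = 0.90   # assumed: 90% of true cases are flagged
specificity = 0.99   # assumed: 1% of non-cases are falsely flagged
prevalence = 0.0001  # assumed: 1 in 10,000 posts precedes an attempt

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"Share of flagged posts that are true cases: {ppv:.1%}")  # ~0.9%
```

Under these assumed numbers, roughly 99 of every 100 flagged posts would not be followed by an attempt. Only evidence about the system’s actual performance, not the mere existence of an algorithm, can tell us how often its flags are right.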

It is far from obvious that imminent suicide attempts can be unambiguously identified from Facebook posts, so it cannot be taken for granted that the company’s suicide prevention ‘works.’ Here are two questions the company…

Cecile Janssens
Professor of epidemiology | Emory University, Atlanta USA | Writes about (genetic) prediction, critical thinking, evidence, and lack thereof.