Facebook’s AI Can Detect Suicide Risk and Intervene

Facebook’s new “proactive detection” artificial intelligence (AI) technology will scan all posts for patterns of suicidal thoughts. When the AI flags a post, Facebook will send mental health resources to the at-risk user or their friends, or contact local first responders. By intervening proactively instead of waiting for users to report concerning posts, Facebook aims to shorten the time it takes to send help.
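Facebook has not published implementation details, but conceptually this is a text classifier feeding a triage step. Here is a minimal sketch in Python of that shape, where the pattern list, weights, thresholds, and routing messages are all illustrative assumptions, not Facebook’s actual system:

```python
# Illustrative sketch only: Facebook has not published its model or thresholds.
# A real system would use a trained classifier, not a keyword list.
from dataclasses import dataclass

# Hypothetical phrases a classifier might weight heavily (assumption).
RISK_PATTERNS = {
    "want to die": 0.9,
    "kill myself": 0.9,
    "no reason to live": 0.7,
    "saying goodbye": 0.5,
}

RESOURCE_THRESHOLD = 0.5   # assumed cutoff: send mental health resources
ESCALATE_THRESHOLD = 0.85  # assumed cutoff: contact local first responders

@dataclass
class Post:
    user_id: str
    text: str

def risk_score(post: Post) -> float:
    """Score a post by its strongest matching pattern (stand-in for a model)."""
    text = post.text.lower()
    return max((w for p, w in RISK_PATTERNS.items() if p in text), default=0.0)

def triage(post: Post) -> str:
    """Route a post as the article describes: resources first, escalation last."""
    score = risk_score(post)
    if score >= ESCALATE_THRESHOLD:
        return f"escalate: notify first responders near user {post.user_id}"
    if score >= RESOURCE_THRESHOLD:
        return f"support: send mental health resources to {post.user_id} and friends"
    return "no action"

if __name__ == "__main__":
    print(triage(Post("u123", "I just want to die, there's no point anymore")))
```

The key design point the article highlights is the proactive trigger: the triage runs on every post as it is created, rather than waiting for a friend to file a report.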

Netflix has taken a similar step: it reportedly emailed a user out of concern after they watched The Office in its entirety in 5-10 days.

I am very pleased with Facebook’s and Netflix’s work to make the world a better place by preventing and intervening in suicide. Best practice when calling 911 in a mental health crisis is to request a CIT (crisis intervention team) trained police officer, because responders without that training can make these situations worse. Since Facebook’s AI has the ability to contact local first responders directly, it matters that there have been a number of cases where people of color struggling with mental health called 911 for help and were then murdered by police when they arrived on site (example here). We hope that this work is handled with care and that everyone is treated with dignity.