Artificial Intelligence Based Suicide Prediction

24 Pages. Posted: 11 Feb 2019. Last revised: 8 Jan 2020.

Mason Marks

Harvard University - Harvard Law School; Harvard University - Edmond J. Safra Center for Ethics; Gonzaga University - School of Law; Yale University - Information Society Project

Date Written: January 29, 2019

Abstract

Suicidal thoughts and behaviors are an international public health problem, contributing to 800,000 deaths and up to 25 million nonfatal suicide attempts each year. In the United States, suicide deaths have increased steadily for two decades, reaching 47,000 per year and surpassing annual motor vehicle deaths. This trend has prompted government agencies, healthcare systems, and multinational corporations to invest in artificial intelligence-based suicide prediction algorithms. This article describes these tools and the underexplored risks they pose to patients and consumers.

AI-based suicide prediction is developing along two separate tracks. In “medical suicide prediction,” AI analyzes data from patient medical records. In “social suicide prediction,” AI analyzes consumer behavior derived from social media, smartphone apps, and the Internet of Things (IoT). Because medical suicide prediction occurs within the context of healthcare, it is governed by the Health Insurance Portability and Accountability Act (HIPAA), which protects patient privacy; the federal Common Rule, which protects the safety of human research subjects; and general principles of medical ethics. Medical suicide prediction tools are developed methodically in compliance with these regulations, and their developers’ methods are published in peer-reviewed academic journals. In contrast, social suicide prediction typically occurs outside the healthcare system, where it is almost completely unregulated, and corporations maintain their suicide prediction methods as proprietary trade secrets. Despite this lack of transparency, social suicide predictions are deployed globally and affect people’s lives every day, yet little is known about their safety or effectiveness.

Though AI-based suicide prediction has the potential to improve our understanding of suicide while saving lives, it also poses risks that remain underexplored. These include stigmatization of people with mental illness, the transfer of sensitive personal data to third parties such as advertisers and data brokers, unnecessary involuntary confinement, violent confrontations with police, exacerbation of mental health conditions, and paradoxical increases in suicide risk.

Keywords: artificial intelligence, AI, suicide, mental health, machine learning, depression, health, privacy, big data

JEL Classification: I1, I12, I14, I18, K23

Suggested Citation

Marks, Mason, Artificial Intelligence Based Suicide Prediction (January 29, 2019). 18 Yale Journal of Health Policy, Law, and Ethics 98 (2019); 21 Yale Journal of Law & Technology 98 (2019). Available at SSRN: https://ssrn.com/abstract=3324874

Mason Marks (Contact Author)

Harvard University - Harvard Law School

1563 Massachusetts Avenue
Cambridge, MA 02138
United States

Harvard University - Edmond J. Safra Center for Ethics

124 Mount Auburn Street
Suite 520N
Cambridge, MA 02138
United States

Gonzaga University - School of Law

721 N. Cincinnati Street
Spokane, WA 99220-3528
United States

Yale University - Information Society Project

P.O. Box 208215
New Haven, CT 06520-8215
United States

Paper statistics

Downloads: 1,095
Abstract Views: 6,671
Rank: 21,823