Ethical Risks of Autonomous Products: The Case of Mental Health Crises on AI Companion Applications
44 Pages. Posted: 12 Sep 2022
Date Written: August 2022
Increasingly, some products do not merely automate a piece of our lives but act as autonomous agents. When these technologies are not yet perfected, what risks do they pose? Here we explore the case of AI companion apps. Although these apps are designed for companionship rather than therapy, automated text analysis of human-AI conversations on these apps reveals that consumers nonetheless discuss mental health and find these discussions among the most engaging, increasing the chance that they will also consult the apps in times of crisis. Given this, we submit mental health crisis messages to these apps and categorize whether the responses are appropriate: whether they recognize the crisis, respond empathetically and helpfully, and provide a mental health resource. We find that most apps respond inappropriately, raising reputational and regulatory risks for brands and welfare risks for consumers (e.g., encouraging them to harm themselves or others, or making them feel invalidated). These findings broaden our understanding of ethically relevant risks arising from the unconstrained nature of autonomous products.
Funding Information: Funding for this research was provided in part by Harvard Business School.
Conflict of Interest: None to declare.
Keywords: Autonomy, Artificial Intelligence, Chatbots, New Technology, Brand Crises, Ethics, Mental Health, AI and Machine Learning, Well-being, Health, Applications and Software