Why Most Resist AI Companions
Harvard Business School Marketing Unit Working Paper No. 25-030
32 Pages
Posted: 15 Jan 2025
Date Written: January 4, 2025
Abstract
Chatbots can now form emotional relationships with people and alleviate loneliness, a growing public health concern. Behavioral research provides little insight into whether everyday people are likely to use these applications, and why. We address this question in the context of “AI companion” applications, which are designed to provide people with synthetic interaction partners. Study 1 shows that people believe AI companions are more capable than human companions in the respects advertised as relevant to relationships (being more available and non-judgmental). Even so, they view AI companions as incapable of realizing the underlying values of relationships, such as mutual caring, and judge relationships with them as not ‘true’ relationships. Study 2 provides further insight into this belief: people believe relationships with AI companions are one-sided (rather than mutual) because they see AI as incapable of understanding and feeling emotion. Study 3 finds that interacting with an AI companion increases acceptance by changing beliefs about the AI’s advertised capabilities, but not about its ability to realize the true values of relationships, demonstrating the resilience of this belief against intervention. In short, despite the potential loneliness-reducing benefits of AI companions, we uncover fundamental psychological barriers to adoption, suggesting these benefits will not be easily realized.
Keywords: generative AI, chatbots, artificial intelligence, algorithm aversion, loneliness