Should a Chatbot Show It Cares? Toward Optimal Design of Chatbot Personality via Emotion Recognition and Sentiment Analysis

25 Pages · Posted: 26 Jun 2023

Chen Elyashar Shedletzky

Tel Aviv University - Coller School of Management

Inbal Yahav

Tel Aviv University - Coller School of Management

Sagit Bar-Gill

Massachusetts Institute of Technology (MIT); Tel Aviv University - Coller School of Management

Date Written: June 21, 2023

Abstract

Artificial intelligence (AI) chatbots are commonly designed to exhibit human-like behaviors, such as expressions of empathy or humor. Yet, it is unclear whether users always respond positively to such anthropomorphism—a concern that is particularly salient in customer-service interactions, a common application of chatbots. We explore the potential to tailor a customer-service chatbot’s “personality” to the customer with whom it is interacting, so as to maximize customer satisfaction, while relying solely on information provided in the interaction itself. First, to inform experimental design, we analyze customer-service interactions on Twitter between eBay customers and human representatives. We characterize the interplay between the sentiments expressed by the customer, those expressed by the service agent, and the customer’s resulting satisfaction levels. We find that caring language used by human customer service agents is not universally associated with increased satisfaction. Next, we run an online experiment simulating common customer-service scenarios, in which participants interact with a chatbot that uses either caring or neutral language. While a caring (vs. neutral) chatbot response often (weakly) increases customer satisfaction, this is not always the case. We identify cases in which a caring chatbot response decreases user satisfaction—specifically, this occurs in scenarios eliciting frustration, when the customer’s message is of neutral sentiment polarity. Thus, a caring chatbot is not always optimal. This work will inform chatbot design in customer service settings, improving customer satisfaction with chatbot interactions without relying on any personal or historical user data.
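The abstract's central design idea, conditioning the chatbot's tone on the sentiment expressed in the customer's message alone, can be illustrated with a short sketch. The paper does not specify an implementation; the snippet below is a hypothetical Python example using NLTK's VADER sentiment scorer, with illustrative polarity thresholds and canned replies that are assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' implementation): score the sentiment
# polarity of an incoming customer message and use it to choose between a
# "caring" and a "neutral" reply, using no personal or historical user data.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Illustrative canned responses (assumed for this sketch).
CARING_REPLY = "I'm really sorry about the trouble -- let me fix this for you right away."
NEUTRAL_REPLY = "Thanks for reaching out. Here is how to resolve the issue."

def choose_reply(customer_message: str) -> str:
    """Pick a chatbot tone based solely on the current message."""
    polarity = sia.polarity_scores(customer_message)["compound"]  # in [-1, 1]
    if polarity <= -0.05:
        # Clearly negative message: caring language tends to (weakly) help.
        return CARING_REPLY
    # Neutral-polarity messages in frustrating scenarios are where the paper
    # finds caring language can backfire, so default to a neutral tone.
    return NEUTRAL_REPLY

if __name__ == "__main__":
    print(choose_reply("My package never arrived and nobody answers my emails!"))
    print(choose_reply("How do I change the shipping address on my order?"))
```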

Keywords: Chatbots, AI-Human Interaction, Sentiment Analysis, NLP, User Behavior

JEL Classification: M15, M31

Suggested Citation

Elyashar Shedletzky, Chen and Yahav, Inbal and Bar-Gill, Sagit, Should a Chatbot Show It Cares? Toward Optimal Design of Chatbot Personality via Emotion Recognition and Sentiment Analysis (June 21, 2023). Available at SSRN: https://ssrn.com/abstract=4487314 or http://dx.doi.org/10.2139/ssrn.4487314

Chen Elyashar Shedletzky

Tel Aviv University - Coller School of Management

Inbal Yahav

Tel Aviv University - Coller School of Management

Tel Aviv
Israel

Sagit Bar-Gill (Contact Author)

Massachusetts Institute of Technology (MIT)

77 Massachusetts Avenue
50 Memorial Drive
Cambridge, MA 02139-4307
United States

Tel Aviv University - Coller School of Management

Tel Aviv
Israel
