AI use in marketing is on the rise, particularly to augment customer experiences. One example is the deployment of digital conversational agents ("chatbots") to address a variety of customer service needs, including replacing human service agents on websites, social media, and messaging services. The chatbot market is forecast to exceed $1.34 billion by 2024. The trend is toward companies anthropomorphizing (humanizing) their chatbots by giving them names, avatars, and personalities. However, people expect anthropomorphized chatbots to perform better than non-anthropomorphized ones, which can lead to expectancy violations (high expectations of performance that are not met). This particularly matters for angry customers. Angry customers are the most likely to suffer when another's performance falls short of expectations, because they feel the strongest need to achieve a desirable outcome. Anger also evokes an action orientation, a tendency to hold others responsible, and a tendency to respond punitively. Thus, angry customers are much less satisfied with anthropomorphized chatbots than non-angry customers, who show a slight preference for anthropomorphized chatbots.
Crolic, Cammy, Felipe Thomaz, Rhonda Hadi, and Andrew T. Stephen (2021), “Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions,” Journal of Marketing.
Chatbots have become common in digital customer service contexts across many industries. While many companies choose to humanize their customer service chatbots (e.g., giving them names and avatars), little is known about how anthropomorphism influences customer responses to chatbots in service settings. Across five studies, including an analysis of a large real-world dataset from an international telecommunications company and four experiments, the authors find that when customers enter a chatbot-led service interaction in an angry emotional state, chatbot anthropomorphism has a negative effect on customer satisfaction, overall firm evaluation, and subsequent purchase intentions. However, this is not the case for customers in non-angry emotional states. The authors uncover the underlying mechanism driving this negative effect (expectancy violations caused by inflated pre-encounter expectations of chatbot efficacy) and offer practical implications for managers. These findings suggest it is important to both carefully design chatbots and consider the emotional context in which they are used, particularly in customer service interactions that involve resolving problems or handling complaints.
Special thanks to Holly Howe (Ph.D. candidate at Duke University) and Demi Oba (Ph.D. candidate at Duke University) for their support in working with authors on submissions to this program.