Strategies for Leveraging AI in the Customer Experience

Aaron Garvey and Luca Cian

Companies today are tapping into artificial intelligence (AI) more than ever, using it to interact with consumers in various ways. In contrast to other forms of technology, AI is a computer-based system capable of performing and integrating multiple tasks that otherwise require human intelligence. AI technologies currently serve as customer-facing agents of the firm (e.g., online chatbots, service robots), act as core attributes of interactive products (e.g., Siri, Alexa), and play an integral role in the new product development process (e.g., social media content algorithms). Benefits at every step of the customer journey (e.g., lower costs, a greater ability to meet customer needs and process complex information) have driven this trend (Puntoni et al. 2021). However, simply having AI technology doesn’t automatically lead to a better customer experience and could, in some cases, deter customers. Although it is not yet clear whether consumer responses to AI differ from responses to technology more generally, recent research has shown that firms should be judicious in how they incorporate and depict their use of AI in consumer-related contexts.

Below, we provide three research-based strategies that can help firms improve customer experience when using AI-based services and products (see Kim et al. [2023] for a more comprehensive review of the academic literature in this area). Our first section focuses on the design of AI interfaces; our second considers whether to implement AI systems that replace humans; and our third looks at AI mistakes after implementation, consumer reactions to these mistakes, and how to respond as an organization.


Designing AI Interfaces: Giving AI a Human Touch

A cornerstone of customer experience is positive interactions with representatives of the firm, and AI chatbots and robots are increasingly used to engage directly with customers. AI agents and products can be designed to possess humanlike traits; for example, a chatbot can display a human avatar, a robot can have an expressive face, or a voice assistant can have a masculine voice. Many marketing managers might believe that the more humanlike an AI agent appears, the better consumers will respond. However, researchers have discovered that making an AI more humanlike can have decidedly mixed effects on customer experience.

Recent research has revealed that humanizing an AI agent can indeed harm customer experience in many situations. Garvey, Kim, and Duhachek (2023) found that AI humanization can undermine customer experience when product and service offerings fall short of expectations. In their study, when an AI representative of the rideshare firm Uber was depicted as humanlike (rather than machinelike), consumers faced with an unexpectedly overpriced offer were less satisfied, less likely to purchase, and less likely to reengage with the firm in the future. This effect was due to consumers perceiving the humanlike AI as more self-serving when offering the unexpectedly “unfair” price.

Consumers are also less likely to disclose sensitive personal information, such as medical history or embarrassing life experiences, to a humanlike (vs. machinelike) AI, according to work by T. Kim et al. (2022). The authors revealed that humanlike AI introduces worries about social judgment, but machinelike AI lessens this concern. Similarly, studies by Usman et al. (2024) revealed that when an AI representative provided flattering feedback to a consumer (e.g., “You have a wonderful personality!”), humanization led to suspicion of ulterior motives on the part of the AI that decreased purchase intentions.

Humanized AI, particularly robots, can also “creep out” consumers. In an article published in the Journal of Marketing Research (JMR), Mende et al. (2019) found that service robots with humanlike faces and anatomies can seem uncanny and threatening to consumers, harming the service experience. The authors revealed that making service robots more machinelike alleviated this discomfort.

However, making an AI more humanlike can also help firm outcomes. Studies by T. Kim et al. (2022) revealed that humanized AI is seen as more able to empathize with users and provide emotional support. Luo et al. (2019) showed that letting consumers assume an AI chatbot was a human resulted in significantly higher purchase rates than if its artificial nature was revealed, again due to higher perceived empathy on the part of the humanized AI. Similarly, Garvey, Kim, and Duhachek (2023) found that in the case of a better-than-expected price offer, humanization led to perceptions of benevolent intentions on the part of an AI agent that improved satisfaction and reengagement with the firm.

The key takeaway? In situations where consumers are dealing with unexpectedly good news from the company, seeking empathy, or requiring emotional support, humanizing an AI can foster goodwill and improve the customer experience. However, when delivering unexpectedly bad news, soliciting sensitive information, or entering an adversarial interaction, humanizing an AI can drive suspicions that deter customers from engaging.
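
For teams building chatbot or voice interfaces, this decision rule is simple enough to encode directly in a persona-selection layer. The Python sketch below is a minimal, hypothetical illustration of that idea; the function, the persona options, and the context labels are our own simplified assumptions, not taken from the studies above.

```python
from enum import Enum, auto

class Persona(Enum):
    HUMANLIKE = auto()    # e.g., human avatar, first-person voice, expressive tone
    MACHINELIKE = auto()  # e.g., robotic avatar, neutral system voice

# Interaction contexts where the research summarized above suggests
# humanization tends to help versus hurt the customer experience.
HUMANIZE = {"good_news", "empathy_seeking", "emotional_support"}
MECHANIZE = {"bad_news", "sensitive_disclosure", "adversarial_negotiation"}

def select_persona(context: str) -> Persona:
    """Pick a chatbot persona for a given interaction context (hypothetical)."""
    if context in HUMANIZE:
        return Persona.HUMANLIKE
    if context in MECHANIZE:
        return Persona.MACHINELIKE
    # When the valence of the interaction cannot be predicted, default to
    # machinelike, the lower-suspicion option in the findings above.
    return Persona.MACHINELIKE

print(select_persona("bad_news"))           # Persona.MACHINELIKE
print(select_persona("emotional_support"))  # Persona.HUMANLIKE
```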

Implementing AI Instead of Employing Humans: A Delicate Balance

When attempting to maximize customer experience, managers should be aware that consumers prefer to interact with human employees over AI in some situations but prefer AI in other situations (J.H. Kim et al. 2022). Generally, consumers shy away from using AI for tasks that involve human emotions, tastes, or social awareness. For example, in an article published in JMR, Castelo, Bos, and Lehmann (2019) showed that consumers were less comfortable relying on an AI to provide dating advice or recommend jokes compared with a human. This aversion extends to situations involving sensory experiences, such as food or travel recommendations (Longoni and Cian 2022), and to highly personalized tasks, such as medical diagnoses (Castelo, Bos, and Lehmann 2019; Longoni, Bonezzi, and Morewedge 2019; Longoni and Cian 2022). AI aversion is also more likely for products that enable self-expression (Leung, Paolacci, and Puntoni 2018) and for news production (Longoni, Cian, and Kyung 2023). In addition, consumers react more positively when a decision that favors them is made by a human rather than an AI (Garvey, Kim, and Duhachek 2023; Yalcin et al. 2022). For example, in a study by Yalcin et al. (2022) in JMR, consumers were happier when their application to a prestigious country club was approved by a person rather than an AI, as they assumed that a human was better equipped to understand their personal qualities and behaviors.

However, people prefer AI systems over humans under certain conditions, such as when objectivity and unbiased thought are important (e.g., scheduling events, analyzing data, giving directions; Castelo, Bos, and Lehmann 2019) and for utilitarian products such as financial advice (Longoni and Cian 2022).

Longoni and Cian (2022) also explored the case in which AI is leveraged to assist and augment human intelligence—that is, when humans and AI work together. The authors found that consumers are more receptive to AI recommenders, even in the case of a hedonic goal (e.g., finding a tasty recipe), if the AI recommender assists and amplifies a human recommender who retains the role of ultimate decision maker. In this case, people believe that the human decision maker is able to compensate for the AI’s relative perceived incompetence in the hedonic realm. The authors found the reverse effect in the case of a utilitarian goal. In other words, whether the goal is hedonic or utilitarian, people find the best recommendations to be the ones made by an AI and a human together.

What does this mean for marketing practitioners seeking to maximize customer experience? It highlights the need for a strategic blend of AI and human interaction, recognizing that while consumers prefer a human touch for emotionally charged and personal tasks, they appreciate the efficiency and objectivity of AI for analytical and utilitarian decisions. This understanding should guide how AI is implemented, ensuring that technology enhances rather than detracts from the customer experience.
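
A minimal way to act on this guidance is to triage incoming customer tasks by type before assigning a handler, as in the Python sketch below. The task labels and handler names are hypothetical simplifications for illustration; a real deployment would need a validated task taxonomy and a fallback for ambiguous cases.

```python
# Task categories drawn loosely from the findings above (Castelo, Bos, and
# Lehmann 2019; Longoni and Cian 2022); the labels are illustrative only.
AI_SUITED = {"scheduling", "data_analysis", "directions", "financial_advice"}
HEDONIC_OR_PERSONAL = {"food_recommendation", "travel_recommendation",
                       "dating_advice", "medical_diagnosis"}

def route_task(task: str) -> str:
    """Return a recommended handler for a customer task (hypothetical)."""
    if task in AI_SUITED:
        return "ai_agent"
    if task in HEDONIC_OR_PERSONAL:
        # Per Longoni and Cian (2022), AI becomes acceptable for hedonic and
        # personal tasks when a human retains the final decision, so pair
        # the two rather than excluding AI outright.
        return "human_decision_maker_with_ai_assist"
    return "human_employee"  # conservative default for unclassified tasks

for task in ("data_analysis", "food_recommendation", "complaint_handling"):
    print(task, "->", route_task(task))
```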

Recovering from AI Mistakes After Implementation

AI systems are susceptible to errors despite technological advancements. In the last three years, hundreds of high-profile AI failures have been reported across various sectors, from industry to government, with consequences ranging from financial losses to damaged brand reputation. For example, Michigan’s flawed automated system erroneously charged tens of thousands of residents with fraud and seized millions of dollars in their wages (De la Garza 2024). Interestingly, researchers have found that consumers respond differently to AI failures compared with human errors, presenting both challenges and opportunities for improving customer experience.

In instances where AI, rather than a human, is responsible for a mistake that reflects poorly on a company’s brand, such as a product recall or a social media misstep, consumers often exhibit a more forgiving attitude. For example, Srinivasan and Sarial-Abi (2021) showed consumers a real tweet from the New York Times announcing the recall of 4.8 million Fiat Chrysler vehicles because of a defect in the cruise control. When the defect was due to an error made by an AI, evaluations of the Fiat Chrysler brand were less negative than when the error was attributed to a human. This phenomenon stems from the perception that AI possesses less agency and, therefore, bears less responsibility for adverse outcomes.

For firm representatives that provide advice and guidance, such as financial advisors or health consultants, consumers punish incorrect advice more harshly when it comes from an AI than from a human—unless the AI has the capacity to learn from its mistakes (Dietvorst, Simmons, and Massey 2015). This suggests that AI’s perceived competence and learning capabilities play a significant role in shaping customer perceptions.

Moreover, AI failures in public services can lead to broader distrust in AI technologies as a whole, a phenomenon termed “algorithmic transference.” Longoni, Cian, and Kyung (2023) found that algorithmic failures—in calculating benefits for low-income people or determining unemployment insurance fraud, for example—are generalized more broadly than human failures. That is, consumers tend to generalize a negative experience with one AI system into distrust of AI in general, undermining the perceived legitimacy of core public institutions.

For marketers, understanding these nuances in consumer perceptions of AI errors is crucial for enhancing customer experience. Though AI can bring many benefits, deploying faulty AI systems prematurely can have unintended negative consequences. By prioritizing transparent and adaptable AI applications, particularly in areas where they excel, marketers can mitigate the risks associated with AI integration. Additionally, being mindful of sensitive contexts and proactively addressing potential failures can help maintain consumer trust and uphold brand integrity.
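
To make “proactively addressing potential failures” concrete, a firm could template its customer-facing error notices around the two levers the research identifies: who (or what) made the error, and whether the system can learn from it. The Python sketch below is a hypothetical illustration; the function name and wording are ours, not drawn from the cited studies.

```python
def draft_error_notice(error_source: str, can_learn: bool, domain: str) -> str:
    """Compose a customer-facing notice after a service error (hypothetical)."""
    parts = [f"We identified an error in our {domain} service."]
    if error_source == "ai":
        # Consumers tend to judge brands less harshly when an algorithm,
        # rather than a person, is responsible (Srinivasan and Sarial-Abi 2021).
        parts.append("The error was made by an automated system, which has "
                     "been taken offline for review.")
    else:
        parts.append("The error occurred during manual processing.")
    if can_learn:
        # Emphasizing a system's capacity to learn softens consumer penalties
        # for incorrect AI advice (Dietvorst, Simmons, and Massey 2015).
        parts.append("The system has been updated based on this case to "
                     "prevent a recurrence.")
    parts.append("Affected customers will be contacted directly.")
    return " ".join(parts)

print(draft_error_notice("ai", can_learn=True, domain="billing"))
```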

Key Takeaways

Designing AI Interfaces

  • AI can be strategically given humanlike or machinelike traits to enhance customer experience.
  • Humanizing AI can enhance the customer experience when positive news delivery, empathy, or emotional support is crucial.
  • However, humanlike AI may lead to suspicion and deter customers during negative interactions or when asking for sensitive information—use machinelike AI in these situations.

Using AI Instead of Human Employees

  • Strategically blend AI and human employees for customer interactions.
  • Consumers prefer human interaction for highly personalized, emotionally charged, or sensory tasks.
  • AI is favored for analytical and routine decisions, especially when objectivity is paramount.

Recovering from AI Mistakes

  • Transparency and adaptability in AI applications are vital to maintaining brand integrity.
  • Consumers are generally more forgiving of AI (vs. human) errors due to perceptions of AI having less agency and, therefore, less responsibility.
  • However, incorrect AI advice may be punished more harshly by consumers.

Citation

Garvey, Aaron M. and Luca Cian (2024), “Strategies for Leveraging AI in the Customer Experience,” Impact at JMR. Available at: https://www.ama.org/marketing-news/strategies-for-leveraging-ai-in-the-customer-experience/.

References

Castelo, Noah, Maarten W. Bos, and Donald R. Lehmann (2019), “Task-Dependent Algorithm Aversion,” Journal of Marketing Research, 56 (5), 809–25.

De la Garza, Alejandro (2024), “States’ Automated Systems Are Trapping Citizens in Bureaucratic Nightmares with Their Lives on the Line,” Time (May 28), https://time.com/5840609/algorithm-unemployment.

Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey (2015), “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General, 144 (1), 114–26.

Garvey, Aaron M., TaeWoo Kim, and Adam Duhachek (2023), “Bad News? Send an AI. Good News? Send a Human,” Journal of Marketing, 87 (1), 10–25.

Kim, Jun Hyung, Minki Kim, Do Won Kwak, and Sol Lee (2022), “Home-Tutoring Services Assisted with Technology: Investigating the Role of Artificial Intelligence Using a Randomized Field Experiment,” Journal of Marketing Research, 59 (1), 79–96.

Kim, TaeWoo, Li Jiang, Adam Duhachek, Hyejin Lee, and Aaron M. Garvey (2022), “Do You Mind if I Ask You a Personal Question? How AI Service Agents Alter Consumer Self-Disclosure,” Journal of Service Research, 25 (4), 499–504.

Kim, TaeWoo, Umair Usman, Aaron M. Garvey, and Adam Duhachek (2023), “Artificial Intelligence in Marketing and Consumer Behavior Research,” Foundations and Trends in Marketing, 18 (1), 1–93.

Leung, Eugina, Gabriele Paolacci, and Stefano Puntoni (2018), “Man Versus Machine: Resisting Automation in Identity-Based Consumer Behavior,” Journal of Marketing Research, 55 (6), 818–31.

Longoni, Chiara, Andrea Bonezzi, and Carey K. Morewedge (2019), “Resistance to Medical Artificial Intelligence,” Journal of Consumer Research, 46 (4), 629–50.

Longoni, Chiara and Luca Cian (2022), “Artificial Intelligence in Utilitarian vs. Hedonic Contexts: The ‘Word-of-Machine’ Effect,” Journal of Marketing, 86 (1), 91–108.

Longoni, Chiara, Luca Cian, and Ellie J. Kyung (2023), “Algorithmic Transference: People Overgeneralize Failures of AI in the Government,” Journal of Marketing Research, 60 (1), 170–88.

Luo, Xueming, Siliang Tong, Zheng Fang, and Zhe Qu (2019), “Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases,” Marketing Science, 38 (6), 937–47.

Mende, Martin, Maura L. Scott, Jenny van Doorn, Dhruv Grewal, and Ilana Shanks (2019), “Service Robots Rising: How Humanoid Robots Influence Service Experiences and Elicit Compensatory Consumer Responses,” Journal of Marketing Research, 56 (4), 535–56.

Puntoni, Stefano, Rebecca W. Reczek, Markus Giesler, and Simona Botti (2021), “Consumers and Artificial Intelligence: An Experiential Perspective,” Journal of Marketing, 85 (1), 131–51.

Srinivasan, Raji and Gülen Sarial-Abi (2021), “When Algorithms Fail: Consumers’ Responses to Brand Harm Crises Caused by Algorithm Errors,” Journal of Marketing, 85 (5), 74–91.

Usman, Umair, TaeWoo Kim, Aaron M. Garvey, and Adam Duhachek (2024), “The Persuasive Power of AI Ingratiation: A Persuasion Knowledge Theory Perspective,” Journal of the Association for Consumer Research, forthcoming.

Yalcin, Gizem, Sarah Lim, Stefano Puntoni, and Stijn M.J. van Osselaer (2022), “Thumbs Up or Down: Consumer Reactions to Decisions by Algorithms Versus Humans,” Journal of Marketing Research, 59 (4), 696–717.

Aaron M. Garvey is Associate Professor and Bloomfield Professor of Marketing, Gatton College of Business and Economics, University of Kentucky.

Luca Cian is the Killgallon Ohio Art Associate Professor of Business Administration, Darden School of Business, University of Virginia.
