When Algorithms Go Bad: How Consumers Respond

Raji Srinivasan and Gülen Sarial-Abi

Marketers increasingly rely on algorithms to make important decisions. A perfect example is the Facebook News Feed: you do not know why some of your posts show up in some people's News Feeds and others do not, but Facebook does. Or how about Amazon recommending books and products for you? All of these decisions are driven by algorithms. But algorithms are not perfect. They can fail, and some fail spectacularly. For example, Facebook allowed advertisers to target offensive categories of people, such as "Jew haters." An advertiser could buy a $30 ad aimed at an audience that would respond positively to topics like "why Jews ruin the world" and "Hitler did nothing wrong." In a Facebook post, COO Sheryl Sandberg said she was "disgusted and disappointed [by] those words" and announced changes to the company's ad tools.

A couple of years ago, chatbots were supposed to take the world by storm, replacing customer service reps and making the online world a chatty place to get information. In March 2016, Microsoft released an algorithm-driven chatbot named Tay that people, specifically 18- to 24-year-olds, could interact with on Twitter. Tay, in turn, would tweet publicly for the masses. But in less than 24 hours, learning from foul-mouthed young users, Tay became a full-blown racist, and Microsoft pulled it down almost immediately.

Add in the glare of social media, and a small glitch can quickly turn into a PR nightmare. Yet we know little about consumers' responses to brands following such brand harm crises.
 
A new study in the Journal of Marketing offers actionable guidance to managers on the deployment of algorithms in marketing contexts. First, our research team finds that consumers penalize brands less when an algorithm (vs. a human) causes the error behind a brand harm crisis. In addition, consumers' perceptions of the algorithm's lower agency for the error, and its resulting lower responsibility for the harm caused, mediate their responses to the brand following such a crisis.

Second, when the algorithm appears more human, that is, when it is anthropomorphized (vs. not), uses machine learning (vs. not), or performs a subjective (vs. objective) or interactive (vs. non-interactive) task, consumers' responses to the brand are more negative following a brand harm crisis caused by an algorithm error. In such contexts, marketers would be wise to exercise heightened vigilance in deploying and monitoring algorithms and to set aside resources for managing the aftermath of brand harm crises caused by algorithm errors.

Our study also generates insights about how to manage the aftermath of brand harm crises caused by algorithm errors. Managers can highlight the role of the algorithm and its lack of agency for the error, which may attenuate consumers' negative responses to the brand. However, highlighting the role of the algorithm will worsen the situation, strengthening consumers' negative responses, when the algorithm is anthropomorphized or uses machine learning, or when the error occurs in a subjective or interactive task.

Finally, insights from an intervention study indicate that marketers should not publicize human supervision of algorithms (which may actually be effective in fixing the algorithm) in communications with customers following brand harm crises caused by algorithm errors. When they use technological supervision of the algorithm, however, they should publicize it. The reason? Consumers respond less negatively when the algorithm is under technological supervision following a brand harm crisis.
 
Overall, our findings suggest that when algorithms used in marketing fail, people are more forgiving of them than they are of humans. We see this as a silver lining to the growing use of algorithms, and their failures, in practice.

Read the full article

Read the authors’ slides for sharing this material in your classroom.

From: Raji Srinivasan and Gülen Sarial-Abi, "When Algorithms Fail: Consumers' Responses to Brand Harm Crises Caused by Algorithm Errors," Journal of Marketing.

Go to the Journal of Marketing

Raji Srinivasan is Sam Barshop Professor of Marketing Administration, University of Texas at Austin, USA.

Gülen Sarial-Abi is Associate Professor of Marketing, Copenhagen Business School, Denmark.