Your AI’s Ethical Lapses Could Be Causing CX Disasters

Sarah Steimer


Artificial intelligence has become a core part of the customer experience, but it can only be as well-rounded as the data it uses. Marketers can guide the ethics of AI on the back end to produce positive CX on the front end.

Artificial intelligence is presented as the opposite of natural intelligence, the kind demonstrated by humans and other animals. By that definition, AI would appear to be free of the social neuroses and discrimination that can plague humans. But machine learning originates with human makers, meaning those shortcomings can be passed along through algorithms and training data.

AI is increasingly customer-facing: It’s at work when you ask Siri for details about an upcoming trip or turn on Netflix and see recommendations based on your viewing habits. AI touches numerous points along the customer journey, meaning its limitations can have organization-wide consequences.

Susan Etlinger, an industry analyst at Altimeter and author of the research report “The Customer Experience of AI,” has explored the ways different industries use AI and the effects consumers feel as a result. “It seems to me that ethical AI and ethical data use are part of customer experience,” she says.


Etlinger’s report found that leaders everywhere from large companies—such as Microsoft, Adobe and IBM—to small startups are developing ethical guidelines and best practices for their AI use. Similarly, experts at the AI Now Institute released a 2017 report that provides recommendations on ethics and governance, noting that “New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI.”

As organizations try to manage ethical concerns with AI, marketers can start by considering the ways they influence ethics.

Data Input and Discrimination

Type “gymnast” into Google’s image search, and the vast majority of the top results are female, as are the results for “nurse.” The term “parents” shows almost exclusively heterosexual couples.

The results to these searches are driven by AI, which isn’t explicitly taught to discriminate. Rather, these prejudices are the result of the data submitted to the AI algorithm. The Google image search function is far from the most egregious example of this bias. Joy Buolamwini, a researcher at the Massachusetts Institute of Technology Media Lab, found gender-recognition AIs from IBM, Microsoft and Megvii could identify a person’s gender from a photograph 99% of the time, provided the photos were of white men. The AIs misidentified the gender of as many as 35% of the “darker-skinned women” in the experiment. [Editor’s note: The terminology “darker-skinned” was used by Buolamwini in her study and refers to skin types that rank IV, V or VI on the Fitzpatrick scale, a six-point scale for classifying skin color, where I is the lightest and VI is the darkest. Skin types ranking I, II or III were classified as “lighter-skinned” in the study.] 
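The audit’s core technique is straightforward to reproduce on any classifier a team ships: break the single aggregate accuracy number out by subgroup. Below is a minimal Python sketch using pandas and made-up data; the labels and groupings are illustrative stand-ins, not the study’s actual dataset.

```python
import pandas as pd

# Hypothetical audit data: one row per test photo, with the model's
# prediction, the ground-truth label and a Fitzpatrick skin-type group.
results = pd.DataFrame({
    "actual":    ["F", "F", "M", "M", "F", "M", "F", "M"],
    "predicted": ["M", "F", "M", "M", "M", "M", "F", "M"],
    "skin_type": ["IV-VI", "I-III", "I-III", "IV-VI",
                  "IV-VI", "I-III", "I-III", "IV-VI"],
})

# A single aggregate number can look acceptable on its own...
overall = (results.actual == results.predicted).mean()
print(f"overall accuracy: {overall:.0%}")

# ...while accuracy per subgroup (here by skin type and gender, the
# intersections Buolamwini examined) exposes the disparity.
per_group = (
    results.assign(correct=results.actual == results.predicted)
           .groupby(["skin_type", "actual"])["correct"]
           .mean()
)
print(per_group)
```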

In another instance, a Palestinian man was arrested in Israel in 2017 after posting a photo of himself on Facebook posing near a bulldozer. The social platform’s automatic translation software interpreted his caption to say “attack them,” but it was wrong: The Arabic phrase for “good morning” and “attack them” are similar, and the software mistook the harmless post as threatening.

“One of the things that’s really endemic to AI and to machine-learning technology is that it has to learn from data, and the data we train it with comes from people,” Etlinger says. “People have biases, and the data absorbs all those biases. Some of them are things we explicitly state, and some of those are errors of omission.”

Customer relationship management systems have historically powered marketing intelligence, but Kathryn Hume, vice president of product and strategy at Integrate.ai, says these systems often capture data about fewer customers than are actually served. Because it can be difficult to engage every single customer and get their feedback, surveys and net promoter scores, the data collected often don’t provide a full picture.

“You can look at successful customers that you know a lot about and use [their behavior] to make mappings to new customers … to make guesses,” Hume says. The marketing consequences of these guesses are often not dire: The wrong marketing is presented to a potential customer. It doesn’t resonate. An opportunity is missed. But the consequences can be worse. In the worst-case scenario, a brand presents its marketing to a prospect who finds it offensive. For example, an algorithm trained on a customer base of white males may incorrectly target a new customer who identifies with neither of those traits. The prospect can feel alienated and strongly put off, despite the brand’s intention to deliver a personalized offer. Hume says marketers should experiment to manage risk and gather user-group feedback when testing AI algorithms.
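One lightweight guardrail suggested by Hume’s point is to check, before trusting the model’s guesses, whether the training data resembles the population the model will actually score. A rough sketch in Python; the segment labels and the 10-point gap threshold are assumptions, not a prescription.

```python
import pandas as pd

# Hypothetical data: customers the model learned from vs. the broader
# prospect pool it will be used to target.
trained_on = pd.Series(["white male"] * 80 + ["other"] * 20, name="segment")
prospects = pd.Series(["white male"] * 45 + ["other"] * 55, name="segment")

train_share = trained_on.value_counts(normalize=True)
prospect_share = prospects.value_counts(normalize=True)

# Segments far scarcer in the training data than among prospects are
# the ones most likely to receive tone-deaf, mistargeted messaging.
gap = (prospect_share - train_share).fillna(prospect_share)
print(gap[gap > 0.10])  # assumed threshold: 10 percentage points
```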

Infographic courtesy of Susan Etlinger, from her report “The Customer Experience of AI”

“AI lives and breathes on feedback,” Hume says. “Engagement with customers creates great training sets for the systems and participatory experiences for the customer. [Marketers] should engage the customer as much as possible and reduce the scope down to a small test bed. Learn from the small one, and then gradually expand to a larger population.”
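Hume’s small-test-bed loop can be sketched directly. Everything here is illustrative: deploy_to() stands in for real campaign tooling, and the cohort sizes and quality bar are assumptions rather than recommendations.

```python
import random

def deploy_to(n_customers: int) -> list[bool]:
    # Stand-in for running the AI-driven experience for a cohort and
    # collecting one positive/negative feedback signal per customer.
    return [random.random() < 0.7 for _ in range(n_customers)]

COHORT_SIZES = [500, 5_000, 50_000]  # gradually widening audiences
MIN_POSITIVE_RATE = 0.6              # assumed bar for expanding further

for size in COHORT_SIZES:
    feedback = deploy_to(size)
    positive_rate = sum(feedback) / len(feedback)
    print(f"cohort of {size}: {positive_rate:.0%} positive")
    if positive_rate < MIN_POSITIVE_RATE:
        print("Below the bar; retrain on this feedback before expanding.")
        break
```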

When biases do surface, they can also serve as a check on the company: If the algorithm produces AI experiences that treat different people unfairly, it may mean there’s a submarket the organization hasn’t adequately addressed.

Transparency

Some AI is so advanced that interacting with it can feel like talking with a real person. As the technology improves, there may be a case for alerting customers that they’re talking with a machine. When Google previewed Duplex, its phone-calling AI assistant, it sounded like a human making a hair salon appointment, complete with verbal tics like “um” and “ah.”

“There’s an ethical issue there,” Etlinger says. Most often when consumers contact businesses and hear, “This call may be monitored for quality assurance purposes,” they have a reasonable expectation that the data of the call will be captured, she says, but not all consumers will respond to a bot the same way they would to a real person. “Some people say it doesn’t matter or shouldn’t matter, some people say it matters hugely.”

Experts and companies will need to figure out whether an AI interaction can serve both the customer and the business while still disclosing that a machine is on the other end. Some have recommended, in the case of Google Duplex, introducing the AI up front so the human can decide how they want to interact with it.
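In practice, the up-front disclosure pattern amounts to a greeting that identifies the bot plus an escape hatch to a human. A hedged sketch follows; the wording and function names are hypothetical, not drawn from any real assistant’s API.

```python
def open_conversation() -> str:
    # Disclose the AI before anything else, and offer a human option.
    return ("Hi, this is an automated assistant calling on behalf of "
            "Acme Salon. I can book your appointment, or connect you "
            "with a person. Which would you prefer?")

def route(reply: str) -> str:
    # Let the customer's answer decide how the interaction proceeds.
    if "person" in reply.lower() or "human" in reply.lower():
        return "Of course. Connecting you with a team member now."
    return "Great! What day works best for your appointment?"

print(open_conversation())
print(route("I'd rather talk to a person"))
```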

“At worst, it could be a feeling of exposure, of having something shared in a way you didn’t anticipate,” Etlinger says. “You might make more of an effort to be chatty with a real person because you’re trying to develop a relationship with them. It can feel like a waste of time if you discover you’re not talking to a person.”

Relationship-building, or understanding what the system already knows, is another big customer experience issue in AI. If you’re assisted by the same person time and again at a retailer, that associate will likely remember you and offer a certain level of empathy and shared history. With AI, it’s unclear if the machine remembers a prior interaction with a customer—and it’s unlikely to be empathetic.

Collaborate and Get Excited

Etlinger advocates for a closer relationship between data scientists and businesspeople in pursuit of better AI. “You can’t—as in the old days—give a marketer or a technologist a set of business requirements and expect them to spit out something that’s 85% ready,” she says.

The onus is not only on data scientists and programmers, who are focused on optimizing for what they’re told. The responsibility also lies with marketers to dig into the data and find relationships between different segments of people. Etlinger says there could come a time when AI erases the need for traditional demographics, creating an opportunity to segment around behavioral and attitudinal data instead.

“The truth is that we’re marching very quickly toward an algorithmic future, and it’s going to be a much more effective and efficient way of doing a lot of things—not everything, but a lot of things,” she says. “There’s no closing your eyes and hoping it’s not going to happen. It’s a real opportunity for people to start thinking about what we can make more predictable, what we can make more probabilistic and what we can just make better.” 

Sarah Steimer is a writer, editor, podcast producer, and yoga teacher living in Chicago. She has written for Marketing News, Chicago magazine, Culture magazine, the Pittsburgh Post-Gazette, and other outlets.
