Research Insight | Patients Trust AI More When the News Is Bad
Artificial intelligence has the potential to transform healthcare—speeding diagnoses, improving access, and easing pressure on overstretched systems. Yet despite these benefits, many consumers remain uneasy about letting an algorithm assess their health. This research sheds light on why that skepticism persists and when patients are most (and least) willing to trust AI-driven medical advice.
Across five studies, the researchers found a surprising pattern: people are less willing to follow AI recommendations when the diagnosis is good news (e.g., “your symptoms don’t require medical care”). When AI tells them something is wrong, they’re more inclined to listen. This is because good-news assessments from AI feel less trustworthy, especially among consumers with high health anxiety.
Anxiety plays a powerful role: people who fear getting seriously ill place significantly less trust in AI-based diagnoses, even when those diagnoses indicate the person is fine. But the study also reveals a solution. Social proof—such as testimonials or data showing many satisfied users—helps anxious consumers feel more comfortable relying on AI and increases their willingness to follow its recommendations.
For healthcare managers and marketers, AI acceptance isn’t just about accuracy—it’s about psychology. With the right messaging, design choices, and patient segmentation, AI can become a trusted partner in care.
What You Need to Know
- Consumers are less willing to follow medical recommendations from an AI (vs. human) when the medical diagnosis is good (i.e., “your symptoms don’t require medical care”) vs. bad (i.e., “your symptoms are worrisome, and you may require urgent care”).
- Healthcare providers could prioritize AI-based recommendations for patients with no history of health anxiety.
- In healthcare marketing, social proof (e.g., the number of satisfied customers recommending the AI service) can be a highly effective tool to ease the minds of worried consumers and build trust in AI-based recommendations.
Abstract
Artificial intelligence (AI) in medicine offers a unique opportunity to improve the global health system. However, consumers remain skeptical about AI’s ability to accurately assess their medical condition. The five studies here provide insights into consumers’ reluctance to use AI-produced health care recommendations. Consumers are less willing to follow a medical recommendation from AI (vs. from a human) when the medical diagnosis provides health results that are good (i.e., symptoms do not require medical care) versus bad (i.e., symptoms are worrisome and may require urgent care) (Study 1a). The effect is mediated by consumers’ perception of diagnosis trustworthiness (Study 1b) and enhanced by consumers’ health anxiety score (Study 2). Providing social proof (e.g., number of satisfied customers recommending the service) reduces the negative effect of health anxiety on consumers’ trust in the medical diagnosis and increases their willingness to follow the AI’s recommendations (Study 3a). The findings provide insights into the psychological drivers of acceptance of automated health care and suggest possible actions to overcome consumers’ reluctance to follow AI medical recommendations.
Piotr Gaczek, Rumen Pozharliev, Grzegorz Leszczyński, and Marek Zieliński, “Overcoming Consumer Resistance to AI in General Health Care,” Journal of Interactive Marketing, 58 (2/3), 321–38. doi:10.1177/10949968221151061.