
Research Insight | When Consumers Prefer Human Advice over AI—And What Policymakers Should Know

As policymakers and governmental organizations increasingly rely on generative AI to offer advice across domains, a key question arises: How do people perceive AI-generated versus human-generated recommendations? Some studies suggest a strong preference for human advice, while others find that consumers accept AI advice just as readily. This meta-analysis reconciles these mixed findings by distinguishing two perspectives: (1) the bystander perspective, in which individuals evaluate advice without acting on it, and (2) the decision-maker perspective, in which individuals must decide whether to follow the advice.

Results show a strong “human superiority effect” in decision-making contexts: people generally prefer human advice when they must act on it. This preference fades in bystander settings, where AI advice is judged equally valid. Psychological distance also plays a key role: in psychologically distant contexts (e.g., decisions with low personal relevance), people are more receptive to AI advice. For example, individuals are more likely to adopt health plans recommended by humans, owing to perceived empathy and trust; yet they rate AI-generated plans just as highly when not required to act on them.

Based on these findings, organizations should deploy human advisors in high-stakes, emotionally sensitive areas (such as healthcare) and reserve AI for low-stakes or scalable tasks. This division of labor can improve efficiency, trust, and satisfaction while informing ethical AI policy.


What You Need to Know

  • Firms should use AI for decisions involving greater psychological distance, such as future planning or routine tasks, while reserving human advisors for emotionally charged or self-relevant scenarios to enhance consumer trust and acceptance.
  • Developing AI with humanlike qualities such as empathetic language and natural voices can foster trust and engagement, particularly in emotionally intensive contexts like healthcare or personal counseling.
  • Organizations should combine the computational efficiency of AI with the empathy and adaptability of human advisors in complex or sensitive decision-making contexts to balance practicality and consumer preferences.

Abstract

Many policymakers and governmental organizations have started using generative artificial intelligence (AI) to provide advice to individuals. However, prior research paints an unclear picture of individuals’ receptiveness to outputs generated by AI relative to those from human advisors. While some studies show that individuals prefer outputs generated by humans over AI, others show the opposite pattern. To reconcile these mixed findings, this research differentiates two perspectives in which relative preferences have been widely examined: (1) a bystander perspective, where consumers evaluate the content generated by human versus AI agents, and (2) a decision-maker perspective, where consumers decide whether to accept recommendations made by the agents. The authors find that although there is a general trend of preferring human advice over AI advice in individual decision-making—exhibiting a “human superiority effect”—there is no significant difference between human and AI content preferences in bystander evaluations. Additionally, psychological distance constitutes an important contextual moderator explaining the relative preference for human versus AI recommendations. Specifically, when decision-making circumstances are perceived as psychologically distant (e.g., low personal relevance), the human superiority effect is attenuated. Theoretical contributions are discussed, along with practical implications for businesses and governmental organizations.
