Special Issue Editors: Shintaro Okazaki, Yuping Liu-Thompkins, Dhruv Grewal, and Abhijit Guha
Given the growing use and implications of generative AI (GenAI), this special issue seeks to offer new, pertinent insights into how individuals and firms can and should address the technology, as well as which policies and regulations are necessary to ensure its promise is not overcome by its perils. It brings together nine articles that collectively examine the multifaceted (potential) effects of GenAI on marketing practices and its associated public policy implications. Read the editorial here.
Articles in the Special Issue Include:
“Generative AI in Marketing: Promises, Perils, and Public Policy Implications,” by V. Kumar, Philip Kotler, Shaphali Gupta, and Bharath Rajan
By evaluating the pattern of generative AI (GAI) use by businesses in marketing, this study aims to understand the subsequent impact on society and to develop policy implications that promote its beneficial use. To this end, the authors develop an organizing framework that contends that the use of GAI models by businesses for marketing purposes creates promises and perils for society through a specific business process. This business process is represented by the action → capabilities → transformation → impact link in the proposed framework. Additionally, the authors find that the level of technology infrastructure, skilled personnel, and data access moderates the influence of GAI on businesses’ ability to develop technology-driven capabilities. Furthermore, adaptive leadership and management strategies moderate the impact of these capabilities on technology-enabled business transformations. This is the first study to critically evaluate the use of GAI in marketing from a public policy perspective, and it concludes with an agenda for future research.
“Generative AI in Marketing and Principles for Ethical Design and Deployment,” by Erik Hermann and Stefano Puntoni
Generative AI (GenAI) is breaking new ground in emulating human capabilities, and content generation may only be the beginning. In this work, the authors systematize and illustrate promising areas of application of GenAI in marketing. They lay out a conceptual framework along two dimensions: (1) GenAI impact (i.e., human enhancement, human replacement) and (2) the marketing cycle stage (i.e., marketing research, marketing strategy formulation, marketing actions related to the marketing mix instruments). Based on the AI ethics literature, the authors then introduce a set of principles (i.e., ASSURANCE: Autonomy, Security, SUstainability, Representativeness, Accountability, Nonbiasedness and nondiscrimination, Crediting, Empowerment) to enable marketers to address the risks and challenges of GenAI and thereby achieve beneficial outcomes for companies, consumers, and society at large. Finally, they delineate the public policy implications for each principle and illustrate avenues for future research.
“The Human Superiority Effect in Advice Taking: A Multimethod Exploration and Implications for Policy Makers and Governmental Organizations,” by Manhui Jin, Zhiyong Yang, Traci L. Freling, and Narayanan Janakiraman
Many policy makers and governmental organizations have started using generative artificial intelligence (AI) to provide advice to individuals. However, prior research paints an unclear picture of individuals’ receptiveness to outputs generated by AI relative to those from human advisers. While some studies show that individuals prefer outputs generated by humans over AI, others present the opposite pattern. To reconcile these mixed findings, this research differentiates two perspectives in which relative preferences have been widely examined: (1) a bystander perspective, in which consumers evaluate the content generated by human versus AI agents, and (2) a decision-maker perspective, in which consumers accept recommendations made by these agents. The authors find that although there is a general trend of preferring human advice over AI advice in individual decision-making—exhibiting a “human superiority effect”—there is no significant difference between human and AI content preferences in bystander evaluations. Additionally, psychological distance constitutes an important contextual moderator explaining the relative preference for human versus AI recommendations. Specifically, when decision-making circumstances are perceived to be psychologically distant (e.g., low personal relevance), the human superiority effect is attenuated. Theoretical contributions are discussed, along with practical implications for businesses and governmental organizations.
“From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models,” by Wolfgang Messner, Tatum Greene, and Josephine Matalone
Large language models (LLMs) are able to engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human–technology interaction and the way businesses operate. However, technologies based on generative artificial intelligence are known to hallucinate, misinform, and display biases introduced by the massive datasets on which they are trained. Existing research indicates that humans may unconsciously internalize these biases, which can persist even after they stop using the programs. In this study, the authors explore the cultural self-perception of LLMs by prompting ChatGPT (OpenAI) and Bard (Google) with value questions derived from the GLOBE (Global Leadership and Organizational Behavior Effectiveness) project. The findings reveal that LLMs’ cultural self-perception is most closely aligned with the values of English-speaking countries and countries characterized by economic competitiveness. It is crucial for all members of society to understand how LLMs function and to recognize their potential biases. If left unchecked, the “black-box” nature of AI could reinforce human biases, leading to the inadvertent creation and training of even more biased models.
Read more: “Is AI Sparking a Cognitive Revolution That Will Lead to Mediocrity and Conformity?” (The Conversation)
“Experiential Narratives in Marketing: A Comparison of Generative AI and Human Content,” by Yingting Wen and Sandra Laporte
As generative AI technologies advance, understanding their capability to emulate human-like experiences in marketing communication becomes crucial. This research examines whether generative AI can create experiential narratives that resonate with humans in terms of embodied cognition, affect, and lexical diversity. An automated text analysis reveals that while reviews generated by ChatGPT 3.5 exhibit lower levels of embodied cognition and lexical diversity than reviews by human experts, they display more positive affect (Study 1a). However, human raters struggle to notice these differences, rating half of the selected AI-generated reviews higher in embodied cognition and usefulness (Study 1b), although they do detect instances of hallucination in the AI-generated content. For social media posts, the more sophisticated ChatGPT 4 model demonstrates superior perceived lexical diversity and leads to higher purchase intentions in unbranded content compared with human copywriters (Study 2). Together, the comparative studies reveal the models’ strengths in presenting positive emotions and influencing purchase intent while identifying limitations in embodied cognition and lexical diversity relative to human-authored content. The findings have implications for marketers and policy makers in understanding generative AI’s potential and risks in marketing.
Read more: “Where GenAI Succeeds—and Fails—in Creating Experiential Marketing Narratives” (Research Insight)
“Generative AI Solutions to Empower Financial Firms,” by Shashank Shaurya Dubey, Vivek Astvansh, and Praveen K. Kopalle
The advent of generative AI (GenAI) has caused consternation across the industrial landscape, and the financial industry is no exception. The scramble to find GenAI solutions in the financial industry has led to a proliferation of academic and practitioner literature on the subject, yet this body of knowledge remains scattered. The authors offer four deliverables. First, using a survey of the literature and interviews with managers at financial firms, they create a funnel-shaped, two-stage framework of how GenAI can empower financial businesses. The top stage comprises seven GenAI value propositions for financial firms, condensed into the EMPOWER acronym. The bottom stage includes three functions for each proposition. Second, the authors propose ten novel GenAI-based applications spanning the five verticals of financial services, thus extending the current industrial focus of GenAI applications. Third, they outline the benefits and risks of these GenAI applications, visualizing them in a benefit–risk matrix to assist financial managers in prioritizing these applications. Fourth, they propose research questions to guide academic research and policy making at the intersection of GenAI and finance.
“AI-Based Financial Advice: An Ethical Discourse on AI-Based Financial Advice and Ethical Reflection Framework,” by Lisa Brüggen, Robert Gianni, Floris de Haan, Jens Hogreve, Darian Meacham, Thomas Post, and Minou van der Werf
This article presents a first step in identifying the ethical issues of AI-based financial advice. Consumers must navigate an ever more complex array of financial decisions. (Generative) AI-based financial advice may increase access to and acceptance of financial advice and strengthen consumers’ financial well-being. However, significant ethical challenges exist in designing, developing, and deploying such advice. To analyze its perils and pitfalls, the authors develop a definition of what constitutes good AI-based financial advice and provide a first assessment of the associated ethical challenges. The iterative multistakeholder approach, including workshops and semistructured interviews with consumers and experts, results in an ethical discourse structured around the four fundamental values of the European Commission’s Ethics Guidelines for Trustworthy AI—human autonomy, explicability, fairness, and prevention of harm—with trust as the overall objective. Based on the analyses, the authors derive a simple yet comprehensive AI Ethics Framework for Financial Advice. This reflection framework guides public policy makers, managers of financial service providers, and technology developers in incorporating ethical discourse into the development and deployment of (generative) AI-based financial advice.
“Empowering Consumers with Disabilities Through Generative AI Cocreation of Servicescape Information,” by Meike Eilert and Stefanie Robinson
Generative artificial intelligence (GenAI) has sparked considerable innovation in the servicescape to improve consumer experiences, primarily due to its ability to interact with consumers and personalize information based on the consumer’s input. The authors develop a framework grounded in the social model of disability to propose how GenAI can serve as a tool for cocreating servicescape information whose design would otherwise be disabling. Consumers with disabilities can use this technology to modify, transform, prioritize, and generate servicescape information to fit their individual accessibility needs and mitigate disabling servicescape conditions, resulting in more positive servicescape experiences, better access, and inclusion. Institutions such as industry, government, and higher education play a dual role in this framework: while they are responsible for creating servicescapes with disabling information design, they are also key collaborators that support consumers with disabilities in cocreating GenAI solutions and ensuring their effective and safe use. This framework has important implications for the universal design of servicescapes and technologies supporting consumers with disabilities, as well as for the various institutions that can collaborate to facilitate inclusive and safe technology-enabled, smart environments.
“When AI Wears Many Hats: The Role of Generative Artificial Intelligence in Marketing Education,” by Unnati Narang, Vishal Sachdev, and Ruichun Liu
Generative artificial intelligence (GAI) is increasingly being integrated into marketing education and is reshaping the skill sets required in marketing careers. While research has highlighted the promise and perils of incorporating GAI into education, there remains a need for a comprehensive framework to guide its effective use. In this research, the authors conduct a multipronged analysis, including a review of marketing course syllabi, a survey of marketing educators, and follow-up qualitative interviews. Building on role theory and the community of inquiry model, they propose that GAI can assume three roles in marketing education: tutor, teammate, and tool. Each role influences teaching, social, and cognitive presence differently, shaping the learning experience and preparing workplace-ready marketing graduates. For instance, as a tutor, GAI can help students grasp theoretical concepts, while as a teammate, it can foster collaboration by supporting brainstorming and problem-solving activities. However, ethical considerations such as data privacy, plagiarism, dependency on AI, and fairness in assessment must be addressed to ensure GAI’s responsible adoption in marketing education. The authors provide concrete examples of how to integrate GAI carefully into marketing courses and discuss implications for marketing educators, learners, and policy makers.