Journal of Public Policy & Marketing Call for Papers: Generative AI: Promises and Perils

Guest Editors: Shintaro Okazaki (King’s Business School), Yuping Liu-Thompkins (Old Dominion University), Dhruv Grewal (Babson College), and Abhijit Guha (University of South Carolina)

Submission Deadline: March 15, 2024

Manuscripts are currently being solicited for an upcoming Journal of Public Policy & Marketing (JPP&M) special issue dedicated to Generative AI: Promises and Perils.

Background

Generative artificial intelligence (AI) has received much attention in recent years. Generative AI can automatically and quickly create large amounts of content, such as text, code, simulations, images, and videos, in response to human-provided prompts (Peres et al. 2023). ChatGPT is perhaps the best-known generative language application at the moment. Launched by the software company OpenAI, this chatbot can generate sophisticated text indistinguishable from that produced by a human. Within two months of its launch, by January 2023, ChatGPT had attracted 100 million users, becoming the fastest-growing consumer application in history (Eysenbach 2023). Another example of generative AI is DALL-E 2, which can create unique, high-quality images by autonomously learning from textual descriptions. Generative AI is expected to have wide impact across all marketing domains, analogous to how AI more generally is expected to broadly impact marketing (Davenport et al. 2020; Guha et al. 2021). The table below lists select generative AI applications that have been developed.

Selected Applications of Generative AI

Year   Application Type   Platform/Software   Company
2014   Chatbot            Alexa               Amazon
2016   Music              AIVA                Aiva Tech
2017   Chatbot            Lex                 Amazon
2018   Chatbot            Xiaoice             Microsoft
2020   Chatbot            Meena               Google
2020   Music              Jukebox             OpenAI
2021   Code               CodeGPT             Microsoft
2021   Code               Codex               OpenAI
2021   Art                Craiyon             OpenAI
2022   Chatbot            BlenderBot          Meta
2022   Chatbot            ChatGPT             OpenAI
2022   Code               CodeParrot          CodeParrot
2022   Code               CoPilot             Microsoft
2022   Art                DreamStudio         Stability
2022   Art                Imagen              Google
2022   Education          Minerva             Google
2022   Algorithm          AlphaTensor         DeepMind
Source: Cao et al. (2023, p. 111:22)

Generative AI can also interact with other technologies to create new content, which may have both positive and negative consequences. For example, Moreland (2023) writes about iNFTs (intelligent NFTs), which combine NFTs with generative AI. Specifically, Moreland writes “Imagine the NFT you own is given a bunch of creative information. From there, it creates its own piece of art. Let’s say you own a character that is designed as a digital creative: It writes and composes music from samples fed to it. Then, you, your community or the world in general enjoy the show that your NFT puts on. … The art created from the NFT itself brings some very unique and interesting questions to the table regarding true creation and genuine ownership.”

The rapid diffusion of generative AI tools has attracted attention to, and provoked controversy around, the ethical issues surrounding their use. As one example, generative AI can introduce and spread inaccurate, misleading, or false content. ChatGPT sometimes writes “plausible sounding but incorrect or nonsensical answers” (OpenAI 2023). Such fallacies are especially dangerous for users who are looking for accurate information and guidance. In a similar vein, generative AI can pose risks to data security and privacy. In March 2023, a bug in ChatGPT’s source code led to a data breach: some users saw conversation headings in the sidebar that did not belong to them. As another example, generative AI can produce outputs that discriminate against certain minority groups. In fact, ChatGPT has been found to exhibit gender and racial biases simply because it was trained on biased data. This should not be surprising, as generative AI is subject to the same types of bias- and ethics-related concerns as AI in general (see points made in Davenport et al. 2020). Discussions about generative AI should therefore be situated within the broader debate about concerns with AI.

Against this background, there is an urgent call for wide-ranging debate about the ethical issues associated with generative AI (Van Dis et al. 2023). This special issue intends to take part in this debate and improve our understanding of the opportunities and limitations of generative AI, with an emphasis on marketing, public policy, and societal implications.

Topics

We welcome studies that address the promises and perils relating to the use of generative AI in marketing from multidisciplinary perspectives. This may include new developments, theories, models, methods, and frameworks. Potential research questions that may be addressed include (but are not limited to):

  • What are the major opportunities and threats of generative AI in marketing?
  • What are the opportunities and potential backlash from AI-generated personalized ads? How can we increase consumer trust in these ads?
  • Noting that generative AI can be used to create deepfakes, which marketing domains will be most impacted, and how should policy makers react?
  • How can generative AI influence consumer shopping behavior? What concerns does it raise?
  • How can generative AI inform consumers? What should policy makers do to protect consumers from misinformation and bias associated with generative AI?
  • What is the impact of generative AI on the compliance with major data protection laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA)?
  • How can marketers and policy makers combat data, security, and privacy breaches caused by generative AI?
  • What is the potential impact of addiction to or excessive reliance on ChatGPT and other generative AI tools on users’ social well-being? Is there a (negative) impact beyond well-being in domains like (1) problem-solving ability, (2) creativity, and (3) grit?
  • What are the roles of policy makers, businesses, educators, training providers, and technology developers in educating and preventing the abusive use of generative AI? How should policy makers consider the trade-off between freedom for technological advance and experimentation versus control needed for limiting potential harm?
  • What are the legal implications of generative AI in terms of intellectual property, copyright, and patents? These points are valid not only across business domains but also in creative domains such as art and music.
  • How can marketing educators preserve students’ honesty and integrity in the face of potential misuse of ChatGPT for student learning and coursework? How should marketing educators trade off between suitably training students to use generative AI (as employers want job candidates who already know how to use it) and ensuring that students submit responses that truly reflect their own knowledge and learning, not responses that incorporate expert support from generative AI?

A variety of perspectives such as psychological, ethical, sociological, economic, legal, political, and critical approaches are welcome. Multidisciplinary collaboration between marketing scholars and scholars from other disciplines is especially encouraged. We are also open to a wide variety of methods, including experiments, surveys, qualitative methods, conceptual development, meta-analysis, bibliographic study, and text mining, among others.

Submission Guidelines

Submissions should follow the manuscript format guidelines for JPP&M at https://journals.sagepub.com/author-instructions/PPO. The manuscript length should not exceed 50 pages, properly formatted and inclusive of title, abstract, keywords, text, references, tables, figures, footnotes, and print appendices (web appendices do not count toward the page limit). The submission deadline is March 15, 2024.

All manuscripts should be submitted through the JPP&M online submission system at https://mc.manuscriptcentral.com/ama_jppm, from October 15, 2023 to March 15, 2024. Authors should select “Special Issue Submission” as the “Manuscript Type.” Please also note in the cover letter that the submission is for the Special Issue on Generative AI: Promises and Perils.

  • All articles will undergo double-anonymized peer review by at least two reviewers.
  • Authors will be notified of the first-round decision on their manuscript no later than May 15, 2024.
  • The anticipated publication date for the special issue is 2025.
  • For additional information regarding the special issue, please contact the guest editors at jppmSIgenerativeAI@gmail.com.

References

Cao, Yihan, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu, and Lichao Sun (2023), “A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT,” arXiv preprint (March 7), https://doi.org/10.48550/arXiv.2303.04226.

Davenport, Tom, Abhijit Guha, Dhruv Grewal, and Timna Bressgott (2020), “How Artificial Intelligence Will Change the Future of Marketing,” Journal of the Academy of Marketing Science, 48, 24–42.

Eysenbach, Gunther (2023), “The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation with ChatGPT and a Call for Papers,” JMIR Medical Education, 9 (1), e46885.

Guha, Abhijit, Dhruv Grewal, Praveen K. Kopalle, Michael Haenlein, Matthew J. Schneider, Hyunseok Jung, Rida Moustafa, Dinesh R. Hegde, and Gary Hawkins (2021), “How Artificial Intelligence Will Affect the Future of Retailing,” Journal of Retailing, 97 (1), 28–41.

Moreland, Kirsty (2023), “iNFTs: Bringing NFT Characters to Life,” Ledger (October 1), https://www.ledger.com/academy/what-are-infts.

OpenAI (2023), “Introducing ChatGPT,” (accessed June 27, 2023), https://openai.com/blog/chatgpt.

Peres, Renana, Martin Schreier, David Schweidel, and Alina Sorescu (2023), “On ChatGPT and Beyond: How Generative Artificial Intelligence May Affect Research, Teaching, and Practice,” International Journal of Research in Marketing, 40 (2), 269–75.

Van Dis, Eva A.M., Johan Bollen, Willem Zuidema, Robert van Rooij, and Claudi L. Bockting (2023), “ChatGPT: Five Priorities for Research,” Nature, 614 (7947), 224–26.