Journal of Marketing Policy for Research Transparency

What Is the Policy?

On each round of invited revisions, authors of papers that contain numerical or computational work (e.g., empirical or experimental studies, simulations, numerical testing of algorithms or heuristics) must provide the data, programs, and any other details sufficient to permit replication of all analyses reported or referred to in the paper.

Why This Policy? Why Now?

The intent of Journal of Marketing’s Research Transparency policy is to (1) ensure the availability of the material necessary to evaluate and, as appropriate, replicate findings reported in the Journal as part of a robust review process, and (2) ensure that papers published in the Journal contribute to the development of cumulative, reliable, and applicable knowledge. Closing transparency gaps and ensuring safe data retention will bolster confidence not only in individual articles but also in the larger body of knowledge offered by the Journal.

When Will the Policy Take Effect?

Journal of Marketing’s Research Transparency policy applies to all invited revisions associated with new manuscripts submitted on or after January 1, 2023 (i.e., all manuscript IDs that begin with “JM-23” or later).

What Actions Should Authors Take Through the JM Journey?

During initial submission: When authors submit a new manuscript, they will check a box that confirms their compliance with the Journal of Marketing Policy for Research Transparency.

On each round of invited revisions: Authors will submit a replication packet to JM’s Dataverse.[1] The replication packet should contain the data, programs, and any other details sufficient to permit replication of all analyses reported or referred to in the paper. For details on the format of the replication packet, please refer to Appendix 1. Replication Packet Submission Guide, and for submission instructions, please refer to Submitting Replication Packets for Revisions. For research that relies on proprietary data covered by a non-disclosure agreement, sensitive human-participant data, embargoed data, or unique data sets that required an extensive time or monetary investment to compile, authors will submit a packet that includes all data, stimuli, sample, and code corresponding to their Alternative Disclosure Plan (please refer to Appendix 2. Alternative Disclosure Plans for more information).

For each round of submissions: Authors will be asked to confirm that any AI-generated content is clearly identified within the text and acknowledged within their Acknowledgments section. Please note that AI bots such as ChatGPT should not be listed as an author. For more details on this policy, please visit this page.

Who Can Access the Replication Packet?

During the review process, the replication packet can be accessed by the processing Editor and Associate Editor. Reviewers will not be able to access the replication packet. We encourage authors to make their replication packet public after acceptance, in which case authors can make all or some of the materials available on JM’s Dataverse upon publication of the article.

Authors may choose to include an anonymized link to their replication packet, data, or materials in their submitted manuscript. If included in the submitted manuscript, the reviewers will have access to the link.

How Did the Policy Come Together?

To develop this policy, the Journal of Marketing relied extensively on existing policies for data and/or code sharing. We particularly want to acknowledge that we have modeled our policy heavily after the Data Availability Policy of Management Science, which in turn based its policy on that of the American Economic Association, the Journal of Finance Code Sharing Policy, and the Marketing Science Replication and Disclosure Policy.

In preparing this policy, the Journal of Marketing obtained guidance and input from Rajdeep Grewal, Ronald Hill, Ashlee Humphreys, John Lynch, Christine Moorman, Page Moreau, Scott Neslin, Stijn van Osselaer, Robert Palmatier, S. Sriram, Roland Rust, Alina Sorescu, Marilyn Stone, and Matt Weingarden. The policy was approved by the VP of Publications of the American Marketing Association in July 2022 and was shared with the Journal of Marketing Advisory Board as well as the Editor in Chief and Marketing Department Editors of Management Science.

[1]If your data or code files are already publicly available on another trusted repository (Harvard Dataverse, OSF, Zenodo, Figshare) with settings such that (1) the materials are accessible to the paper’s processing Editor and (2) the data are retained for five years, you do not have to reupload all the files to JM’s Dataverse (see Requirements for Materials Hosted on Another Repository).


Appendix 1. Replication Packet Submission Guide

Section A. Experiments, Field Studies, and Original Surveys

Information on Design: Authors should provide the original instructions and stimuli. For surveys, authors should provide the original survey items used.

These should be summarized as part of the discussion of experimental design in the submitted manuscript (and also provided in full in the submitted Web Appendix). The instructions should be presented in such a way that, together with the design summary, they convey the protocol clearly enough that the design could be replicated by a reasonably skilled experimentalist.

Authors should provide explanations of sample size determination, information regarding participant eligibility or selection (such as exclusions based on past participation in experiments, college major, etc.), all manipulations, and all measures collected. This should be summarized as part of the discussion of experimental design and analysis in the submitted manuscript. If exclusions are preregistered, authors should include an anonymized link to the preregistration source.
Raw Data File: Authors should provide the raw data files from all experiments reported in the paper. These should be summarized as appropriate in the submitted manuscript and provided in full as a QSF, Excel, ASCII, or text file prior to publication, with sufficient explanation to make it possible to use standard analysis programs to replicate the data analysis.
Scripts or Code: Authors should provide any scripts or code used to analyze the data. These should be summarized as appropriate in the submitted manuscript and provided in full (a minimal example appears at the end of this section). If no scripts or code were used, the authors should highlight the specific analysis tools used in the software so that an informed reader can replicate the analysis. The authors are not required to provide additional assistance to persons working with the replication materials so long as the above requirements are satisfied.
Institutional Review Board: Authors should provide institutional review board or institutional ethics committee information as appropriate given policies in place in their home location.
Preregistration as Applicable: Authors who preregister studies should provide anonymized links in the body of the paper. In the submission process, authors will be asked to attest that they have faithfully represented the preregistration process in the manuscript.
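
For illustration only, here is a minimal sketch of what a self-contained, commented analysis script might look like; the file name, variable names, and the two-condition t-test are hypothetical, not requirements of the policy.

# replicate_study1.py -- reproduces the focal test for a (hypothetical) Study 1.
# Requires: pandas, scipy. Input: study1_data.csv (one row per participant).
import pandas as pd
from scipy import stats

df = pd.read_csv("study1_data.csv")  # raw data file included in the packet

# Apply the exclusions documented in the manuscript (here: failed attention check).
df = df[df["attention_check"] == 1]

# Focal comparison: dependent variable by experimental condition.
treatment = df.loc[df["condition"] == "treatment", "dv"]
control = df.loc[df["condition"] == "control", "dv"]
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, n = {len(df)}")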

Section B. Archival/Secondary Data

Information on Design: The institutional setting and the informational items obtained from the archival data should be summarized as part of the methods discussion in the submitted manuscript (and also provided in full in the submitted Web Appendix).
Raw Data File: When the research relies on licensed data from sources such as the Census Bureau, Compustat, CRSP, FactSet, and WRDS, the authors should provide detailed instructions, along with their own code, for accessing and linking to the licensed data, sufficient for replication by others. The authors must provide a description of how previous intermediate data sets and programs were employed to create the final data set(s), if relevant (one way to document this pipeline is sketched at the end of this section).[2]

For data collated from various freely available sources, the instructions should be presented in such a way that, together with the design summary and the sources, they convey the protocol clearly enough that the design could be replicated by a reasonably skilled empiricist. The authors should provide the collated raw data file used for the analysis.
Scripts or Code: Authors should provide any scripts or code used to analyze the data. The authors should include sufficient details about the software packages, programming languages, and data formats to enable users to run the programs. The code should be suitably commented so that it can be understood by a reasonably adept user.[3]

When the research relies on licensed code, the authors should provide detailed instructions along with their own code for accessing and linking to the licensed code, sufficient for replication by others. As needed, the authors should provide either the set of test problems or a detailed description of how the test problems were generated, sufficient for replication (a seeded-generator sketch appears at the end of this section).

If no scripts or code were used, the authors should highlight the specific analysis tools used in the software so that an informed reader can replicate the analysis.

The authors are not required to provide additional assistance to persons working with the replication materials so long as the above requirements are satisfied.
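
As an illustration of documenting the pipeline from raw inputs to the final data set, consider the following sketch; the sources, file names, and merge keys are hypothetical, and for licensed sources the input files would be produced by the access instructions described above.

# build_final_dataset.py -- documents provenance from raw inputs to the final data set.
# Inputs (hypothetical): firms.csv from a public source; sales_licensed.csv
# extracted from a licensed database per the instructions in the packet's README.
import pandas as pd

firms = pd.read_csv("firms.csv")           # public source; download date noted in README
sales = pd.read_csv("sales_licensed.csv")  # licensed extract; see README for access steps

# Intermediate step: aggregate weekly sales to the firm-year level before merging.
sales_fy = sales.groupby(["firm_id", "year"], as_index=False)["revenue"].sum()

# Final data set: one row per firm-year, used by all downstream analysis scripts.
final = firms.merge(sales_fy, on="firm_id", how="inner")
final.to_csv("final_dataset.csv", index=False)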
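
Similarly, when test problems are generated rather than shared, a fixed-seed generator can make the description sufficient for replication. A minimal sketch, with a hypothetical problem structure and instance sizes:

# generate_test_problems.py -- regenerates the test instances from a fixed seed.
import numpy as np

rng = np.random.default_rng(20230101)  # fixed seed so the instances are reproducible

def make_instance(n_items):
    # One hypothetical knapsack-style test problem: weights, values, capacity.
    weights = rng.integers(1, 100, size=n_items)
    values = rng.integers(1, 100, size=n_items)
    capacity = int(weights.sum() * 0.5)
    return weights, values, capacity

# Hypothetical instance sizes; a real packet would use the sizes reported in the paper.
instances = [make_instance(n) for n in (50, 100, 200)]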

Section C. Qualitative Data (e.g., Interviews, Participant Observation, Cultural or Historical Material)

Information on Design: The institutional setting and the informational items obtained from the qualitative data should be summarized as part of the methods discussion in the submitted manuscript (and also provided in full in the submitted Web Appendix).

The authors should include the following:
• All material consulted, such as a table of interview participants (with pseudonyms), places where observation took place, or a list of all articles or documents considered in interpretation.
• An interview guide or other data collection protocol, if applicable.
• A full description of how participants or documents were selected and how data were collected.
Data Analysis and Coding Procedures: The authors should provide a table with examples from the data of key themes or categories from the findings, augmented with at least one transcribed interview annotated to illustrate the process by which themes or categories were identified.

[2]This is taken almost word for word from the AEA’s policy.

[3]Taken from the Journal of Finance Code Sharing Policy.


Appendix 2. Alternative Disclosure Plans

When the research relies on proprietary data covered by a non-disclosure agreement, sensitive human-participant data, embargoed data, or unique data sets that required an extensive time or monetary investment to compile, the authors should propose an alternative disclosure plan that is in keeping with the spirit of replicability while respecting the specific situation faced by the authors. For instance, the authors might:[4]

1. Disguise the data in a way that protects sensitive information yet allows for replication of the main results. For instance, add noise or apply multipliers to the variables (see the first sketch following this list). See Acimovic et al. (2019) for an example in which SKU weekly demand is normalized such that total demand during the life cycle of a product is equal to 1; quintile bucket information is provided for each SKU to indicate fast- and slow-selling products. When normalizing the data, the authors provided limited precision (i.e., limited decimal places) so that the original demand values cannot be reconstructed.
2. Provide all necessary statistics to populate the model so that others can replicate the study. See Shi et al. (2016) for an example in which the authors could not make the original data set public due to a non-disclosure agreement with the collaborating hospital. Instead, they provided in the paper all necessary statistics to populate their model (including both summary statistics and distributional statistics). For instance, see Figures 3 and 7 and Tables 1 and 3 for the daily/hourly patient arrival rates, the number of beds in each ward, and the distribution of patient length-of-stay.
3. Post a randomly drawn subset of the paper’s data set that could be used to replicate the results, albeit with the expectation of larger standard errors (options 3 and 4 are illustrated in the second sketch following this list).
4. Generate and post a synthetic data set that is representative of the actual data, at least for the purposes of replication. In this case, the authors need to provide some evidence that the synthetic data are a valid surrogate for the actual data. If the authors propose to share a transformed data set, the authors should disclose to the Editor the details of the process or method for creating this transformed data set.
5. Suggest a delay in the sharing of data or code, so as to allow more time to recoup the investment from building the database or algorithm. As a general guideline, a delay from publication of one year for code and two years for data would seem an acceptable balance of the competing interests of the authors and the research community.
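
To make option 1 concrete, here is a sketch of the kind of normalization used by Acimovic et al. (2019); the column names, rounding precision, and file names are hypothetical.

# disguise_demand.py -- normalizes SKU-level demand so original values cannot be recovered.
import pandas as pd

df = pd.read_csv("weekly_demand.csv")  # hypothetical columns: sku, week, demand

# Normalize so each SKU's life-cycle demand sums to 1, then round to limited
# precision so the original demand levels cannot be reconstructed.
totals = df.groupby("sku")["demand"].transform("sum")
df["demand_norm"] = (df["demand"] / totals).round(4)

# A quintile bucket per SKU flags fast- versus slow-selling products without
# revealing absolute volumes.
sku_totals = df.groupby("sku", as_index=False)["demand"].sum()
sku_totals["quintile"] = pd.qcut(sku_totals["demand"], 5, labels=[1, 2, 3, 4, 5])

out = df.drop(columns=["demand"]).merge(sku_totals[["sku", "quintile"]], on="sku")
out.to_csv("weekly_demand_disguised.csv", index=False)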
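
Options 3 and 4 can likewise be implemented mechanically, as in the following sketch; the file names are hypothetical, and the synthetic version here preserves only each numeric variable's mean and standard deviation, which the authors would need to justify as a valid surrogate.

# share_subset_or_synthetic.py -- illustrates options 3 and 4.
import numpy as np
import pandas as pd

df = pd.read_csv("full_dataset.csv")  # proprietary full data set (hypothetical)
rng = np.random.default_rng(42)       # fixed seed for reproducibility

# Option 3: a seeded random subset (re-analyses will show larger standard errors).
subset = df.sample(frac=0.10, random_state=42)
subset.to_csv("public_subset.csv", index=False)

# Option 4: a synthetic data set matching each numeric column's mean and SD.
# Independent normal draws ignore correlations; a real plan should document
# and validate whatever generator is actually used.
synthetic = pd.DataFrame({
    col: rng.normal(df[col].mean(), df[col].std(), size=len(df))
    for col in df.select_dtypes("number").columns
})
synthetic.to_csv("synthetic_dataset.csv", index=False)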

Nevertheless, in some cases, none of the above options may be workable. For instance, in health care research, the sharing of patient-level data in any form may not be possible, and creating a synthetic database may not be meaningful or may place an extraordinary burden on the authors. In these cases, the authors should provide sufficient details about the data set so that other researchers could readily generate their own data set comparable to that used in the research. This would include a data dictionary that contains a description of all variables used in the paper, so that other researchers could reconstruct these variables from their own data (an example entry appears below). See Gallino and Moreno (2014), in which the authors provide guidelines to help others replicate their analysis.
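
For instance, a few hypothetical data dictionary entries might look like the following:

variable      type     description
patient_age   integer  Age in years at admission
los_hours     numeric  Length of stay, in hours, from admission to discharge
ward_id       string   Anonymized identifier of the admitting ward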

For authors who choose to follow an alternative disclosure plan, the published paper will note that an alternative disclosure plan has been approved for the paper, in keeping with the spirit of the policy.[5]

Whether the authors’ proposed disclosure plan is acceptable remains at the discretion of the Editor team. When considering a proposed plan, the Editors will carefully weigh the pros and cons of processing a paper with potentially important or impactful research contributions that might not be readily reproducible. This consideration may well entail a trade-off between the benefits of enforcing the data disclosure policy and the benefits of publishing an important paper. Authors who have questions regarding the appropriate application of the policy for their work or who believe their work may warrant special consideration may contact the Editor in Chief prior to submission with specific queries related to data transparency.

References

Acimovic, Jason, Francisco Erize, Kejia Hu, Douglas J. Thomas, and Jan A. Van Mieghem (2019), “Product Life Cycle Data Set: Raw and Cleaned Data of Weekly Orders for Personal Computers,” Manufacturing & Service Operations Management, 21 (1), 171–76.

Gallino, Santiago and Antonio Moreno (2014), “Integration of Online and Offline Channels in Retail: The Impact of Sharing Reliable Inventory Availability Information,” Management Science, 60 (6), 1434–51.

Shi, Pengyi, Mabel C. Chou, J.G. Dai, Ding Ding, and Joe Sim (2016), “Models and Insights for Hospital Inpatient Operations: Time-Dependent ED Boarding Time,” Management Science, 62 (1), 1–28.

[4]Some of these options are taken as is from the Management Science Replication and Disclosure Policy. More explanatory details can be found there for each option.

[5]This is adapted from the Journal of Finance Code Sharing Policy.