Two Steps: A Primer on B2B Experiments

By Mahima Hada

The business-to-business (B2B) economy is almost twice as large as the business-to-consumer economy. Marketing scholars have studied the B2B sector for decades, investigating the impact of firms’ strategic decisions on critical outcomes. For example, Lawrence et al. (2019) show that investments in online sales channels increase salesperson productivity and, in turn, company profits.

While researchers can use many methods to study a strategic decision’s effect on an outcome, experiments are the gold standard for establishing causality. In B2B domains, however, we cannot easily randomly assign firms or buyers to experimental conditions. Instead, we must randomly assign B2B managers to experimental conditions and study each condition’s effect on managerial decision making and/or downstream outcomes.

Understanding B2B Experiments

When can we use experiments? Whenever we would consider using a survey.

In a survey of 511 industrial buyers, for example, Palmatier et al. (2007) studied how customer loyalty to salespeople improved vendor profits but also increased defection risk: when such salespeople left their company, they took their customers with them. The researchers showed that vendors could reduce this risk by giving their customers special treatment or status.

An experiment could augment Palmatier and colleagues’ findings by investigating why the customers left their suppliers. Was it because of loyalty to salespeople and indifference between their suppliers’ and competitors’ products? Or was it because they feared new salespeople would not understand their needs? Experiments can establish such causal relationships.

To be effective, experiments must capture the richness and complexity of context. Researchers can find it difficult to incorporate B2B contexts in experimental stimuli because of the multiple actors involved, the length of purchasing processes, and long post-purchase assessments. They must carefully select B2B experiment participants to ensure they have experience making complex decisions in contexts ranging from selecting a software-as-a-service provider to recruiting franchisees.

But the complexity of B2B experiments is not necessarily a bug or obstacle. It is an opportunity to enrich insights—if researchers remember a two-step process.

Step One: Match Study Design to Purpose

B2B experimental research predominantly uses two designs: between-subjects and conjoint. A between-subjects design is essentially A/B testing: participants are randomly assigned to two or more groups, each exposed to a different experimental condition.
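
As a minimal sketch of random assignment in a between-subjects design (the participant IDs, condition labels, and sample size below are hypothetical, not taken from any study cited here):

```python
import random
from collections import Counter

# Hypothetical participant IDs; in practice these come from a panel or sampling frame.
participants = [f"manager_{i:03d}" for i in range(1, 201)]
conditions = ["condition_a", "condition_b"]

random.seed(42)          # fix the seed so the assignment is reproducible
random.shuffle(participants)

# Deal shuffled participants evenly across the two conditions (an A/B split).
assignment = {pid: conditions[i % len(conditions)] for i, pid in enumerate(participants)}

# Sanity check: group sizes should be equal (100 and 100 here).
print(Counter(assignment.values()))
```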

For example, Jap et al. (2013) wanted to explore how B2B suppliers engaged in low-stakes opportunism, such as mildly overstating costs and concealing information. The researchers created two groups, each with 93 randomly assigned executives. Half the executives were in a “low-stakes, low-rapport” condition; the other half experienced a “low-stakes, high-rapport” condition. The researchers found that suppliers were actually more likely to indulge in small forms of opportunism when they had good rapport with their customers.
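
To illustrate how such a two-group comparison is commonly analyzed (this is not the authors’ analysis; the opportunism scores below are simulated), one might compare condition means with a two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated opportunism scores (e.g., dollars of overstated cost) for two hypothetical groups of 93.
low_rapport = rng.normal(loc=20, scale=8, size=93)
high_rapport = rng.normal(loc=25, scale=8, size=93)

# Welch's two-sample t-test: do mean opportunism levels differ across conditions?
t_stat, p_value = stats.ttest_ind(high_rapport, low_rapport, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```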

Conjoint studies use a fractional-factorial design, which allows researchers to manipulate many variables simultaneously for each experiment participant. Conditions are randomized for each individual (i.e., within-subjects), and each individual provides multiple responses, which increases the size of the final dataset. Stremersch et al. (2003), for example, wanted to understand what drives firms to outsource systems integration. To measure the many factors affecting the decision, the researchers ran a conjoint experiment among actual and prospective buyers of telecommunications systems. They studied seven variables with 55 managers. Each manager saw different combinations of variable levels and decided whether to outsource systems integration or bring it in-house, providing valuable insight into managers’ preferences for outsourcing IT activities.
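
As a rough sketch of how conjoint profiles can be built (the seven two-level attributes below are hypothetical placeholders, not the variables Stremersch et al. studied), one can enumerate the full factorial and present each respondent with a subset of profiles; in practice, the subset would come from an orthogonal or otherwise efficient fraction rather than a random sample:

```python
import itertools
import random

# Seven hypothetical two-level attributes of a systems-integration offer.
attributes = {
    "price": ["low", "high"],
    "vendor_reputation": ["unknown", "established"],
    "integration_support": ["none", "full"],
    "contract_length": ["1 year", "3 years"],
    "customization": ["standard", "tailored"],
    "training": ["not included", "included"],
    "upgrade_path": ["closed", "open"],
}

# Full factorial: 2^7 = 128 possible profiles.
names = list(attributes)
full_factorial = [dict(zip(names, combo))
                  for combo in itertools.product(*attributes.values())]

# Each respondent evaluates several profiles (within-subjects), so one manager
# contributes many observations to the final dataset.
random.seed(1)
respondent_tasks = random.sample(full_factorial, 16)

for profile in respondent_tasks[:2]:
    print(profile)
```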

Experiments can provide additional insights by incorporating blocking factors, which function like externally determined segmentation variables. For example, researchers may block on firm type (e.g., public versus private, small versus large) or purchasing situation (e.g., new buy versus rebuy) to study differences among company profiles, customer types, product lines, geographic regions, or industries. Kuhfeld, Tobias, and Garratt (1994) is a good starting point for understanding the strategy.
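
A small illustration of blocked randomization (hypothetical participants, blocks, and condition labels): participants are randomized to conditions separately within each block, so conditions stay balanced across firm types.

```python
import random
from collections import defaultdict

# Hypothetical participant pool tagged with a blocking factor (firm type).
participants = [
    {"id": "p01", "firm_type": "public"},
    {"id": "p02", "firm_type": "public"},
    {"id": "p03", "firm_type": "private"},
    {"id": "p04", "firm_type": "private"},
    {"id": "p05", "firm_type": "public"},
    {"id": "p06", "firm_type": "private"},
]
conditions = ["new_buy", "rebuy"]

# Group participants by block, then randomize to conditions within each block.
blocks = defaultdict(list)
for p in participants:
    blocks[p["firm_type"]].append(p)

random.seed(7)
assignment = {}
for firm_type, members in blocks.items():
    random.shuffle(members)
    for i, p in enumerate(members):
        assignment[p["id"]] = (firm_type, conditions[i % len(conditions)])

print(assignment)
```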

Step Two: Match Context to Respondent

B2B experimenters can use scenario-based role-playing vignettes to simulate realistic decision making (e.g., Hada, Grewal, and Lilien 2014). To develop a realistic scenario, researchers must carefully describe the industry setting (e.g., health insurance versus software), the customer interaction process (e.g., online versus in person), and participant titles (e.g., salespeople versus key account managers), among other factors. Scenarios must be constructed so that participants find them believable, and stimuli and respondent samples should be developed in tandem.
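
One lightweight way to keep vignettes consistent while varying only the manipulated elements is to fill a scenario template with each condition’s factor levels; the wording below is invented purely for illustration:

```python
# Hypothetical vignette template; only the bracketed elements are manipulated.
TEMPLATE = (
    "You are a {title} at a mid-sized {industry} firm. "
    "A supplier has contacted you {channel} about renewing your contract. "
    "Please read the details below and indicate how you would respond."
)

conditions = [
    {"title": "key account manager", "industry": "health insurance", "channel": "in person"},
    {"title": "salesperson", "industry": "software", "channel": "online"},
]

for levels in conditions:
    print(TEMPLATE.format(**levels))
    print("---")
```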

Finding managers to participate in B2B studies is expensive. Firms or industry associations (e.g., the Institute for the Study of Business Markets) may allow researchers to survey their members. Other firms, such as Dynata and Qualtrics, provide dedicated respondent panels at a cost of $21-$50 per completed response. Before contracting with a panel provider, researchers must ensure the provider can describe its panels in terms of size, industry, manager titles, and so on. Understanding a panel’s composition allows researchers to select sub-samples and create appropriate stimuli.
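
Because panel responses are costly, a quick back-of-the-envelope budget check helps before contracting with a provider; the per-response cost range comes from the figures quoted above, while the target sample size is a hypothetical choice:

```python
# Hypothetical target: 100 completed responses per condition in a two-condition study.
completes_per_condition = 100
n_conditions = 2
cost_low, cost_high = 21, 50   # dollars per completed response (range quoted above)

total_completes = completes_per_condition * n_conditions
print(f"Budget range: ${total_completes * cost_low:,} - ${total_completes * cost_high:,}")
# Budget range: $4,200 - $10,000
```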

Researchers can also find experiment participants among evening MBA or executive MBA students, for whom work experience is a critical screening factor. Jap and associates recruited executive MBA students for their 2013 experiment, while Stremersch and colleagues mailed their stimuli to managers drawn from a telecommunications license database.

How B2B Experiments Benefit Theory and Practice

Every B2B experiment should pass a crucial litmus test: Does it improve our understanding of an issue beyond merely establishing causality?

Experiments assess the combined effect of multiple variables, which may not co-occur frequently in the real world. Consider the effect of economic shocks like COVID-19 on firms’ franchising capabilities. Researchers can use experiments to examine franchisees’ reactions to franchisors’ contract changes with and without the shock present.
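
A sketch of how such a combined effect might be examined in a 2 x 2 between-subjects design (the data are simulated and the variable names hypothetical; this is not a published analysis): each participant sees one combination of contract change and economic shock, and the interaction term tests whether the shock alters reactions to the contract change.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400  # hypothetical number of franchisee participants

# Random assignment to a 2 x 2 design: contract change (yes/no) x economic shock (present/absent).
df = pd.DataFrame({
    "contract_change": rng.integers(0, 2, n),
    "shock": rng.integers(0, 2, n),
})

# Simulated outcome: willingness to renew the franchise agreement (noisy rating-scale style).
df["renew_intent"] = (
    5
    - 0.8 * df["contract_change"]
    - 0.5 * df["shock"]
    - 0.6 * df["contract_change"] * df["shock"]
    + rng.normal(0, 1, n)
)

# The interaction coefficient captures whether the shock amplifies the reaction to contract changes.
model = smf.ols("renew_intent ~ contract_change * shock", data=df).fit()
print(model.summary().tables[1])
```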

Experiments allow researchers to measure the underlying process by which causal factors affect an outcome, indicating the thought process of suppliers, customers, investors, and other stakeholders. In their 2013 experiment, Jap and colleagues showed why suppliers behaved opportunistically. One participant said, “This is part of the game, and these other sales reps will do the same to me.”

Experiments obviate the problem of “searching under the lamp post,” which often occurs when researchers rely exclusively on pre-existing or secondary data. Conjoint studies can assess multiple factors researchers cannot study with secondary data. For example, a firm’s CRM data might provide information on an outsourcing decision, but the raw numbers would not allow analysts to assess the impact of internal company knowledge on the decision.

B2B firms often carefully select existing customers to influence potential customers. For example, software provider SAS Inc. reported asking reference customers to tell potential customers “the good, the bad, and the ugly” about the firm. SAS Inc.’s intent was to improve its reference customers’ credibility. But what if the firm wanted to determine whether the strategy was effective? SAS could run an experiment using a panel of supplier-facing managers. Hada, Grewal, and Lilien studied this strategy in their 2014 experiment. They found that negative information indeed made referrals seem more credible, but potential customers nevertheless fixated on the negatives.

Summary

Experimental designs like A/B testing and conjoint studies are invaluable tools for answering questions about corporate strategy, decision making, sales processes, key account management, cross-selling, and other topics of interest to B2B companies. Practitioners and scholars can deploy experiments to gain insights complementing secondary data and providing a holistic picture.


Author Bio

Mahima Hada is Associate Professor of Marketing and Director of Marketing Analytics Programs at Baruch College, City University of New York.

Citation

Hada, Mahima (2021), “Two Steps: A Primer on B2B Experiments,” Impact at JMR, (January), Available at: https://www.ama.org/2021/01/26/two-steps-a-primer-on-b2b-experiments/

References

Hada, Mahima, Rajdeep Grewal, and Gary L. Lilien (2014), “Supplier-Selected Referrals,” Journal of Marketing, 78(2), 34-51. https://doi.org/10.1509/jm.11.0173

Jap, Sandy D., Diana C. Robertson, Aric Rindfleisch, and Ryan Hamilton (2013), “Low-Stakes Opportunism,” Journal of Marketing Research, 50(2), 216-227. https://doi.org/10.1509/jmr.10.0121

Kuhfeld, Warren F., Randall D. Tobias, and Mark Garratt (1994), “Efficient Experimental Design with Marketing Research Applications,” Journal of Marketing Research, 31(4), 545-557. https://doi.org/10.1177/002224379403100408

Lawrence, Justin M., Andrew T. Crecelius, Lisa K. Scheer, and Ashutosh Patil (2019), “Multichannel Strategies for Managing the Profitability of Business-to-Business Customers,” Journal of Marketing Research, 56(3), 479-497. https://doi.org/10.1177/0022243718816952

Palmatier, Robert W., Lisa K. Scheer, and Jan-Benedict E.M. Steenkamp (2007), “Customer Loyalty to Whom? Managing the Benefits and Risks of Salesperson-Owned Loyalty,” Journal of Marketing Research, 44(2), 185-199. https://doi.org/10.1509/jmkr.44.2.185

Stremersch, Stefan, Allen M. Weiss, Benedict G.C. Dellaert, and Ruud T. Frambach (2003), “Buying Modular Systems in Technology-Intensive Markets,” Journal of Marketing Research, 40(3), 335-350. https://doi.org/10.1509/jmkr.40.3.335.19239