How to Overcome the 3 Biggest Hurdles of Marketing Experimentation

Daniel Burstein


The best relationships have a give and take. Likewise, the best tech stack for your business should be a two-way street. Companies get value from being able to microtarget, personalize and automate with martech, but like going to lunch with a friend and telling them a good story, you don’t get up and walk away when you’re done talking. You sit and listen to what your friend has to say.

That’s why technology solutions and services that facilitate customer discovery—social media marketing, web analytics and optimization—are this year’s top investments for high-revenue companies, according to a Target Marketing magazine survey of 350 organizations.


Behavior data is one level of customer discovery, but an even more effective practice is influencing what customers will do and seeing how the changes you make affect real customer decisions. This practice can lead companies to better serve their customers. As Lynn Hunsaker wrote on Customer Think, “Thankfully, many companies have been migrating away from product-centric, month-/quarter-end-centric and competitor-centric marketing toward putting the individual or most profitable customer at the center of marketing design and delivery.”

Accurately measuring the effects of your changes on customer behavior requires rigorous experimentation, but experimentation doesn’t come naturally to most marketers. We’re creatives and businesspeople, not scientists.

To squeeze the most value and customer discoveries from your marketing technology, you need to think in a radically different way and overcome the three biggest hurdles that brands struggle with in marketing experimentation.

1. Getting out of a Rut

Marketers often fall into a checklist mentality with their testing. Testing technology is quite simple at its core: It splits a brand’s traffic between versions and measures visitor behavior. Even advanced multivariate technology that tests multiple combinations can confine experimentation to minor variations of similar variables (for example, button colors, headlines or images). If you don’t test the right things, experimenting won’t change much. If you test what you already know, you won’t discover anything new. What matters most is what you decide to test.
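Under the hood, that core is little more than a coin flip per visitor plus a tally of who converts. Here is a minimal sketch of that mechanism in Python (the variant names and conversion events are hypothetical, not drawn from any particular testing platform):

```python
import random

# Tally of visitors and conversions per variant (hypothetical names).
counts = {
    "control":   {"visitors": 0, "conversions": 0},
    "treatment": {"visitors": 0, "conversions": 0},
}

def assign_variant() -> str:
    """Randomly send a visitor to one arm of the test (50/50 split)."""
    return random.choice(["control", "treatment"])

def record_visit(variant: str, converted: bool) -> None:
    """Count the visit and, if the visitor took the desired action, the conversion."""
    counts[variant]["visitors"] += 1
    if converted:
        counts[variant]["conversions"] += 1
```

The technology handles the split and the counting; deciding what actually differs between the two arms is still on you.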

Marketing experimentation is the perfect time to “have no respect for the status quo,” as Apple’s “Think Different” ad said—to see things differently.

Simple heuristics (thought tools) can help you see different elements of your marketing message in a new light by breaking down the factors that influence conversion. For example, a heuristic we shared in the September issue of Marketing News is the MECLABS Institute conversion sequence heuristic.


One brand that broke out of its box is HealthSpire, a startup within Aetna. HealthSpire wanted to use a landing page to generate leads for its call center. Its goal was to keep the landing page concise and avoid confusion. This isn’t unique to HealthSpire—it’s an assumption I’ve heard from many marketing departments and advertising agencies: Customers want short landing pages. But HealthSpire decided to experiment with the conversion heuristic and test a longer page. HealthSpire hypothesized that the customer would be willing to deal with the increased friction from the longer page in exchange for decreased anxiety and increased clarity of the value proposition. The result was a 638% increase in call center leads.
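For readers unfamiliar with it, the conversion sequence heuristic is commonly expressed as C = 4m + 3v + 2(i - f) - 2a, where C is the probability of conversion, m is the visitor’s motivation, v is the clarity of the value proposition, i is incentive, f is friction and a is anxiety; the coefficients indicate relative weight rather than a literal calculation. A rough sketch of HealthSpire’s reasoning in those terms, using entirely made-up scores:

```python
def conversion_index(m: float, v: float, i: float, f: float, a: float) -> float:
    """Relative score based on the conversion sequence heuristic,
    C = 4m + 3v + 2(i - f) - 2a. A thought tool for weighing factors,
    not a formula that predicts an actual conversion rate."""
    return 4 * m + 3 * v + 2 * (i - f) - 2 * a

# Hypothetical 0-10 scores: the longer page adds friction (f) but
# clarifies the value proposition (v) and reduces anxiety (a).
short_page = conversion_index(m=7, v=4, i=3, f=2, a=6)  # 30
long_page = conversion_index(m=7, v=8, i=3, f=4, a=3)   # 44
```

Scored this way, the gains in clarity and reduced anxiety outweigh the added friction, which is the trade-off HealthSpire bet on.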

You’re not likely to see a big increase in performance if the messaging and creative you test is boxed into what you’ve always done. Experimenting is your chance to challenge the status quo.

2. Having Too Many Answers

We have to change not just our opinions about what will work; we also have to let go of our resistance to new ideas. We need to approach our jobs in a new way. Like many of you, I’m a professional marketer. Marketers have big personalities and are great at arguing persuasively in pitch meetings, but arguments and guesswork will only get us so far. To be good experimenters, marketers need to ask more questions and be less sure of their answers to a marketing messaging problem.

Here’s a perfect example: 15 minutes before the start of an event, I got an email from a junior marketer about a recent A/B test. The results had just come in, and his headline beat our CEO’s headline. Our CEO, Flint McGlaughlin, was just about to get on stage and present the opening session for this event. I grabbed him and showed him the results just to tease him. Flint was humble enough to say, “That’s great. Let’s open the event with it,” and proceeded to share the test results from the stage.

Society tells us to act as if we know everything, even if we have to bluff our way through. But the scientific method tells us that true strength lies in forming a testable hypothesis and taking a systematic approach to draw conclusions from evidence.

3. Accurately Answering Your Own Questions

Answers with evidence are very powerful, but you have to ensure that your evidence is accurate. Validity threats can skew the data gathered in your experiments, causing you to (very confidently) make the wrong decision. A validity threat makes it impossible to isolate the variable you’re changing—such as a headline or an entire landing page approach—to measure its effect on customer behavior.

One example of a validity threat applicable to martech is the instrumentation effect. An instrumentation effect might be a page that takes longer to render because something is erroneously loading in the background, a problem with the testing and analytics software, or emails that don’t get delivered because of a server malfunction. In those cases, you can’t be certain whether the different results were caused by the change you made to the messaging or by something in the instrumentation you used to deliver and measure the message.
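One common safeguard against instrumentation effects (my suggestion here, not something prescribed in the article) is a sample ratio mismatch check: if you asked the tool for a 50/50 split but the recorded traffic is far off, the delivery or tracking layer is probably broken. A rough sketch using a chi-square goodness-of-fit test:

```python
def sample_ratio_mismatch(n_control: int, n_treatment: int,
                          expected_ratio: float = 0.5) -> bool:
    """Chi-square goodness-of-fit check on the traffic split.

    Returns True if the observed split deviates from the expected ratio
    more than chance alone should allow (1 degree of freedom, alpha = 0.05),
    which usually signals an instrumentation problem rather than a real
    difference in customer behavior."""
    total = n_control + n_treatment
    expected_control = total * expected_ratio
    expected_treatment = total * (1 - expected_ratio)
    chi_sq = ((n_control - expected_control) ** 2 / expected_control
              + (n_treatment - expected_treatment) ** 2 / expected_treatment)
    return chi_sq > 3.84  # critical value for 1 degree of freedom at p = 0.05

# Example: 10,000 visitors recorded as 5,290 vs. 4,710 instead of ~50/50.
print(sample_ratio_mismatch(5290, 4710))  # True -> investigate the tooling first
```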

Another challenge can be the rigor of your experiments. For example, journalist Brett Dahlberg recently reported on NPR about the questionable practices of Brian Wansink, head of the Food and Brand Lab at Cornell University.

Dahlberg writes, “When his first hypothesis didn’t bear out, Wansink wrote that he used the same data to test other hypotheses.” Dahlberg quotes University of Pittsburgh statistician Andrew Althouse, who explains that studying lots of data is fine, but “p-hacking”—when researchers play with data to arrive at results that look like they’re statistically significant—is a problem.
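The discipline Althouse points to is the opposite of p-hacking: commit to one hypothesis and one success metric before the data comes in, then run a single pre-planned test on that metric. A minimal sketch for a conversion-rate A/B test, using a standard two-proportion z-test (my choice of test, not one named in the article):

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates between two variants.

    Decide the hypothesis, the metric and the sample size before the test runs;
    checking many metrics after the fact until one looks significant is the
    p-hacking the article warns against."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical result: 200/5,000 control conversions vs. 260/5,000 treatment.
print(two_proportion_z_test(200, 5000, 260, 5000))  # ~0.004, below a 0.05 threshold
```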

Biases don’t disappear the moment we decide to run an experiment. If we really want to find something, it’s human nature to find it. I drive a Nissan LEAF, and before I owned one, I never noticed LEAFs on the road. Now I see them everywhere. I could conclude anecdotally that electric vehicle adoption is really taking off, but the likelier explanation is that the data didn’t drastically change; what I was looking for did.

That’s why it’s not enough to experiment; the way we run those experiments is critical.

“For scientific testing, it’s very important to remove all possible bias that could occur during the experiment. The goal of an experiment is to prove/disprove a hypothesis, not to find a statistically significant result within the data,” says Cameron Howard, data specialist at MECLABS Institute.

Make Data-Driven Decisions with Well-Run Experiments

Experimenting is powerful. It’s the engine that drives our technological revolution. But it’s not enough to just run any marketing experiment and hope to get a result. You have to test elements that will truly affect conversion, take a question-based hypothesis approach and make sure you run a valid experiment to get reliable results.

Daniel Burstein is the senior director of content and marketing at MECLABS Institute. He oversees all content and marketing coming from the MarketingExperiments and MarketingSherpa brands while helping to shape the marketing direction for MECLABS.