Advertising Effectiveness

By Peter J. Danaher

The internet has enabled many business developments, but it has turned media allocation and planning on its head. In traditional mass media like television, advertisers can purchase a commercial slot and expect large audiences.

However, many of those reached are not interested in the advertised product or service, so a large percentage of those exposed to advertising do not respond to the message. In digital advertising, websites containing specialized content (e.g., model airplanes) allow advertisers to display their products to loyal and attentive audiences. In the social media space, Facebook delivers ad content to ideal target audiences by examining the web activity of users and their networks. Paid search advertising sends firms customers who are already “in the market” for their products, as indicated by their keyword use.

Over the past 15 years, television channels have grown in number. But the more significant change has been the exponential growth in websites supporting themselves with advertising, not to mention the rapid uptake of paid search advertising.

Advertisers have moved to new digital media outlets not only because of their ability to target customers but also because of their lower cost compared with traditional media. Furthermore, digital media allow firms to connect ad exposures and search clicks to downstream sales, a link Danaher and Dagger (2013) suggest eludes traditional media. Sethuraman, Tellis, and Briesch (2011) show that the most convincing way for firms to demonstrate advertising’s effectiveness is to tie it to sales. To do so, researchers can use two main methods: field experiments and econometric models.

Field Experiments

Targeting and retargeting customers who are more likely to respond to offers, an increasingly common practice, makes advertising appear more effective than it is. In an award-winning Journal of Marketing Research paper, Lambrecht and Tucker (2013) reported that a naive comparison between customers exposed to standard banner ads and those exposed to retargeted banner ads suggested the retargeted ads, which displayed products previously viewed, were six times more effective at generating sales. However, the consumers receiving retargeted ads had already demonstrated a predilection for the product. The researchers therefore randomly assigned consumers to a treatment group that saw retargeted, product-specific ads and a control group that saw generic product-category ads. They found the retargeted ads were less effective than the generic ads because customers were in different stages of the purchase funnel: retargeted ads work well for consumers close to purchase, but not for the larger group of customers just embarking on their search.

The use of field experiments to determine ad effectiveness has subsequently blossomed, with studies using “ghost ads” on Google (Johnson, Lewis, and Nubbemeyer 2017) and Facebook (Gordon et al. 2019) to create randomized control groups. For example, Sahni (2016) used a field experiment to show digital ads for one restaurant increased sales at competing restaurants offering similar cuisine.

Collectively, these field experiments have shown that advertising effects are often difficult to detect. For example, the study of Facebook ads by Gordon and colleagues (2019) examined 15 campaigns and found that only eight produced a statistically significant lift in sales.
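
Concretely, the lift reported in these experiments is the relative difference in conversion rates between the randomized exposed and holdout groups. The short Python sketch below uses illustrative numbers, not data from any of the cited studies, to show one common way an analyst might compute that lift and test whether it is statistically significant.

```python
# A minimal sketch (illustrative data, not from the cited studies) of how an
# analyst might estimate and test the sales lift in a randomized ad experiment.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical outcomes: (buyers, users) in each randomly assigned group
exposed_buyers, exposed_n = 1840, 100_000   # treatment: saw the focal ad
control_buyers, control_n = 1700, 100_000   # control: ad withheld (e.g., ghost ad)

exposed_rate = exposed_buyers / exposed_n
control_rate = control_buyers / control_n

# "Lift" is usually reported as the relative increase in the conversion rate
lift = (exposed_rate - control_rate) / control_rate

# Two-proportion z-test: is the difference statistically significant?
z_stat, p_value = proportions_ztest(
    count=[exposed_buyers, control_buyers],
    nobs=[exposed_n, control_n],
)

print(f"treatment {exposed_rate:.3%} vs. control {control_rate:.3%}")
print(f"lift = {lift:.1%}, z = {z_stat:.2f}, p = {p_value:.4f}")
```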

Econometric Models

The studies by Johnson, Lewis, and Nubbemeyer (2017) and by Gordon and colleagues (2019) also highlight the challenges of designing an experiment to assess digital ad effectiveness: individual customers use the internet in different ways, and providers deliver digital ads via their own online auction processes. Econometric models therefore provide a versatile alternative for gauging advertising effectiveness. And while field-experiment studies have been limited to examining one medium at a time, econometric models allow researchers to compare effectiveness across several media.

Researchers can use econometric models to examine time series data, such as weekly or monthly advertising and sales records. Dinner, van Heerde, and Neslin (2014) studied traditional and digital advertising’s effects on in-store and online sales for an upscale clothing retailer across 103 weeks. The retailer made about 85% of its sales in-store, and the researchers examined three media: traditional (i.e., total spend on newspapers, magazines, radio, television, and billboards), online banner advertising, and paid search. They found online display and paid search were more effective than traditional advertising. Although firms might expect digital advertising to influence only online sales, the researchers found it also influenced in-store sales.
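
To make this kind of analysis concrete, the sketch below fits a simple log-log regression of weekly sales on spend in three media, so each coefficient can be read as an approximate advertising elasticity. The data are simulated and the column names are assumptions for illustration; this is not the specification estimated by Dinner, van Heerde, and Neslin.

```python
# A minimal sketch of a weekly time-series advertising-response regression.
# All data are simulated; column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = 103  # roughly two years of weekly observations

df = pd.DataFrame({
    "trad_spend": rng.gamma(2.0, 5000, weeks),     # traditional media spend
    "display_spend": rng.gamma(2.0, 1000, weeks),  # online banner/display spend
    "search_spend": rng.gamma(2.0, 800, weeks),    # paid search spend
})
# Simulated sales with small positive elasticities for each medium
df["sales"] = 50_000 * (
    (1 + df["trad_spend"]) ** 0.02
    * (1 + df["display_spend"]) ** 0.05
    * (1 + df["search_spend"]) ** 0.08
    * rng.lognormal(0.0, 0.05, weeks)
)

# Log-log specification: coefficients approximate advertising elasticities
fit = smf.ols(
    "np.log(sales) ~ np.log1p(trad_spend) + np.log1p(display_spend) + np.log1p(search_spend)",
    data=df,
).fit()
print(fit.summary())
```

A log-log form is only one convenient choice; in practice analysts typically add controls such as seasonality, price, and advertising carryover (adstock) before interpreting the coefficients.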

Researchers can also use econometric models to examine single-source data linking individual-level ad exposure to sales, the strategy employed by Danaher and Dagger (2013). They examined 10 media types employed by a large retailer: television, radio, newspaper, magazines, online display ads, paid search, social media, catalogs, direct mail, and email. The researchers found traditional media and paid search effectively generated sales, while online display and social media advertising did not.
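
With single-source data, a common starting point is an individual-level purchase model, for example a logistic regression of a purchase indicator on each customer’s exposure counts by medium. The sketch below is a generic illustration with simulated data and assumed variable names, not the model specification used by Danaher and Dagger (2013).

```python
# A minimal sketch of an individual-level purchase model on single-source data.
# All exposures and purchases are simulated; variable names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000  # hypothetical customer panel

df = pd.DataFrame({
    "tv_exposures": rng.poisson(3, n),
    "search_clicks": rng.poisson(1, n),
    "display_exposures": rng.poisson(4, n),
})
# Simulate purchases with assumed effects for each medium
logit_p = (-2.0
           + 0.15 * df["tv_exposures"]
           + 0.30 * df["search_clicks"]
           + 0.01 * df["display_exposures"])
df["purchased"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Logistic regression: which media exposures are associated with purchase?
fit = smf.logit(
    "purchased ~ tv_exposures + search_clicks + display_exposures",
    data=df,
).fit(disp=False)
print(fit.params)  # positive coefficients indicate media linked to higher purchase odds
```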

Multimedia, Multichannel, and Multibrand Advertising

Danaher and colleagues (2020) also used single-source data but extended the analysis to multiple retailer-brands, two purchase channels, and three media (email, catalogs, and paid search). The data came from a North American specialty retailer selling mostly apparel, where 80% of sales were made in-store. The parent retailer owned three relatively distinct brands that operated independently but pooled customer data in a combined database, giving the researchers sales for each retailer-brand over a two-year period.

The researchers found that emails and catalogs from one retailer-brand negatively influenced sales of competing retailer-brands in the category, whereas paid search influenced only the focal retailer-brand. However, competitor catalogs often positively influenced focal retailer-brand sales among omni-channel customers. Segmenting customers by retailer-brand and channel usage revealed that those who shopped across multiple retailer-brands and both purchase channels were the group most responsive to multimedia advertising.

Summary

In the contemporary business environment of ever-increasing media channels but static advertising budgets, firms must be able to measure advertising effectiveness. Many businesses have shifted their advertising expenditure toward digital media, but multiple studies show traditional media remain effective.

How do marketing managers decide what is best for their companies? Digital media firms like Google and Facebook offer in-house field experiment methods of examining advertising effectiveness. For multimedia studies, analysts can apply econometric models in any setting where time series or single-source data are available.


Author Bio

Peter Danaher is Professor of Marketing and Econometrics and Department Chair at Monash Business School in Melbourne, Australia. He was recently appointed a co-editor of the Journal of Marketing Research.

Citation

Danaher, Peter J. (2021), “Advertising Effectiveness,” Impact at JMR, (January), Available at: https://www.ama.org/2021/01/26/advertising-effectiveness/

References

Danaher, Peter J., and Tracey S. Dagger (2013), “Comparing the Relative Effectiveness of Advertising Channels: A Case Study of a Multimedia Blitz Campaign,” Journal of Marketing Research, 50(4): 517-534. https://doi.org/10.1509/jmr.12.0241

Danaher, Peter J., Tracey S. Danaher, Michael S. Smith, and Ruben Loaiza-Maya (2020), “Advertising Effectiveness for Multiple Retailer-Brands in a Multimedia and Multichannel Environment,” Journal of Marketing Research, 57(3): 445-467. https://doi.org/10.1177/0022243720910104

Dinner, Isaac, Harald J. van Heerde, and Scott A. Neslin (2014), “Driving Online and Offline Sales: The Cross-channel Effects of Traditional, Online Display, and Paid Search Advertising,” Journal of Marketing Research, 51(5): 527-545. https://doi.org/10.1509/jmr.11.0466

Gordon, Brett R., Florian Zettelmeyer, Neha Bhargava, and Dan Chapsky (2019), “A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook,” Marketing Science, 38(2): 193-225. https://doi.org/10.1287/mksc.2018.1135

Johnson, Garrett A., Randall A. Lewis, and Elmar I. Nubbemeyer (2017), “Ghost Ads: Improving the Economics of Measuring Online Ad Effectiveness,” Journal of Marketing Research, 54(6): 867-884. https://doi.org/10.1509/jmr.15.0297

Lambrecht, Anja, and Catherine Tucker (2013), “When Does Retargeting Work? Information Specificity in Online Advertising,” Journal of Marketing Research, 50 (October): 561-576. https://doi.org/10.1509/jmr.11.0503

Sahni, Navdeep S. (2016), “Advertising Spillovers: Evidence from Online Field Experiments and Implications for Returns on Advertising,” Journal of Marketing Research, 53(4): 459-478. https://doi.org/10.1509/jmr.14.0274

Sethuraman, Raj, Gerard J. Tellis, and Richard A. Briesch (2011), “How Well Does Advertising Work? Generalizations from Meta-Analysis of Brand Advertising Elasticities,” Journal of Marketing Research, 48 (June): 457-471. https://doi.org/10.1509/jmkr.48.3.457