2012 ART Forum Brochure (Click to Download .PDF)
Sunday, June 24th
A. 7 Summits of Marketing Research
Greg Allenby, Ohio State University
Jeff Brazell, The Modellers
Greg and Jeff will discuss and distribute their new book "Seven Summits of Marketing Research: Decision-Based Analytics for Marketing's Toughest Problems." The summits are 1) Market Definition; 2) Market Segmentation; 3) Customer Satisfaction; 4) Product Analysis; 5) Pricing Analysis; 6) Advertising Analysis; and 7) Optimization. Their session will concentrate on Chapters 4 and 5 of the book, showing how decision-based analytics can be used to estimate the relationship between needs and wants in a product category, and how this relationship can be developed within the context of discrete and volumetric demand modeling. The book comes with an interactive decision tool that can be downloaded for free.
B. Advanced Computer Simulations for Improved Marketing Decisions
David G. Bakken, KJT Group
Most market researchers who attend ART Forum are familiar with the use of basic computer simulation methods in conjoint analysis. This tutorial goes well beyond the simple “point-in-time” simulation methods employed in these spreadsheet-based conjoint simulators, introducing attendees to dynamic simulation methods. These include systems dynamics models, micro-simulation, discrete event models, and agent-based models. In addition to simulation concepts, this tutorial includes examples and practice in implementing simulations in Microsoft® Excel.
Dynamic simulation models aid decision making by providing insight into many possible futures. Computer simulations can handle many more variables at a time than the average human decision maker. Dynamic simulations incorporate time-dependent effects as well as interactions between components of the target system.
C. Introduction to R for Marketing Researchers
Guy Yollin, r-programming.org
Christopher Chapman, Google
Introduction to R for Marketing Researchers will be a comprehensive, hands-on introduction to the R language and environment for statistical computing and graphics, with a focus on marketing applications. The course will cover the basics of the R environment and the R programming language as well as statistical data analysis and plotting. Once this foundation is laid, regression analysis using a variety of techniques will be explored. The course will wrap up with a review and discussion of using R in marketing research.
Note: No previous experience with R is required but a background in computer programming and statistical analysis is expected. Participants should bring a laptop with WiFi access for this tutorial.
D. Introduction to Text Mining and Classification
Stuart Shulman, University of Massachusetts Amherst
This tutorial provides theory, methods, and software training that advance human annotation, gold standard creation, online collaborative analytics, filtering, text mining, social media archiving, and machine classification research. The training links these worlds via easy-to-understand explanations of open-source and proprietary software solutions that can be tailored to all experience levels and industries.
E. An Introduction to Probability Models for Marketing Research
Peter S. Fader, The Wharton School of the University of Pennsylvania
Bruce G.S. Hardie, London Business School
Central to a complete understanding of today’s “leading-edge” market research techniques is a sound intuitive appreciation of the basic foundations upon which these sophisticated tools are built. For example, both hierarchical Bayes models and latent class models build on simple probability modeling concepts (e.g., zero-order choice process, Poisson counts, conditional expectations, and exponential interpurchase times) — yet how many researchers are comfortable at precisely defining these concepts or explaining the motivation for using them?
This tutorial aims to fill in these gaps by bringing practitioners fully up to speed on the basic methods that may underlie many of their current or future research activities. Our two broad objectives are (1) to review the basic terminology and logic associated with the area of probability models as applied to marketing research problems, and (2) to develop participants’ skills through a set of case studies that demonstrate the model building process in detail. We will illustrate all of the steps required to develop a probability model, estimate its parameters, and interpret the results. Careful and extensive use is made of the Solver tool in Microsoft Excel, which makes it possible to construct all of these models within a familiar spreadsheet environment. By the end of the tutorial, participants should be quite comfortable with all of the aforementioned principles and models and the managerial issues that surround them.
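As a small taste of the model-building workflow described above, the sketch below fits one of the models the tutorial covers, the NBD (Poisson purchase counts with gamma-distributed rates), to an invented purchase-count histogram. It uses Python in place of a spreadsheet, with a coarse grid search standing in for Solver's maximization of the log-likelihood; it is an illustration under those assumptions, not material from the tutorial itself.

```python
import math

def nbd_log_pmf(x, r, alpha, t=1.0):
    """NBD: Poisson purchase counts whose rates vary gamma(r, alpha) across people."""
    return (math.lgamma(r + x) - math.lgamma(r) - math.lgamma(x + 1)
            + r * math.log(alpha / (alpha + t))
            + x * math.log(t / (alpha + t)))

# Invented one-period purchase histogram: number of purchases -> number of customers
counts = {0: 120, 1: 50, 2: 25, 3: 10, 4: 5}

def log_likelihood(r, alpha):
    return sum(n * nbd_log_pmf(x, r, alpha) for x, n in counts.items())

# Coarse grid search standing in for Excel's Solver
grid = [(i / 10, j / 10) for i in range(1, 51) for j in range(1, 51)]
r_hat, alpha_hat = max(grid, key=lambda p: log_likelihood(*p))
print("r =", r_hat, "alpha =", alpha_hat)
```

In a spreadsheet the same three pieces appear as a column of log-probabilities, a cell summing them, and a Solver run maximizing that cell over r and alpha.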
F. Controlled Experiments on the Web: Planning, Running, and Analyzing
Ron Kohavi, Microsoft
Roger Longbotham, Microsoft
The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, and MultiVariable Tests (MVT). The tutorial will cover an introduction, many actual experiments, cultural challenges, theoretical aspects, and the pitfalls of running online experiments in practice, based on hundreds of experiments that the presenters were involved in at Amazon, Microsoft’s Bing, MSN, Office, and Xbox.
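The simplest analysis behind such an experiment can be sketched as a two-proportion z-test on conversion rates; this is a generic textbook illustration with invented counts, not the presenters' production pipeline.

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a controlled (A/B) experiment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented counts: 2.0% vs. 2.6% conversion on 10,000 users per arm
z, p = ab_test(conv_a=200, n_a=10000, conv_b=260, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Real experimentation platforms layer much more on top (power calculations, sequential monitoring, and the pitfalls the tutorial covers), but this is the statistical core.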
G. Fast Simulators with RExcel
Jack Horne, Market Strategies International
Market research simulators are often delivered using Microsoft Excel. While this software provides an enormous range of functionality, especially when combined with Visual Basic, sometimes it is not enough given either large amounts of data or specific algorithmic needs. The power of R to rapidly process large data sets and to go well beyond the mathematical capabilities of Excel can be leveraged into Excel-based simulators through RExcel. This tutorial will demonstrate the use of RExcel by introducing some basic syntax needed for the R-to-Excel interface and examining several Excel-based simulators. Minimal knowledge of R and Excel will be assumed.
H. Market Segmentation: Conceptual and Methodological Foundations
Wagner Kamakura, Duke University
Market segmentation is an essential component of any marketing strategy, and is a required consideration in most marketing-related decisions. As many organizations become more customer-focused, they also use segmentation as an important basis for developing their customer relationship-management strategy.
This tutorial will start with the conceptual foundations of market segmentation, reviewing the requirements for effective segmentation and the different forms of market segmentation. However, the major emphasis of this tutorial will be on providing participants with a clear intuition about how the basic methods for market/customer segmentation work and what their advantages and limitations are. This will be done through the discussion of illustrative examples and real applications of the methodology for market and customer segmentation based on life-style, life-cycle, choice-based conjoint, customer behavior, and share-of-wallet. Among the methods and models to be discussed are K-means Clustering, Latent-class Analysis, Regression Mixtures, and Multinomial-Logit Mixtures.
This tutorial will draw on the material from the book of the same title by Wedel and Kamakura, but with less emphasis on the technical details, and with new methods and applications.
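Of the methods named above, K-means is the easiest to sketch in a few lines. The following is a bare-bones illustration on invented two-segment "usage vs. spend" data, not code from the tutorial: alternate assigning each customer to its nearest center, then moving each center to the mean of its cluster.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: alternate assignment and mean-update steps."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: each center moves to the mean of its cluster
        for j, members in enumerate(clusters):
            if members:
                centers[j] = [sum(dim) / len(members) for dim in zip(*members)]
    return centers, clusters

# Invented data with two well-separated segments
random.seed(1)
segment_a = [[1 + random.random(), 1 + random.random()] for _ in range(20)]
segment_b = [[8 + random.random(), 8 + random.random()] for _ in range(20)]
centers, clusters = kmeans(segment_a + segment_b, k=2)
print(sorted(len(c) for c in clusters))
```

Latent-class analysis and the mixture models the tutorial covers replace the hard assignment step with probabilistic membership, which is what makes them suitable for choice data.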
Monday, June 25th
Session 1: Topics in Online Data Analytics (David Bakken, Session Chair)
MINE YOUR OWN BUSINESS - MARKET-STRUCTURE SURVEILLANCE THROUGH TEXT MINING
Oded Netzer, Columbia Business School
Ronen Feldman, Hebrew University
Moshe Fresko, Hebrew University
Jacob Goldenberg, Hebrew University
Web 2.0 provides gathering places for internet users in blogs, forums, and chat rooms. These gathering places leave footprints in the form of colossal amounts of data regarding consumers’ thoughts, beliefs, experiences, and even interactions. In this research, we propose a text mining approach for firms to explore online user-generated content and “listen” to what customers write about their own and competitors’ products. Our objective is to convert the user-generated content to market structures and competitive landscape insights. We demonstrate this approach using two cases, sedan cars and diabetes drugs, generating market-structure perceptual maps and meaningful insights without interviewing a single consumer.
LISTENING IN ON ONLINE CONVERSATIONS: MEASURING BRAND SENTIMENT WITH SOCIAL MEDIA
Wendy W. Moe, University of Maryland
David Schweidel, University of Wisconsin - Madison School of Business
Chris Boudreaux, Converseon, Inc.
In this research, we investigate the potential to “listen in” on social media conversations as a means of inferring brand sentiment. Our analysis employs data collected from multiple website domains, spanning a variety of online venue formats to which social media comments may be contributed. Our proposed approach provides an adjusted brand sentiment metric that is highly correlated with the results of an offline brand tracking survey. This is in stark contrast to the virtually non-existent correlation between an average sentiment measure derived by aggregating across all social media comments and the same offline tracking survey.
THE EFFECTIVENESS OF DISPLAY ADS
Tim Hesterberg, Google
David Chan, Google
Rong Ge, Google
Ori Gershony, Google
Diane Lambert, Google
Ashin Mukherjee, University of Michigan
Ye Tian, Iowa State
Display ads proliferate on the web, but are they effective? Or are they irrelevant in light of all the other advertising that people see? We describe a way to answer these questions, quickly and accurately, without randomized experiments, surveys, focus groups or expert data analysts. Causal modeling protects against the selection bias that is inherent in observational data, and a nonparametric test that is based on decoy outcomes provides further defense. Computations are fast enough that all processing, from data retrieval through estimation, testing, validation and report generation, proceeds in an automated pipeline, without anyone needing to see the raw data.
INVITED TALK: A NEW DAY FOR MARKETING RESEARCH
Eric T. Bradlow, The Wharton School of the University of Pennsylvania
The role of marketing researchers, both in industry and at universities, has long been to help decision makers understand their customers better, and that is not changing. However, the data and tools we have for achieving this goal have seen a rapid change over the past few years. Long gone are the days when monthly sales figures, quarterly satisfaction surveys and the occasional conjoint study were our only sources of raw data. Today we are overwhelmed with new potential data sources bringing with them new opportunities for finding business insights faster and more economically. Using a recent project on using text mining to identify product attributes and levels as an example, Eric Bradlow will discuss opportunities for embracing new data sources and combining them with old favorites to drive better decisions.
Session 2: Advances in Choice Modeling 1 (Jane Tang, Session Chair)
ADAPTIVE BEST-WORST CONJOINT (ABC) ANALYSIS
Ely Dahan, UCLA Medical School
Conjoint subjects choose the best and worst of four alternatives without much more effort than CBC, as Louviere and Orme showed. Our novel adaptive choice conjoint method utilizes best-worst full-profile questioning to instantaneously estimate utility at the individual level with 12+ tasks. A real-world application highlights the method and its benefits.
ESTIMATING THE EFFECT OF VISUALIZATION ON CHOICE BEHAVIOR
Carlo Borghi, SKIM
Paolo Cordella, SKIM
Jeroen Hardon, SKIM
Virginie Jesionka, SKIM
Kees van der Wagt, SKIM
This presentation will discuss how the communication of product characteristics affects consumers’ preferences. Standard conjoint experiments include no communication effect, which is unrealistic. In markets such as consumer finance, utilities, and telecom, choosing how to communicate one's portfolio has a great impact on consumer choice. This effect is ignored by standard conjoint analysis, leading to biased forecasts. We will present experimental results obtained by introducing several communication strategies into conjoint experiments. We will provide the audience with a methodology for including such effects in the experimental design, along with the analytical treatment of the results and practical recommendations. To model such effects, we modify the analytical form of the standard multinomial logit model, including scale modifiers to represent communication effects. As a result, we are able to model how attribute importance changes depending on the visualization strategy.
Session 3: New Applications in Hierarchical Modeling (Rex Du, Session Chair)
APPLYING BAYESIAN META-ANALYSIS TO CONSUMER NEEDS ENGINEERING
Mark A. Beltramo, General Motors
Xiaoyu (Stacey) Gu, General Motors
Peter A. Fenyes, General Motors
Artemis Kloess, General Motors
In many product categories, industrial design affects consumer perception and choice. At GM R&D, we build mathematical models that explain how physical aspects of an automotive vehicle affect consumer perceptions and assessments of subjective attributes (e.g., roominess and visibility)—an activity we call Consumer Needs Engineering. These models depend on data collected from multiple product research events where subjects evaluate production and/or prototype vehicles. For logistical and cost reasons, only a limited number of vehicles can typically be evaluated at each event; consequently, most events focus on a small set of competitor vehicles. To build models that apply to multiple product and consumer segments, we use meta-analysis techniques to draw inferences from data collected at multiple research events that vary in their measurement methods, vehicles evaluated and participants recruited. As applied in medical, sociological and psychometric studies, the typical meta-analysis combines estimates of a treatment’s effect over a number of studies. We propose combining data from a number of research events, using a Hierarchical Bayes approach to estimate a nonlinear aggregate response function while allowing for variation between data sources in study design and measurement methods, and between subjects in anthropometry and demographic characteristics. We present an application to the measurement of perceived roominess in a vehicle, but the model can also be applied to consumer perception of other sensory attributes.
A NEW APPROACH TO ESTIMATING HETEROGENEITY IN MARKETING MODELS
Sanjog Misra, Simon Graduate School of Business, University of Rochester
Mitchell J. Lovett, Simon Graduate School of Business, University of Rochester
Sridhar Narayanan, Graduate School of Business, Stanford University
We develop a new approach to estimating heterogeneity in marketing models. The advantages of our proposed algorithm are that (i) it is simple and computationally efficient, (ii) it allows the specification to be as flexible as desired, and (iii) it provides as output both the estimated heterogeneity density and individual-level effects. The approach we develop combines recent advances in the statistics and econometrics literature on sieve estimators, importance sampling, and stochastic EM-type algorithms. We demonstrate our method in a range of simulated and real data contexts; the approach can be used in substantive marketing applications such as segmentation and targeting, new product development, and broader marketing mix decisions.
Tuesday, June 26th
Session 4: User Generated-Content (Timothy Gilbride, Session Chair)
ONLINE USER REVIEWS AND THE EVOLUTION OF PERCEIVED QUALITY
Peter Lenk, Ross School of Business, University of Michigan
Kirthi Kalyanam, Leavey School of Business, Santa Clara University
Arvind Rangaswamy, Smeal College of Business, Penn State University
We consider user-generated reviews and ratings of restaurants where the underlying quality of the product changes over time. Previous studies consider fixed products, such as books or movies. We propose a model to disentangle the time dynamics of restaurant quality from rater heterogeneity. The data consist of a rating of overall quality on a five-point scale along with a sentiment analysis from text mining of the written reviews. The sentiment analysis has three quality dimensions: food, service, and ambiance.
EVALUATING PROMOTIONAL ACTIVITIES IN AN ONLINE TWO-SIDED MARKET OF USER-GENERATED CONTENT
Polykarpos Pavlidis, Nielsen Marketing Analytics
Paulo Albuquerque, Simon Graduate School of Business, University of Rochester
Udi Chatow, Hewlett-Packard Laboratories
Kay-Yut Chen, Hewlett-Packard Laboratories
Zainab Jamal, Hewlett-Packard Laboratories
In this project we propose a framework for evaluating promotional activities and referrals by content creators for an online platform of user-generated content. Our modeling approach explains individual-level choices of visiting the platform, creating content, and purchasing content as a function of consumer characteristics and marketing activities, allowing for several types of decision interdependence. We implement our model to analyze the demand and marketing management of MagCloud, a print-on-demand service for user-created magazines. Using two distinct data sets, an aggregate-level one from Google Analytics and an individual-level one from the service's database, we demonstrate the applicability and managerial implications of our approach. We show that price promotions have strong effects, but these are limited to purchase decisions, while content creator referrals and public relations have broader effects that impact all consumer decisions on the platform. Most importantly, we provide recommendations on the level of the firm's marketing investments in online marketing, taking into account referral behavior and network effects.
Session 5: Modeling in New Data Sources (Christopher Chapman, Session Chair)
MODELING CHOICE INTERDEPENDENCE IN A SOCIAL NETWORK
Anocha Aribarg, Ross School of Business, University of Michigan
Jing Wang, Google
Yves Atchade, University of Michigan
This paper shows how researchers can properly model choice interdependence in a social network where each individual’s choice decision can be influenced by the choice decisions of his or her connected others. We propose a discrete-time Markov chain to model the choice interdependence structure when both choices and their sequence are observed and a Markov Random Field (MRF) when only choices are observed but the choices’ sequence is not. We explain how both models can accommodate multiple relations and asymmetry in choice interdependence. We also show the models can be extended to accommodate heterogeneity in choice interdependence. We delineate a new approximate sampling algorithm to circumvent the intractable normalizing constant problem in estimating the MRF model.
IS WHAT CONSUMERS TELL US IN THEIR OWN WORDS BETTER THAN CLOSED-ENDED SCALED MEASURES?
Colin Ho, Ipsos Marketing
Grace Tan, Ipsos Marketing
Market research has been grounded in the use of closed-ended questions, with a strong preference for scaled measures. Open-ended questions, in contrast, have largely been avoided, partly due to the time and cost of manual coding, but largely because of an implicit belief that open-ended responses are qualitative and less rigorous than scaled data. We show that, with advances in text analytics software, time and cost are no longer barriers. Through common statistical analyses used in market research, we show that the managerial insights and data from open-ended questions can be as good as, and in some cases superior to, those from closed-ended measures. These findings have important implications for questionnaire design and, more broadly, the market research industry.
Session 6: Collecting and Analyzing Survey Based Measures of Perceived Importance (Bruce G.S. Hardie, Session Chair)
GOODBYE TO HALO: DERIVING MANAGEMENT PRIORITIES FROM DRIVER MODELS
Marco Vriens, The Modellers
Kevin Van Horn, The Modellers
Using standardized regression coefficients in situations where attributes are correlated with each other makes it difficult to prioritize where improvement investments should be made. This is a very common scenario. We show how different types of Bayesian modeling can deal with this issue and result in more valid, actionable insights. We also show how marketing directors would use the different insights and how easy or difficult it would be for them to credibly get buy-in for the insights. We specifically compare (1) the relative weight method, (2) a scale usage heterogeneity model, (3) the Halo versus Form approach, (4) a confirmatory factor approach, and (5) a posterior Bayesian analysis approach.
PERCEIVED VALUE ANALYSIS: MEASURING A BRAND’S PRICE COMPETITIVENESS
Dan Wasserman, KJT Group
David G. Bakken, KJT Group
Megan Kaiser Bond, KJT Group
We present an intuitive, robust, and easy-to-implement method for measuring a brand’s “perceived value” (the ratio of perceived quality to perceived price). Perceived value analysis (PVA), in the form of “customer value mapping,” was introduced in the late 1980s. Most implementations of PVA have relied on interval-scaled ratings of perceived price and perceived quality. Our method uses a constant-sum allocation of 11 points across pairs of brands with respect to perceived price and quality. We compare the resulting customer value maps with maps generated using interval-scaled ratings of quality and price competitiveness obtained from a separate sample, and we compare both results with estimates of brand value derived from a brand/price choice-based conjoint exercise.
Session 7: Customer Lifetime Value (Ranjit Kumble, Session Chair)
INCORPORATING NONRANDOM DIRECT MARKETING ACTIVITY INTO LATENT ATTRITION MODELS
David A. Schweidel, University of Wisconsin - Madison School of Business
George Knox, Drexel University
Latent attrition models have become a standard tool for valuing customers in transactional contexts. The current research contributes to this toolkit by incorporating direct marketing activity into the incidence of transactions, the volume of transactions, and the latent attrition process. Using the proposed model, we derive estimates of the incremental residual lifetime value attributable to sending additional direct marketing, enabling firms to make resource allocation decisions across their customers.
‘BUY TILL YOU DIE’ TO IMPROVE OPERATIONAL EFFICIENCY: REALIGNING THE SALES FORCE USING BG/NBD
Steven P. Lerner, Merial – A Sanofi Company
Peter S. Fader, The Wharton School of the University of Pennsylvania
Bruce G.S. Hardie, London Business School
Over the course of three years, a concerted effort was made to improve the operational efficiency of the field-based sales force of a major U.S. animal health pharmaceutical manufacturer. The effort focused on the identification and selection of a small subset of the highest value customers within established geographic territories. The use of Customer Lifetime Value estimation (featuring the Beta Geometric/Negative Binomial Distribution (BG/NBD) model) was instrumental in implementing this segmentation process. Key to this endeavor was a model-based “stepwise reduction” in essential customers from over 61,000 in 2009, to 9,700 in 2010, to 2,400 in 2011 – yet the division achieved healthy growth during this period. From a sales force management standpoint, the organization transitioned from broadly defined “Territory Managers” with over 1,900 customers each to much more focused “Key Account Managers” with 70 designated customers each. We discuss the analyses that led to these dramatic changes, the financial results that highlight their impact, and new/ongoing applications of this segmentation/realignment process within the company.
Wednesday, June 27th
Session 8: Advances in Choice Modeling 2 (Elea McDonnell Feit, Session Chair)
A CHOICE-BASED MODEL FOR ORDER OF PRODUCT ENTRY WITH AN APPLICATION TO THE LCD-TV MARKET
Raymond Reno, Market Strategies International
Bob Rayner, Market Strategies International
Jack Horne, Market Strategies International
Many products brought to market are relatively similar to existing products in the marketplace. When a “me-too” product is launched, substitution rather than diffusion may be the predominant process. Since their development in the 1960s and 70s, product substitution models have received relatively little attention in the academic and applied literature compared to diffusion models. This paper aims to update the field by making connections to methods and ideas of relatively recent vintage. The methods are used to assess order of product entry in the LCD-TV market.
CONSUMER PREFERENCE ELICITATION OF COMPLEX PRODUCTS USING SUPPORT-VECTOR-MACHINE (SVM) ACTIVE LEARNING
Lan Luo, University of Southern California
Dongling Huang, Rensselaer Polytechnic Institute
Estimating consumer preferences for complex products has always been an essential challenge to marketing academia and business practitioners. We propose an adaptive question design algorithm using support-vector-machine (SVM) active learning method. We illustrate the algorithm empirically in a web-based digital camera study. We further demonstrate that the proposed method outperforms several existing methods in predicting validation choices. Lastly, we conduct synthetic data experiments to examine the scalability and performance of our algorithm under various problem sizes and varying levels of response errors. Overall, our findings suggest that the proposed approach is promising in effectively eliciting consumers’ preferences for complex products.
PORTFOLIO MANAGEMENT: COMBINING DCM AND SHAPLEY VALUE LINE OPTIMIZATION
Faina Shmulyian, MarketTools, Inc.
Michael Conklin, MarketTools, Inc.
An innovative approach combining the advantages of Discrete Choice Modeling and Shapley Value Line Optimization is presented. This approach can be used in markets with large numbers of product variants and in categories that exhibit variety-seeking behavior. In the framework of the suggested approach, extensive competition can be modeled and taken into account. The method is effective when multiple product attributes, such as price and size, need to be optimized to perform effective portfolio management. A line optimization can be performed on products with significant differences in package size and price levels. This method can be used to generate custom solutions for different market conditions and business goals.
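To give a flavor of the Shapley Value side of this approach (the Discrete Choice Modeling half is not shown), the sketch below computes exact Shapley values for each product's contribution to total portfolio reach on tiny invented data. Real line optimizations use far larger portfolios and sampling rather than full enumeration of orderings.

```python
import math
from itertools import permutations

def shapley_values(products, reach):
    """Exact Shapley values: each product's average marginal contribution
    to portfolio reach, over all orderings of the product line."""
    value = {p: 0.0 for p in products}
    for order in permutations(products):
        covered = set()
        for p in order:
            value[p] += len(reach[p] - covered)  # customers p newly adds
            covered |= reach[p]
    n_fact = math.factorial(len(products))
    return {p: v / n_fact for p, v in value.items()}

# Invented data: which consumers would buy each product variant
reach = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {6}}
values = shapley_values(list(reach), reach)
print(values)  # A: 3.0, B: 2.0, C: 1.0
```

Note that the values sum to the total reach of the full line (six consumers), which is the property that makes Shapley values a fair way to credit overlapping products.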
Session 9: Paul E. Green Award
This award recognizes the best article in the Journal of Marketing Research that demonstrates the greatest potential to contribute significantly to the practice of marketing research. It honors Paul E. Green, Professor Emeritus of Marketing, Wharton School and S. S. Kresge Professor Emeritus of Marketing, University of Pennsylvania.
FROM GENERIC TO BRANDED: A MODEL OF SPILLOVER IN PAID SEARCH ADVERTISING
Oliver J. Rutz, Michael G. Foster School of Business at the University of Washington
Randolph E. Bucklin, Anderson School of Management at the University of California, Los Angeles
I. Advanced Applications of Hierarchical Bayes Choice Models
Jeff P. Dotson, Vanderbilt University
Elea McDonnell Feit, Wharton Customer Analytics Initiative
The popularity of software packages like CBC/HB and bayesm has made hierarchical Bayes (HB) a standard tool for many market researchers; however, many of those who have learned HB through software have never been exposed to the basic theory behind Bayesian inference. This tutorial will give participants a grounding in that theory, but with a strong emphasis on how that theory applies to a number of practical issues that arise in the advanced application of choice models. Tutorial participants will gain a deeper understanding of how HB choice "works" as well as some concrete 'tricks' that can be applied on their next choice modeling project. Topics covered include:
• Understanding the uncertainty in my estimates (posterior draws)
• Dealing with small sample sizes (informative priors)
• Combining data from multiple sources (error scale)
• How to include information about respondents in my model (covariates in the upper level model)
• Building my own simulator (posterior draws)
• Making accurate predictions of substitution and cannibalization (Independence of Irrelevant Alternatives and ways to relax that assumption)
• What's really going on when I 'run' a model and why doesn't it always work? (Metropolis-Hastings algorithms, with specific focus on discrete choice and response models)
• What questions should I ask in my conjoint study? (design of choice experiments)
Note: This is an advanced class on hierarchical Bayes choice modeling. Because of the nature of the class, participants should be familiar with choice models and have some exposure to Bayesian statistics and basic programming concepts. Familiarity with the R language is helpful, but not required.
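As a taste of the "what's really going on when I 'run' a model" topic, here is a minimal random-walk Metropolis sampler for a single binary-logit coefficient, written in plain Python on simulated data. It is a sketch only: production HB models add a hierarchical prior over respondents, which this omits, and the vague Normal(0, 10^2) prior and step size are illustrative choices.

```python
import math
import random

def log_posterior(beta, xs, ys):
    """Binary-logit log-likelihood plus a vague Normal(0, 10^2) prior."""
    ll = sum(y * (beta * x) - math.log(1 + math.exp(beta * x))
             for x, y in zip(xs, ys))
    return ll - beta ** 2 / (2 * 10 ** 2)

def metropolis(xs, ys, n_draws=2000, step=0.5, seed=0):
    rng = random.Random(seed)
    beta, current, out = 0.0, log_posterior(0.0, xs, ys), []
    for _ in range(n_draws):
        proposal = beta + rng.gauss(0, step)          # random-walk proposal
        prop_lp = log_posterior(proposal, xs, ys)
        if rng.random() < math.exp(min(0.0, prop_lp - current)):
            beta, current = proposal, prop_lp         # accept; else keep current value
        out.append(beta)
    return out

# Simulated choices driven by a true coefficient of about 1
random.seed(1)
xs = [random.uniform(-2, 2) for _ in range(200)]
ys = [1 if random.random() < 1 / (1 + math.exp(-x)) else 0 for x in xs]
draws = metropolis(xs, ys)
posterior_mean = sum(draws[500:]) / len(draws[500:])  # discard burn-in draws
print(round(posterior_mean, 2))
```

The chain of accepted values is exactly the set of "posterior draws" referred to in the topic list above; summarizing them (after burn-in) gives the estimate and its uncertainty.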
J. Probability Models for Customer-Base Analysis
Peter S. Fader, The Wharton School of the University of Pennsylvania
Bruce G.S. Hardie, London Business School
Customer-base analysis seeks to use information on the history of customer purchase patterns to identify which individuals are most likely to be active (or inactive) customers and to predict future purchasing patterns by those customers listed in the firm’s transaction database. Any researcher hoping to make statements about “customer lifetime value” must deal with these issues, but unfortunately the set of commonly available tools is not well-suited for the task.
This tutorial builds upon the basic “platform” provided in our introductory tutorial to provide a set of techniques and models tailored to address these situations properly. We focus on developing the models entirely in Excel and provide attendees with the relevant spreadsheets and notes on how to implement the models “from scratch”. Our goal is to provide the attendee with tools that can be applied immediately (maybe with some slight modifications) at his/her place of work.
The structure of the tutorial is as follows:
• Introduction to the idea of customer-base analysis
• Overview of the concept of Customer Lifetime Value (CLV) and the presentation of a general framework for its calculation
• Brief review of the probability modeling basics required for model building (e.g., review of binomial, geometric, Poisson, exponential, gamma, and beta distributions; discussion of common mixtures such as the NBD, beta-geometric, and beta-binomial)
• Presentation of probability models that can be used to answer various managerial questions including the calculation of CLV. (The empirical examples come from settings as diverse as e-tailing, the charity sector and media subscriptions)
• Generalizations of the specific models presented in this tutorial making links to the broader modeling literature
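In its simplest constant-retention form, the CLV framework in the outline above reduces to a discounted sum, which can be sketched in a few lines; the margin, retention, and discount figures here are invented for illustration.

```python
def clv(margin, retention, discount, horizon=200):
    """Expected discounted margin from a customer retained each period
    with a constant probability."""
    return sum(margin * retention ** t / (1 + discount) ** t
               for t in range(1, horizon + 1))

# With an infinite horizon this telescopes to margin * retention / (1 + discount - retention)
value = clv(margin=50.0, retention=0.8, discount=0.1)
print(round(value, 2))  # approx 133.33
```

The probability models in the tutorial replace the constant retention rate with a behaviorally grounded attrition process (e.g., beta-geometric), which is what makes the projections realistic.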
A COMPARATIVE STUDY OF LATENT SEMANTIC ANALYSIS AND PROBABILISTIC LATENT SEMANTIC ANALYSIS ON EXTRACTING TOPICS IN PRODUCT REVIEWS
Shimi Naurin Ahmad, John Molson School of Business, Concordia University
Michel Laroche, John Molson School of Business, Concordia University
In this case study, two text mining techniques, Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA), are compared. Common themes among positive and negative product reviews are extracted using LSA and PLSA. PLSA topics are found to be more interpretable and informative than those of LSA. However, the choice of text mining approach should be based on the goals of the marketing researcher; both techniques have advantages.
AN EXPLORATION OF TYPING TOOL DEVELOPMENT TECHNIQUES
John Leggett, Digital Research, Inc.
Kevin Knight, Digital Research, Inc.
In this presentation, the authors will discuss various techniques that are available for developing typing tools to accurately predict segment membership. In addition, comparative results will be shown using the various techniques to highlight the strengths and weaknesses of each. The goal of the poster is to help end-users and practitioners determine the best techniques for their specific informational and strategic needs.
COMPARING ‘IN THE MOMENT’ AND ‘END OF DAY’ DIARIES FOR IMPULSE PURCHASES: AN ARGUMENT FOR MOBILE DATA COLLECTION
Weston Hadlock, Survey Sampling International
Edward P. Johnson, Survey Sampling International
Carol Shea, Olivetree Research
Traditional diary studies do not fully capture important data on the mood, circumstances and location at the point of purchase. We demonstrate how mobile technology allows ‘in the moment’ data collection for diary studies. We observed significant differences in a parallel test of both diary methods. We also examine potential mode effects on the mobile device.
DEVELOPING AN ADVANCED PREFERENCE BASED SIMULATOR – A FLEXIBLE AND SCALABLE APPROACH USING R AND THE AMAZON ELASTIC COMPUTE CLOUD
Charles Carpenter, Resource Systems Group
Jeffrey Dumont, Resource Systems Group
Nelson Whipple, Resource Systems Group
More advanced simulation models are being developed to better aid marketing decisions by combining theories and concepts from previously disparate fields, including artificial intelligence, game theory, and probability theory. While these advanced simulation models may offer new and highly valued insights, they can also carry a significant computational burden. In some cases, the burden may be high enough to prevent the model from being used by practitioners who must meet demanding deadlines. This poster seeks to provide practical and actionable insight into developing a Monte Carlo-based preference simulator by combining the R programming language with the cloud computing resources of Amazon's Elastic Compute Cloud (EC2) service.
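The structure that makes such a simulator cloud-friendly can be sketched briefly (in Python rather than the poster's R; the products, utilities, and draw counts below are hypothetical): respondent-level Monte Carlo draws are independent, so chunks of draws can be farmed out to separate workers and their counts aggregated afterward.

```python
import random, math

# Hypothetical population-mean utilities for three products.
PRODUCTS = {"A": 1.0, "B": 0.4, "C": 0.0}

def simulate_chunk(n_draws, seed):
    """Simulate logit choices for one chunk of heterogeneous respondent draws."""
    rng = random.Random(seed)
    counts = dict.fromkeys(PRODUCTS, 0)
    for _ in range(n_draws):
        # Respondent-level utility = population mean + normal heterogeneity draw.
        utils = {p: mu + rng.gauss(0, 0.5) for p, mu in PRODUCTS.items()}
        expu = {p: math.exp(u) for p, u in utils.items()}
        total = sum(expu.values())
        r, acc = rng.random() * total, 0.0
        for p, e in expu.items():
            acc += e
            if r <= acc:
                counts[p] += 1
                break
    return counts

# In a cloud deployment, each chunk would be dispatched to a separate worker
# (e.g., an EC2 node); here we simply map over chunks locally and aggregate.
chunks = [simulate_chunk(2000, seed) for seed in range(5)]
n = sum(sum(c.values()) for c in chunks)
shares = {p: sum(c[p] for c in chunks) / n for p in PRODUCTS}
print(shares)
```

Because each chunk is seeded independently, the same decomposition runs unchanged on one machine or many, which is the essence of the scalability argument.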
MODELING CHOICES WITH SMALL SHARES
Jingsong Cui, The Nielsen Company
Jing Jin, The Nielsen Company
Julie Dipopolo, The Nielsen Company
Modeling choices with small shares is a constant challenge for practitioners of choice models. This is true across different types of choice models and may pose serious problems for some. This paper illustrates these concerns and compares the pros and cons of several approaches for addressing the issue.
RESPONDENT EVALUATION AND CREATION: A CROWDSOURCING EXPERIMENT
Joseph White, Maritz Research
Michael Kemery, Maritz Research
Crowdsourcing is a means of having a disparate network of individuals perform activities or tasks with the goal of converging on a better solution than would be obtained through individual effort. These activities are diverse and may include software beta testing, consumer and industry brainstorming sessions, database cleaning, and so on. Although crowdsourcing has been leveraged successfully in some industries, marketing research is still learning how to utilize it properly. This paper begins to explore the viability of crowdsourcing in marketing research. We specifically explore the composition of a crowd, its potential role in discrete choice experiment design, and its use as a method for assessing brand perceptions.
SETTING TARGETS FOR MEASURES OF CUSTOMER SATISFACTION: A COMPARISON OF RESULTS BASED ON BENCHMARKS VERSUS LINKAGE ANALYSIS
D. Randall Brandt, Maritz Research
Sharon Alberg, Maritz Research
Where and how high should managers set the bar for measures of customer satisfaction and loyalty? Two common methods used to address this question are (a) benchmarking, and (b) linkage analysis. In benchmarking, targets are set on the basis of how a firm wants its score(s) to compare to selected competitors, norms, or exemplars. In linkage, targets are set based on the level of satisfaction required to achieve a desired customer behavior (e.g., probability of retention or repeat purchasing).
Our research addresses the question of how targets derived from these two approaches compare: Does achieving a score that is superior to key competitors ensure desired levels of customer retention or repeat purchasing? How do various benchmark-based targets align with alternative linkage-based targets?
Using research conducted in the hotel and lodging sector, we will show that the two methods frequently do not produce convergent results, and may lead managers to draw very different conclusions regarding the “goodness” of customer satisfaction and related customer experience metrics.
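The linkage approach can be illustrated with a minimal sketch (this is not the authors' model; the logistic coefficients below are invented): fit a satisfaction-to-retention curve, then invert it to find the score at which predicted retention reaches a desired level.

```python
import math

# Hypothetical fitted logistic link between satisfaction score and retention.
def retention_prob(sat, a=-6.0, b=0.8):
    """Predicted retention probability at a given satisfaction score."""
    return 1.0 / (1.0 + math.exp(-(a + b * sat)))

def linkage_target(target_retention, a=-6.0, b=0.8):
    """Satisfaction score at which predicted retention hits the target
    (inverse of the logistic link)."""
    return (math.log(target_retention / (1 - target_retention)) - a) / b

t = linkage_target(0.80)
print(round(t, 2), round(retention_prob(t), 2))
```

A benchmark-based target, by contrast, is set from competitors' scores and need not coincide with this behaviorally derived threshold, which is exactly the divergence the paper documents.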
UNDERSTANDING COMPLAINT BEHAVIOR IN WIRELESS SERVICES
Yeolib Kim, University of Texas at Austin
We examine complaint behavior using actual data provided by a wireless service organization, firm XYZ. A hierarchical logistic model with individual-level and plan-level predictors is used. Our research centers on customer complaints as a function of individual customer characteristics, sources of dissatisfaction, and product type. Results indicate that sex, age, suspension of an account, cell phone plan change, equipment change, contact with customer service, rates for exceeding basic fees, and text messaging all have an effect on voicing complaints.
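The individual-level layer of such a model can be sketched with a flat logistic regression (a simplification: the study's model is hierarchical and adds plan-level effects, and the predictors and synthetic data below are invented for illustration):

```python
import math, random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic data: complaint probability rises with plan changes and
# customer-service contacts (true coefficients: -2.0, 1.5, 2.0).
X, y = [], []
for _ in range(500):
    plan_change, contacts = random.random(), random.random()
    p = sigmoid(-2.0 + 1.5 * plan_change + 2.0 * contacts)
    X.append((1.0, plan_change, contacts))        # leading 1.0 = intercept
    y.append(1 if random.random() < p else 0)

# Fit by plain gradient ascent on the log-likelihood.
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for xi, yi in zip(X, y):
        err = yi - sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
        for j in range(3):
            grad[j] += err * xi[j]
    w = [wj + 0.002 * gj for wj, gj in zip(w, grad)]

print([round(wj, 2) for wj in w])  # signs should match the true effects (-, +, +)
```

A hierarchical version would let the intercept (and possibly slopes) vary by cell phone plan, with a population distribution tying the plan-level parameters together.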
USING INFLUENCE IN PHYSICIAN NETWORKS TO PREDICT RX SHARE
Christopher Perkins, Roche Diabetes Care
This research investigates whether influence in a physician network is correlated with prescription share. The physician network is generated from de-identified health care claims data: physicians are the nodes, and two physicians are connected if they both prescribe a pharmaceutical for the same patient. Traditional network metrics are applied to the generated network, and the research examines whether there is a relationship between a physician's node scores and the market share of prescriptions that physician writes for each blood glucose strip manufacturer.
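The network construction described above is straightforward to sketch (the claims records below are invented; a real study would use de-identified claims data, and the abstract's "traditional network metrics" would include centrality measures richer than the simple degree count shown here):

```python
from collections import defaultdict

# Hypothetical (patient_id, physician_id) prescribing records.
claims = [
    ("p1", "dr_a"), ("p1", "dr_b"),
    ("p2", "dr_a"), ("p2", "dr_c"),
    ("p3", "dr_b"), ("p3", "dr_c"),
    ("p4", "dr_a"), ("p4", "dr_d"),
]

# Group physicians by shared patient.
by_patient = defaultdict(set)
for patient, physician in claims:
    by_patient[patient].add(physician)

# Edge = two physicians who prescribe for the same patient.
edges = set()
for physicians in by_patient.values():
    for a in physicians:
        for b in physicians:
            if a < b:
                edges.add((a, b))

# Degree centrality: number of distinct co-prescribers per physician.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(sorted(degree.items(), key=lambda kv: -kv[1]))
```

The resulting node scores would then be correlated against each physician's prescription share by manufacturer.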
UTILITY AND ATTENTION – A STRUCTURAL MODEL OF CONSIDERATION
Keyvan Dehmamy, Goethe University
Thomas Otter, Goethe University
Jeff Brazell, The Modellers
Matt Madden, The Modellers
Kevin S. Van Horn, The Modellers
Marco Vriens, The Modellers
We propose a structural model of consideration based on the exclusion of an observed variable causally related to attention (only) from utility maximization. In our model, consumers engage in constrained utility maximization in a discrete-continuous model of demand, considering a (sub-)set of discrete choice alternatives. The variables causing attention only, and therefore excluded from utility maximization, are the number of facings (NoF) of a particular SKU and its position on a shelf that consumers choose from in a discrete choice experiment. Under the standard assumption of exogenous preferences, which seems natural at least in well-established, stable markets, the NoF and position of a particular SKU may increase the (automatic) attention paid to the offering but cannot alter the preferences governing the (optimal) quantity decision given a consideration set. The NoF and the position of an SKU provide information about its availability but do not act as persuasion devices that alter the utility obtained from a particular SKU. We illustrate our ideas using data from a commercial conjoint study where choice sets are simulated store shelves. We find that, relative to competing formulations that include the number of facings and their position in the utility index of a brand, our model yields more face-valid estimates of shelf-space effects, different implications for optimal shelf-space allocation, and improved fit. We conclude by discussing further research possibilities using our identification strategy and the advantages of a structural distinction between persuasion and information.
VOLUMETRIC MODELING WITH DISCRETE CHOICE METHODS
John Wagner, The Nielsen Company
Robyn Knappenberger, The Nielsen Company
Robert Loos, The Nielsen Company
In this poster we will show our approach to modeling volumetric choice data (among buyers only, omitting changes to penetration). Our approach utilizes three elements:
1) a survey which captures volumetric data from the respondent,
2) a smoothing process which addresses the shortcomings inherent in the data, and
3) a choice model adapted to cover the volumetric component and capable of capturing non-fair-share sourcing.
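The core of element (3) can be sketched minimally (in Python rather than the authors' tooling; the utilities and volume below are hypothetical, and this toy version omits the non-fair-share sourcing the poster's model handles): a logit share model scaled by each respondent's reported category volume, so units rather than just picks are allocated across items.

```python
import math

def volumetric_shares(utilities, volume):
    """Allocate a respondent's reported category volume by logit shares."""
    expu = {k: math.exp(u) for k, u in utilities.items()}
    total = sum(expu.values())
    return {k: volume * e / total for k, e in expu.items()}

# Hypothetical respondent: 10 units of category volume split across two brands.
resp = volumetric_shares({"brand_x": 0.7, "brand_y": 0.0}, volume=10.0)
print({k: round(v, 2) for k, v in resp.items()})
```

Summing these respondent-level unit allocations across the sample yields volumetric shares rather than pick counts, which is what distinguishes this setup from a standard discrete choice simulator.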