January 17, 2019
Statistical and conceptual models are inherently limited. MR needs to rely on true experiments to understand causality.
Editor’s Note: The term “modeling” can be used in different ways. For example, I had Victoria’s Secret as a client for many years, and I loved telling people who asked what I did for a living that I did modeling for Victoria’s Secret. The term’s use in market research is generally more prosaic, but nonetheless important. It could refer to an informal way of making sense of the world; it could also be something more formally statistical. As Steve discusses, there are important limitations to the effectiveness of models, even in a world of seemingly infinite data, and we need to be thinking more seriously about the use of formal experimentation. An interesting read to ponder over.
Just to be clear, my friend Kevin Gray wanted me to write this. So if you like it, feel free to tell me below. If you don’t like it, blame Kevin (I have his email).
For much of recorded history, humans have created models (using the term loosely) of the universe to explain how it works. Religion might have been the first model, attempting to explain why the universe is the way it is. Philosophy, arising out of religion, took as its focus the question of why we do what we do and came up with models of its own. There’s the Aristotelian model of human behavior driven by humours (bodily fluids), the Cartesian hydraulic-mechanistic view of behavioral antecedents, and the Freudian triumvirate of id, ego, and superego and all that goes wrong during toilet training, all offered as explanations for behavior. Psychology continues the philosopher’s task; we psychologists have been creating models of why people do what they do since our discipline’s infancy.
The twist that psychologists added was to internalize the process of creating models to represent the outside world. Fritz Heider, writing in the 1950s, talked about people as “naïve psychologists” who strive to attribute causality in order to structure their thinking about people and things. You, the reader, might be more familiar with another approach, attributed to Daniel Kahneman, in which we supposedly create models of the world (heuristics) that use much less “energy” and take much less time to apply. These models simplify our interactions with the world, making them more automatic, thus giving us the time, space, and resources to ponder the more important questions in life, like, “How did the Patriots ever lose to the Eagles in the Super Bowl last year?”
Marketing Research has always operated with models in mind, even though we are often poor at recognizing them. A simple example: up until recently, we had a “push” view of advertising, with a passive recipient being pummeled with ads. Now we think, maybe, there’s “pull” involved too, where people seek out more information about a product or service. Your viewpoint vis-à-vis a push or pull model will dictate how you think about advertising, what kind of data you look for to understand its impact, and how you interpret the outcome of an ad campaign. Whether we are stealing (usually incorrectly) from Kahneman’s work, trying to understand what is in Big Data sets, or pretending that machines are much smarter than humans and relying on Artificial Intelligence to show us the way, we are using a conceptual model.
Here’s the problem (sorry it took me so long to get there) – models are inherently wrong. I didn’t make that up – real scientists like George Box and John von Neumann said it first. Some are more wrong than others, but all representations of reality contain some error. As we progressed from Copernicus to Newton to Einstein, we learned that the forces of nature are not what we thought they were and that we needed a new model. The good news is that each evolution (or revolution) takes us closer to “truth”, reducing the error in our model; at least, that’s how science is supposed to work.
Statistical models have the same sort of progression and the same limitation. As we seek to understand more complex behavior using more complex data, we’ve advanced from regression through Bayesian probability to more esoteric forms of statistical modeling. While these statistical models might be better at teasing the truth out of a set of data, we need to remember that they are subject to the same limitations as conceptual models. They are time-bound, they are range-bound, they are paradigm-bound, and rarely do we ever really meet the conditions in MR to satisfy their underlying assumptions.
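To make the “range-bound” point concrete, here’s a toy sketch with made-up numbers (mine, purely illustrative – not anyone’s real data or method): a straight-line model fit to ad spend observed between 1 and 10 units looks perfectly respectable inside that range, and then misses badly the moment we ask it to extrapolate to spend levels it never saw.

```python
# Toy illustration (invented data): a linear model fit on a narrow range of
# "spend vs. sales" observations looks fine in-sample, but its error balloons
# once we predict outside the range the model was built on.
import numpy as np

rng = np.random.default_rng(42)

# The true (unknown) process is nonlinear: diminishing returns to ad spend.
def true_sales(spend):
    return 100 * np.log1p(spend)

# We only ever observe spend between 1 and 10 units.
spend_obs = rng.uniform(1, 10, size=200)
sales_obs = true_sales(spend_obs) + rng.normal(0, 5, size=200)

# Fit a straight line to the observed range.
slope, intercept = np.polyfit(spend_obs, sales_obs, deg=1)
predict = lambda s: slope * s + intercept

# In-range error looks small...
in_range_rmse = np.sqrt(np.mean((predict(spend_obs) - sales_obs) ** 2))

# ...but extrapolating to spend levels we never observed goes badly wrong.
spend_new = np.linspace(20, 50, 100)
out_range_rmse = np.sqrt(np.mean((predict(spend_new) - true_sales(spend_new)) ** 2))

print(f"RMSE inside the fitted range: {in_range_rmse:.1f}")
print(f"RMSE when extrapolating:      {out_range_rmse:.1f}")
```

The data are invented, obviously – the point is only that a model’s apparent fit tells you nothing about how it behaves outside the conditions under which it was built.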
How do we know that our models, either conceptual or statistical, don’t work? We keep on telling each other that we aren’t very good at predicting future behavior. It doesn’t matter whether you believe the new product failure rate is 70%, 80%, or 90% – it’s not 10%, which is about where it should be if our research tools were good. I won’t even get into polling, which (a) is dismal in a good year and (b) I’m not sure I’d even include under Marketing Research. Predictive Analytics will have some good hits in the future and then the models will collapse under their own inadequacies. Predictive models are not built as a simulation of the world; they are only built to predict, and that is a guarantee of failure somewhere down the road. You heard it here first – AI will disappear as a driving model for our businesses because nobody actually understands its underpinnings.
We can do a better job in the sense that we can be more accurate and more reliable. It doesn’t take a new model or a new set of statistics – we actually used to be pretty good at this, which is how we’ve been able to grow to the size we have. We need to bring back experimentation as a basic tool in MR. We used to experiment all the time but then smart-ass marketers started thinking they didn’t need research when they had their own educated guts to go on (one senior marketer actually said this at an ARF conference). We researchers had far fewer failures when we regularly tested marketing actions before they were introduced. Whether it is through controlled-store tests, matched-market tests, test markets, or virtual shopping studies, we are able to reduce uncertainty and provide our clients with demonstrably actionable results. Isn’t that what marketers are looking for?
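For the curious, here’s a minimal sketch of the logic behind a matched-market test, using hypothetical numbers in place of real store data (this is my illustration, not a description of any particular vendor’s methodology): markets that ran the marketing action are compared against matched control markets that didn’t, and we ask whether the difference in sales lift is bigger than the noise.

```python
# Minimal sketch (hypothetical numbers) of a matched-market comparison:
# test stores that ran the new promotion vs. matched control stores that didn't,
# compared on year-over-year sales lift.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Pretend we measured % sales change vs. last year in 30 test stores
# and 30 matched control stores. The lift values below are assumptions.
test_lift = rng.normal(loc=4.0, scale=3.0, size=30)     # assumed ~4% lift
control_lift = rng.normal(loc=1.0, scale=3.0, size=30)  # assumed ~1% baseline drift

# Did the action move sales beyond whatever happened in the control markets anyway?
t_stat, p_value = stats.ttest_ind(test_lift, control_lift, equal_var=False)
effect = test_lift.mean() - control_lift.mean()

print(f"Estimated incremental lift: {effect:.1f} points")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

The numbers are fake, but the structure is the point: the control group tells you what would have happened anyway, which is exactly the thing a purely predictive model can’t tell you.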
Can’t fancy statistical models do this? Maybe yes, maybe no – our evidence to date would suggest not. How can you tell if your model’s results are good? You often can’t. For example, many of the models developed by AI do not come with goodness-of-fit measures, meaning “you pays your money, you takes your chances”. Experiments can be fast and inexpensive and can provide a level of certainty and predictive validity that modeling approaches cannot.
As my friend and colleague Bill Bean is fond of saying, “There are lots of ways of knowing the world…science is one of them. And a good control group is hard to beat”.
Author’s note: Kevin just recently posted his version of why experiments are good things – you can find his post here.