February 19, 2021
Why did the U.S. election pollsters get it wrong again? Can political polling learn from the past?
In the November 4, 2020 issue of our newsletter, Insights (“Polling Research: They Got It Wrong? Right?”), we expressed a contrarian point of view: that the election polls would likely get it wrong again. We didn’t wish our public opinion colleagues trouble; we simply questioned their ability to deal appropriately with the dramatically fragmenting social strata of our country. Unfortunately, our pessimism was borne out by inaccurate and misleading survey results. The results weren’t as wrong as 2016’s; they did predict the national outcome, a victory for Joe Biden. But instead of the predicted landslide, Biden beat President Trump by less than two percentage points in the states that decided the election. And in some states, the polls got it terribly wrong.
In this article, we reflect on what the election polls reveal about the growing problems confronting the public opinion and marketing research industries in today’s complex world.
As the most public litmus test of survey research, the election polls have a checkered past, at best. Consider:
Postmortems of all three failures pointed to specific methodological oversights, but as each oversight was subsequently addressed, new anomalies presented themselves. The 2020 polls at least succeeded in correctly identifying the winner of the presidential race (in 48 states; they missed Florida and North Carolina). Those misses underscore the American polling industry’s inability to fully correct the problems uncovered in the 2016 failure.
The 2020 misses are prompting intense self-examination among polling firms to better understand what, even after 2016’s fiasco, they still haven’t fully understood or accounted for.
Stepping outside the political polling arena, we see that these observations carry serious warnings for the marketing research community. We understand that few of our readers are involved in public opinion polling per se, but the opportunities for learning and improvement are real. It would probably be naïve to claim that the consequences of incorrect survey results are more important in business than in politics. And yet, because of the dollars riding on a new product rollout or the budget for a new pool of ads, perhaps marketing researchers view their polling responsibilities a bit differently from their public opinion colleagues.
In any event, marketing researchers, who have long dealt with declining response rates and evolving contact technologies, need to consider further the implications of the systemic societal changes that have derailed the election polling industry. One of the most challenging appears to be the systematic differences between responders and nonresponders. The infrequently used practice of following up, through multiple contact methods, to interview a small sample of nonresponders looks increasingly critical for the future, as the sketch below illustrates.
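To make that idea concrete, here is a minimal, hypothetical simulation in Python. The population size, response rate, support levels, and follow-up sample size are all invented for illustration and are not drawn from any real poll; the point is simply to show how a systematic gap between responders and nonresponders biases a respondent-only estimate, and how weighting in a small follow-up sample of nonresponders (reached by other contact methods) can pull the estimate back toward the truth.

```python
# Hypothetical illustration (all numbers invented): bias from nonresponse,
# and a "double sampling" correction using a small follow-up of nonresponders.
import random

random.seed(42)

POPULATION = 100_000
TRUE_SUPPORT_RESPONDERS = 0.55      # assumed support among people willing to respond
TRUE_SUPPORT_NONRESPONDERS = 0.45   # assumed support among people who ignore surveys
RESPONSE_RATE = 0.06                # a typical single-digit response rate

# Build a synthetic population of (responds_to_survey, supports_candidate) pairs.
population = []
for _ in range(POPULATION):
    responds = random.random() < RESPONSE_RATE
    p = TRUE_SUPPORT_RESPONDERS if responds else TRUE_SUPPORT_NONRESPONDERS
    population.append((responds, random.random() < p))

true_support = sum(s for _, s in population) / POPULATION

# Naive estimate: average only over the people who responded.
responders = [s for r, s in population if r]
naive_estimate = sum(responders) / len(responders)

# Adjusted estimate: interview a small follow-up sample of nonresponders and
# weight the two groups by their share of the population.
nonresponders = [s for r, s in population if not r]
followup = random.sample(nonresponders, 500)
followup_estimate = sum(followup) / len(followup)

w_resp = len(responders) / POPULATION
adjusted_estimate = w_resp * naive_estimate + (1 - w_resp) * followup_estimate

print(f"True support:        {true_support:.3f}")
print(f"Responders only:     {naive_estimate:.3f}")   # biased toward responders
print(f"With follow-up:      {adjusted_estimate:.3f}")  # closer to the truth
```

In this toy setup the respondent-only estimate drifts toward the responders’ views, while even a modest follow-up among nonresponders brings the weighted estimate close to the true figure; the practical difficulty, of course, is that reaching those nonresponders is exactly what has become harder and more expensive.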
Image credit: Library of Congress, Prints and Photographs Division, NYWT&S Collection [LC-DIG-ppmsca-33570]