
February 19, 2021

The Troubling Legacy of the 2020 Election Polls

Why did the U.S. election pollsters get it wrong again? Can political polling learn from the past?

by Terry G.

Partners at Customer Experience Partners, LLC.

In the November 4, 2020 issue of our newsletter, Insights (“Polling Research: They Got It Wrong? Right?”), we expressed a contrarian point of view: that the election polls would likely get it wrong, again. We didn’t wish our public opinion colleagues trouble; we simply questioned their ability to deal appropriately with the increasingly fragmented social strata of our Country. Unfortunately, our pessimism was borne out by inaccurate and misleading survey results. The results weren’t as wrong as 2016’s; the polls did predict the national outcome, a victory for Joe Biden. But instead of the predicted landslide, Biden beat President Trump by less than two percentage points in the states that decided the election. And in some states, the polls got it terribly wrong.

In this article, we reflect on election polling’s revelation of the increasing problems confronting the public opinion and marketing research industries in today’s complex world.

 

A Brief Historical Perspective

As the most public litmus test of survey research, the election polls have a checkered past, at best. Consider:

  • Media involvement with election polling came into prominence with the 1916 election. For this election, the Literary Digest conducted a mail survey among its readers in 3,000 communities across the Country. The poll successfully predicted that President Woodrow Wilson would win reelection against his opponent, Charles Evans Hughes.
  • With Wilson’s election, the Literary Digest poll became somewhat of a national sensation. And it continued to correctly identify winners from 1920 through 1932.
  • However, the magazine’s 1936 prediction that Alf Landon would handily defeat Franklin D. Roosevelt set the polling industry on end when Roosevelt defeated Landon by a landslide. Though the magazine’s sample was impressive, with 2.4 million participants, it failed to represent the Country. It was skewed to more affluent Americans, those more hostile to Roosevelt and his New Deal platform.
  • Political polling’s most monumental gaffe was the 1948 public embarrassment in forecasting Thomas Dewey’s win over President Harry S. Truman. This error led to much internal examination within the polling industry, with a focus on building even more representative samples.
  • The most recent failure came in 2016, when the polls all but assured Hillary Clinton the next four years in the White House.

Postmortems of all three failures pointed to specific methodological oversights, but as each oversight was subsequently addressed, new anomalies presented themselves. The 2020 polls at least succeeded in correctly identifying the winner of the presidential race in 48 states (missing in Florida and North Carolina). Those misses punctuate the American polling industry’s inability to fully correct the problems discovered in the 2016 failure.


What Went Wrong in 2020?

2020’s poll predictions are prompting intense self-examination by polling firms to better understand what, even after 2016’s fiasco, they still haven’t fully understood or accounted for.

  • Are current contact methods outdated? Should text messaging and other higher-tech communication tools be given a more central role in respondent recruitment and data collection?
  • Should the media’s reporting of early poll results be better qualified or couched? While the public (and politicians) are highly involved in reports of results, perhaps the media should give them less prominent coverage. At a minimum, all survey results should be accompanied by a caveat acknowledging the confidence intervals inherent in reported survey results and the possible impact of statistical variation.
  • Response rates to opinion polling have taken a nosedive. Pew Research reports that typical response rates to telephone opinion polling now stand at a paltry 6%, down from 50% and higher in the 1980s. Such a low response rate undermines, perhaps even pragmatically negates, the ability of weighting algorithms to produce representative results from the few interviews conducted.
  • Survey respondents are systematically biased. Cooperators appear to be ideologically different from non-cooperators, yet most survey weighting procedures rest on the enabling premise that there are no systematic ideological differences between responders and nonresponders. That tenet is obviously no longer valid.
  • New mindsets seem to be developing within the population. One such group is increasingly distrustful of institutions. These individuals may either be less willing to participate in surveys or may participate but attempt to contaminate the pollster’s findings by stating positions that are contrary to those they actually hold.
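The confidence-interval and weighting points above can be made concrete. The sketch below (Python, using entirely hypothetical population shares and support figures, not real polling data) shows a textbook margin-of-error calculation and a minimal post-stratification reweighting: weighting can correct a demographic skew in who responds, but not an ideological difference between responders and nonresponders within the same demographic group.

```python
# All figures below are hypothetical, for illustration only.

# 95% margin of error for a simple random sample: z * sqrt(p * (1 - p) / n).
# At p = 0.5 and n = 1000 this is roughly +/- 3.1 points, which is wider
# than the final margins in several 2020 battleground states.
n = 1000
moe = 1.96 * (0.5 * 0.5 / n) ** 0.5

# Post-stratification: reweight each group to its known population share.
population_share = {"college": 0.35, "non_college": 0.65}  # e.g., from census
sample_counts = {"college": 500, "non_college": 500}       # skewed sample
support = {"college": 0.60, "non_college": 0.45}           # observed support

# Weight for each group = population share / sample share.
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in population_share}

# Unweighted estimate: a plain average over respondents (0.525 here).
unweighted = sum(sample_counts[g] * support[g] for g in support) / n

# Weighted estimate: groups rebalanced to population shares (0.5025 here).
weighted = sum(sample_counts[g] * weights[g] * support[g] for g in support) / n
```

The weighting step fixes the over-representation of college graduates, but if non-college responders differ ideologically from non-college nonresponders, the observed 0.45 is itself biased and no weighting scheme can recover the true value. That is precisely the premise the fourth bullet says has broken down.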

 

The Challenge Ahead

Stepping outside the political polling arena, these observations carry serious warnings for the marketing research community. We understand few of our readers are involved in public opinion polling per se, but the opportunity for learning and improvement is present. It would probably be naïve to claim that the consequences of incorrect survey results are more important in business than in politics. And yet, because of the dollars associated with a new product rollout or the budget for a new advertising campaign, perhaps marketing researchers view their polling responsibilities a bit differently from their public opinion colleagues.

In any event, marketing researchers, who have long dealt with declining response rates and evolving contact technologies, need to further consider the implications of the systemic societal changes that have derailed the election polling industry. One of the most challenging appears to be the systematic differences between responders and nonresponders. The infrequent practice of interviewing, through multiple contact methods, a small sample of nonresponders appears increasingly critical for the future.

Library of Congress, Prints and Photographs Division, NYWT&S Collection, [LC-DIG-ppmsca-33570]

Tags: elections, market research for politics, politics, polling, survey data


Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.

