Why Were the Pollsters Wrong Again in 2020?

Former vice-president and Democratic presidential nominee Joe Biden accepts the Democratic Party nomination for US president during the last day of the Democratic National Convention, held virtually amid the novel coronavirus pandemic, at the Chase Center in Wilmington, Delaware, on August 20, 2020. (Photo by Olivier Douliery/AFP via Getty Images)

We all know that the pollsters got it wrong when they forecast a Hillary Clinton victory over Donald Trump in the 2016 presidential election. Their error was not trivial, and so, post-2016, many in the polling industry appear to have done a great deal of soul-searching and studying to ensure that a mistake of this magnitude would not be repeated.

Now let us fast forward to 2020.

Once again, a variety of pollsters predicted a decisive victory for Joe Biden over President Trump. As The Times of London appositely pointed out, many of the pollsters who predicted that there would be a “blue wave” ended up with red faces.

To give one specific example, The Economist's vaunted, regularly updated forecast suggested one day before the November 3 election that Biden would win 350 electoral votes with greater than 90 percent probability and that the Democrats would take 52 Senate seats with higher than 75 percent probability. The presidential race has now been called broadly in favor of Biden, but it is clear that the best he will do is 306 electoral votes. So, why do pollsters keep getting it wrong?

Sampling Issues

One reason has to do with determining the degree to which a sample of contacted voters is representative of the larger group about which the pollster is seeking information.

For instance, does a randomly sampled list of 1,000 African-American women in Colorado that a pollster contacts truly represent all eligible African-American voters in Colorado? On a related note, is the number 1,000 sufficiently large, or should the sample size be increased?

The key point to comprehend is that these are sampling issues. As such, even if a pollster does not get the sample right in a given instance, there is an established body of work in sampling theory that can be drawn upon, at least in principle, to fix the underlying problem or problems.
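
To see what sampling theory offers here, consider the textbook margin-of-error calculation for a simple random sample. The short Python sketch below is purely illustrative: it assumes a simple random sample, a 95 percent confidence level, and a population proportion of 0.5, none of which is tied to any particular poll.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.
    n: sample size; p: assumed population proportion (0.5 is the most
    conservative choice); z: z-score for the chosen confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 respondents gives roughly a +/- 3.1 point margin,
# while quadrupling the sample to 4,000 only halves it to about +/- 1.5.
for n in (1000, 4000):
    print(f"n = {n}: +/- {100 * margin_of_error(n):.1f} percentage points")
```

Under these textbook assumptions, a sample of 1,000 already yields a margin of error of roughly three points; the harder question is whether the sample is representative in the first place.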

Human Behavior

A problem that is much harder to fix is well-known to economists and this concerns human behavior. Simply put, the issue is this: will an individual, when contacted by a pollster, truthfully reveal whether he or she plans to vote for Biden or Trump?

Because Donald Trump is a broadly unpopular candidate, many individuals have little incentive to answer truthfully: doing so would reveal to the pollster that they plan to support an unpopular candidate and would expose them to being judged a racist, or worse.

This kind of non-truthful response can certainly arise when a poll is conducted in person, and it can also happen over the phone.

President Donald Trump in the Brady Briefing Room of the White House. Photo: Jim Watson/AFP via Getty Images

Writing in Politico, Zack Stanton recently referred to this as the “shy Trump voter” phenomenon. Because of the presence of this phenomenon, it is certainly not axiomatic that even a carefully designed poll will lead to the truthful revelation of preferences.

When designing, for instance, an auction to sell 5G airwave licenses, where the truthful revelation of preferences is important, economists insist that whatever mechanism is used be incentive compatible.

This means that the incentives the mechanism designer (the Federal Communications Commission in the case of 5G airwave license sales) sets up must be such that the relevant players (mobile phone carriers) want to participate and that they also want to reveal their preferences (about how much they value the airwave licenses) truthfully.
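
The textbook illustration of an incentive compatible mechanism is the sealed-bid second-price (Vickrey) auction, in which bidding one's true valuation is each bidder's best strategy. The Python sketch below is a simplified illustration of that idea, not a model of the FCC's actual multi-round spectrum auctions; the carrier names and values are made up.

```python
def second_price_auction(bids):
    """Sealed-bid second-price (Vickrey) auction.
    bids: dict mapping bidder name -> bid amount.
    The highest bidder wins but pays the second-highest bid,
    which is what makes truthful bidding a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]
    return winner, price

# Hypothetical carriers and the private values they place on a license.
true_values = {"CarrierA": 120, "CarrierB": 100, "CarrierC": 80}

# If everyone bids truthfully, CarrierA wins and pays 100,
# keeping a surplus of 120 - 100 = 20.
print(second_price_auction(true_values))

# Shading the bid (say, 95 instead of 120) cannot lower the price
# CarrierA would pay -- it only risks losing the license outright.
shaded = dict(true_values, CarrierA=95)
print(second_price_auction(shaded))
```

Polls, of course, offer no such built-in reward for honesty, which is precisely the gap the article is pointing to.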

So, with regard to polling, sampling refinements alone, although important, will not yield accurate results.

Until pollsters figure out how to make their polls incentive compatible, it is unlikely that they will systematically produce results deemed reliable by the general public. It sure looks like pollsters have something to learn from practitioners of the so-called dismal science.
