Not every pollster blew it this election. It was a very bad cycle for the media networks, their partner major daily newspapers, and many of the universities that engage in high-profile election polling. On the flip side, groups like Rasmussen, Trafalgar, Susquehanna, and our John Zogby Strategies did quite well.
The chief problem is not a new one. Too many polls rely on bad samples – i.e., those whose turnout models fall far short of the actual Election Day turnout demographics. Most notable is the tendency to oversample Democrats and undersample Republicans. This is hardly a new issue. I began to see it as early as 1994 in the New York gubernatorial race and then throughout most of the presidential races since 1996. Part of the problem is that too many pollsters are located in New York, Washington, or on university campuses – all spots where a Republican is an occasional “sighting,” not a substantial reality. Hence, it all begins and ends in a context of insularity where clichés abound. Republicans then become folks who “have their guns and religion” or are seen as a “basket of deplorables.”
The other failure is that too many independent pollsters just don’t take voters’ party identification as seriously as they should. In this election, when 95% of Republicans supported Donald Trump and 94% of Democrats supported Joe Biden, it matters a whole lot if one group is over-represented in your sample and the other is under-represented. Too often during the election cycle, some of the best-known pollsters would report results with as many as 44% Democrats and as few as 28% Republicans in their samples. Small wonder their results would reveal a double-digit lead for Biden. While at times the party differentials were narrower, the narrative simply continued that there was a wide chasm between the numbers of Democrats and Republicans in the US.
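To make that arithmetic concrete, here is a rough back-of-the-envelope sketch of how party-ID composition alone moves a topline. The loyalty rates (95% of Republicans for Trump, 94% of Democrats for Biden) and the sample compositions come from the discussion above; the even split among independents is purely an illustrative assumption.

```python
def biden_margin(pct_dem, pct_rep, dem_loyalty=0.94, rep_loyalty=0.95,
                 ind_split=0.50):
    """Return Biden's lead in points for a given party-ID sample mix.

    dem_loyalty / rep_loyalty are the shares of each party backing
    their own nominee; ind_split (an assumption here) is the share of
    independents going to Biden.
    """
    pct_ind = 1.0 - pct_dem - pct_rep
    biden = (pct_dem * dem_loyalty + pct_rep * (1 - rep_loyalty)
             + pct_ind * ind_split)
    trump = (pct_dem * (1 - dem_loyalty) + pct_rep * rep_loyalty
             + pct_ind * (1 - ind_split))
    return round((biden - trump) * 100, 1)

# Skewed sample: 44% Democrat, 28% Republican
print(biden_margin(0.44, 0.28))   # 13.5 – a double-digit Biden lead
# Balanced sample: 37% Democrat, 35% Republican
print(biden_margin(0.37, 0.35))   # 1.1 – a much closer race
```

Nothing about the voters changes between the two calls; the double-digit lead is produced entirely by the composition of the sample.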
Throughout my career, I have always been open to applying weights to my samples for party identification, fully recognizing that Democrats are more likely to respond to surveys. It is not new that self-identified Republicans have been reluctant to answer telephone surveys, whether out of wariness of a liberal elite or because talk radio and now social media alert them not to trust polls. But I must tell you that never have any of my pre-weighted samples shown party identification discrepancies that forced me to weight by more than a point or two here and there. I use previous exit polls in similar races to capture a benchmark for what turnout has looked like and follow it up with current trends. For example, is there any reason to believe that there will be more or fewer Democrats or Republicans, more or fewer blacks or Hispanics, more or fewer evangelical Christians, and so on? This is the artistry involved in polling, but it is rooted in long years spent enumerating, capturing the heart and soul, and defining the drivers of real voting patterns. The very fact that I live in a neighborhood and region featuring both Republican and Democratic lawn signs, and listen to friends and neighbors who are passionate about both sides, keeps me well grounded. Thus, in the last few presidential elections, an average of 37%-38% identified as Democrats and 34%-35% said they were Republicans. We knew going in that enthusiasm was high on both sides. We knew that voter turnout was high among subgroups like blacks and evangelicals. And we knew that voter registration levels were high among women and Hispanics. So, we felt comfortable making our samples reflect a 38% to 34% Democrat-to-Republican ratio. Incidentally, one key way to measure enthusiasm is not simply to ask about it but to see how many undecided voters there are within key subgroups.
With only 1%-3% of these key groups saying they were not sure whom to vote for, we knew they were poised to turn out to vote. According to the exit polls, actual turnout was 37% Democrat and 35% Republican.
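The party-identification weighting described above can be sketched as simple post-stratification on a single variable. The 38%/34% Democrat/Republican targets come from the paragraph above; the raw respondent counts and the 28% independent share are illustrative assumptions, not actual survey data.

```python
# Post-stratification on party ID: each respondent gets a weight that
# scales their party's share in the raw sample up or down to match the
# target turnout model.

# Assumed raw sample (illustrative): over-represents Democrats at 44%.
raw_counts = {"Democrat": 440, "Republican": 280, "Independent": 280}
# Target turnout model; the 28% independent share is an assumption.
targets = {"Democrat": 0.38, "Republican": 0.34, "Independent": 0.28}

n = sum(raw_counts.values())
weights = {party: targets[party] / (raw_counts[party] / n)
           for party in raw_counts}

for party, w in sorted(weights.items()):
    print(f"{party}: weight {w:.2f}")
# Democrats (44% of the raw sample) are weighted down toward 38%;
# Republicans (28% of the raw sample) are weighted up toward 34%.
```

In practice this extends to raking across several variables at once (age, race, region, party), but the principle is the same: the weighted sample, not the raw one, should match the turnout model, and as noted above a sound sample should need only a point or two of adjustment.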
In short, understanding that party identification is a lead variable in people’s lives and must be treated as an important demographic as opposed to a “trailer variable” (to use a favorite term of my colleagues) is vital to our profession moving forward.
So too is the need to be innovative in a world where technology is dynamic and impacts social responses. Our polling industry is governed by the principle that “we are always going to conduct business the way we have always done it”. Ridiculous.
That leads to the polling industry’s unwillingness to adapt to new technologies. When I started in 1984, almost everyone had a landline, and it was both socially welcome and culturally acceptable to answer the telephone. Response rates averaged 65%. We have had plenty of warning over the years that the telephone was becoming less and less of a research-friendly tool. Response rates can be as low as single digits when calling landlines and infinitesimal when calling cell phones. Even though internet access is now near universal among likely voters, respondents can answer online polls at times more convenient to them, and online samples yield a better distribution of otherwise hard-to-reach groups like young people and nonwhites, the mainstream (let’s just say the establishment) pollsters continue to fight new technologies.
One criticism I have always had of some of my colleagues is that in their search for academic objectivity they are often reduced to asking completely neutered, meaningless questions. They lack the human element when asking about politics and policy. Some of us are on a quest to find out the real drivers of voting and behavioral decisions. What are the real values that make people do what they ultimately do? Remember, we are polling people, not simply data.
Finally, we pollsters ourselves need to set realistic expectations for what polls can tell us and what they cannot. We have to stop this silliness that we can predict final outcomes down to the tenth of a percentage point. What we should be communicating is whether or not a race is close, which groups tend to support one candidate or another, what needs to happen for one candidate to surpass another, and what the trend line shows. In 2016, the polls were just fine. We saw Hillary Clinton’s lead go from double digits ten days before the election into a daily downward spiral to bare leads, ties, and even a deficit against Donald Trump the day before. Just because the final published polls did not get the exact number in the final race does not mean in any way that those polls were not suggesting a potential outcome that mattered.
I am writing as someone who has been, more often than not, on the most accurate side of presidential polls. For the record, after repeated polls showing a much closer race in 2020, the John Zogby Strategies poll ended with a 5.6-percentage-point lead for Biden over Trump. Final numbers will reveal at least a 3.3-point edge, possibly higher. Polls that showed a double-digit lead were not only wrong, they were seriously misleading. It is time for a serious self-assessment of what went wrong with those polls. I hope this piece helps to generate that review.