When Donald Trump announced his presidential candidacy in June 2015, I began a column with this observation: “It speaks well of Republicans that most of them have no use for Donald Trump.”
That didn’t age well, did it?
In my defense, those words seemed perfectly true when I wrote them. According to polls at the time, 57 percent of Republicans had an unfavorable view of Trump. In fact, six out of 10 likely Republican voters said Trump was a candidate they would “never vote for.” Today, his approval among Republicans hovers near 90 percent.
Which just goes to show the folly of relying too heavily on opinion polls, especially when the public is only beginning to react to something. But pollsters find it irresistible to ask questions about the most attention-getting stories, and we in the media are drawn to the results.
The polling industry has given itself some terrible black eyes in recent years, with woefully botched calls on high-profile elections — not only in the United States (the 2016 presidential election and last year’s Alabama Senate special election), but also in Britain, France, Israel, and Colombia.
In the wake of so many blunders, much attention has been focused on the increasing unreliability of traditional polling techniques. Pollsters have been hurt in particular by the near-ubiquity of cellphones — which by law may not be called using automatic dialers — and Americans’ growing unwillingness to answer poll questions. According to the Pew Research Center, 36 percent of people would answer telephone surveys in 1997; today, just 9 percent will.
But as Karlyn Bowman, a scholar at the American Enterprise Institute, points out in a penetrating new essay for National Affairs, the problems with modern polling aren’t caused solely by these external pressures. “Other, more subtle changes reveal a chasm between pollsters and the public they observe, posing a threat to the credibility and usefulness of polls,” she writes.
One such problem is a growing resort to short-term polling. Between the Internet-driven acceleration of news cycles and the pollsters’ craving for media attention, surveys are designed more and more often to supply immediate feedback — and nothing more. “Pollsters ask questions about a controversial news event to secure coverage,” Bowman writes, “only to move on to the next topic, making it difficult to determine how public attitudes are changing over time.”
With consistent polling on the same subject, a reliable baseline of public opinion can be established. “But if pollsters ask about a subject only at the height of a controversy, as many tend to do now,” Bowman explains, there is no way to know whether their findings reflect stable public opinion or a momentary flare-up.
For example, pollsters asked questions about income inequality and capitalism during the Occupy Wall Street protests. But once Occupy faded away, so did most of the poll-takers. “Income inequality hasn’t gone away,” Bowman writes, “and knowing whether people’s views have changed or stabilized is worthwhile.”
Another self-inflicted problem is the polling industry’s near-relentless focus on politics. In years past, pollsters would regularly pose questions about daily life. Respondents might be asked whether they had ever traveled more than 1,000 miles from home, received a speeding ticket, or been robbed. “Reading the . . . poll commentary from decades ago,” Bowman remarks, “one is struck by the tremendous respect these pollsters had for the general public.” Pollsters used to regularly test Americans’ attitudes toward work and patriotism. Now, except for Gallup, few polling firms gather data to update those trends.
The more pollsters succumb to the hunger for clicks and headlines, the more they damage their indispensable core function, which is to determine what Americans really think. When surveys are ordered up with the expectation that they’ll get a mention on CNN or Fox News, the integrity of those surveys can’t help but be compromised. We in the media don’t make matters better when we play along with the game. We make much of poll findings that are nothing more than a snapshot of public opinion, with no insight into what those opinions mean or why they have (or haven’t) shifted.
Tellingly, polls today contain far more questions freighted with emotional terms — “angry,” “lying,” “compassionate” — than they used to. The sense they convey is of a public buffeted by a storm of feelings. That makes for a lot more clickable tweets, but a lot less meaningful understanding.
Good polling data is a vital research tool, but through these and other distortions, pollsters are undercutting the value of their own work. They will find a way to solve the problems of cellphones and falling response rates. But not every problem has a technological solution. The polling industry has a deeper problem: It is losing touch with the very people it is supposed to help us understand, and it cannot fix that unless it fixes itself.