Despite what you may have heard, Donald Trump isn’t winning the Republican primary. He has zero votes, just like everyone else. All the talk about front-runners, surging rivals, and underperforming insiders is based solely on polling. And polls aren’t votes.
Nor are polls particularly accurate. Last year’s UK election was expected to be a nail-biter, until voters handed the Conservatives an easy victory. In the United States, 2014 polls suggested a tight race for control of the Senate, but Republicans ran away with it.
Until caucus-goers in Iowa start making their choices official, we won’t know whether Hillary Clinton’s polling advantage was a mirage, or whether Trump’s towering lead had a shaky foundation.
Why are polls often wrong?
Polling is a difficult, error-prone art. Remember the famous Chicago Tribune headline, “Dewey Defeats Truman”? That’s what Gallup was predicting in 1948. Twelve years earlier, the respected Literary Digest straw poll predicted a big win for Alf Landon over Franklin D. Roosevelt.
Why such inaccurate results? Presidential elections often hinge on just a few million ballots, in a nation with hundreds of millions of eligible voters. Polling groups are able to track the national mood and tease out likely results by surveying a few thousand people — or sometimes just a few hundred.
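As a rough, back-of-the-envelope illustration (the sample sizes here are mine, and real polls only approximate the simple-random-sample assumption behind the textbook formula), the 95 percent margin of error shrinks with the square root of the sample size, which is why a few thousand respondents can stand in for millions of voters:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Textbook 95% margin of error for a proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1000, 2000):
    print(f"n={n}: about ±{margin_of_error(n):.1%}")
```

Note the diminishing returns: going from 400 to 1,000 respondents tightens the estimate from roughly ±5 points to ±3, but doubling again to 2,000 buys only about another point.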
When it’s done right, it can provide a reasonable gauge of political sentiment.
To accomplish such a feat, pollsters need to complete a number of statistically subtle and logistically difficult tasks. For one, they need to find a representative sample: a small group of people whose views and voting habits mirror those of the hundreds of millions of potential voters.
On top of that, they need to figure out who is really going to show up on Election Day. And asking won’t do, because there are a lot of people who say they’re going to vote and then skip out.
Put these challenges together, and accurate polling starts to look like a mathematical miracle.
Is polling getting better?
Actually, polling has become a lot harder, for a variety of reasons.
■ People don’t want to answer questions. Time was, survey groups could expect 80 or 90 percent of people to answer the phone and share their views. These days, response rates are often below 10 percent, which makes it much harder to put together a good, representative sample.
■ Landline phones have disappeared. Conducting a poll using cellphones is much trickier. For one thing, you can’t rely on area codes to target people in specific regions, since people take their numbers when they move around. In addition, pollsters aren’t allowed to auto-dial cellphone numbers the way they can with landlines, making cellphone polling more labor intensive and more expensive.
■ The Internet isn’t representative. While online surveys are getting better, it’s still hard to put together a random, representative sample. There’s just no Internet equivalent to the old practice of “dialing a bunch of random phone numbers.” Plus, there’s a weird age mismatch, since Internet users tend to be younger, while voters tend to be older. As an example of how such problems play out in real life, note that Trump does better in Internet polls than in telephone-based ones, and no one knows which approach is the more accurate.
■ Pollsters tweak their results. If everybody else shows a candidate up 20 points, and your new poll has him down 5, it’s tempting to dismiss your result as a glitch and adjust your sample or assumptions to correct it. But what if your chief rival’s latest poll also shows the same candidate down, and they, too, make adjustments to match the general consensus? Pretty soon, everyone is fudging results to conform with other results that have also been fudged. This risk is known as herding, and it seems particularly likely in the days just before a big vote.
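A toy simulation (the numbers are entirely hypothetical) shows the herding mechanism in miniature: if each pollster publishes a blend of its own raw reading and the running consensus, an early error can linger long after honest data should have corrected it.

```python
import random

random.seed(1)        # reproducible illustration

true_support = 0.45   # hypothetical true share for a candidate
consensus = 0.50      # suppose early polls happened to overstate it
weight = 0.9          # how strongly pollsters pull toward the consensus

for i in range(5):
    raw = random.gauss(true_support, 0.02)               # an honest, noisy reading
    published = weight * consensus + (1 - weight) * raw  # the "herded" number
    consensus = published
    print(f"poll {i + 1}: raw {raw:.3f} -> published {published:.3f}")
```

In this sketch, every published number sits closer to the stale consensus than to the pollster’s own data, so the field converges on the truth only slowly.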
These seem like technical problems. Are there deeper concerns?
Consider this example from the 2014 Massachusetts gubernatorial primary, the race to see which Democratic candidate would square off against Charlie Baker for governor.
At the time, political science professor Jerold Duquette argued that the polls had become self-reinforcing: They showed Martha Coakley with a commanding lead, which made opposition seem futile, which led non-Coakley supporters to tune out or say they wouldn’t vote, which then strengthened Coakley’s lead in the polls.
When Election Day finally arrived, Coakley barely won. And that raises a vexing question: What would have happened in a world without polls? Perhaps the people discouraged by Coakley’s indomitable polling numbers would have turned out in greater numbers, tilting the election in favor of a rival.
How accurate are the 2016 presidential polls?
It’s impossible to know. That’s the really startling thing about this topic. All the hype, all the horse-race coverage, all the strategy and speculation are based on poll numbers that may be far less solid than they appear.
Even a bias of 3 or 4 percentage points could flip fortunes on the Democratic side or boost an overlooked Republican into new prominence. Bigger errors could produce greater turmoil.
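To make that arithmetic concrete (the candidates and numbers here are invented), a uniform 3-point polling bias in a two-way race is enough to reverse an apparent lead:

```python
# Hypothetical two-way race: polls show candidate A ahead, 51 to 49.
polled = {"A": 51.0, "B": 49.0}

def apply_bias(results, overstatement_of_a):
    """Correct a uniform polling bias: points wrongly
    credited to A actually belong to B."""
    return {
        "A": results["A"] - overstatement_of_a,
        "B": results["B"] + overstatement_of_a,
    }

actual = apply_bias(polled, 3.0)
print(actual)  # a 3-point bias turns a 51-49 lead into a 48-52 loss
```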
For pollsters, the day of reckoning is fast approaching. The early contests are just weeks away, and the results should tell us whether months of perpetual polling have helped to explain the real nature of our nation’s political divides — or distracted us, as we waited for the real race to begin.