
The midterm elections showed the US electorate is as divided as ever, but there was unity within one political constituency: pollsters, who (mostly) got it right this time.
Having failed to foresee Donald Trump’s stunning victory in the 2016 presidential election, polling firms changed how they survey US voters. Those changes seem to have paid off, as this year’s polls matched up pretty well with the behavior of actual voters.
There were a few spectacular misses, like Republican Ron DeSantis’s victory over Democrat Andrew Gillum in the Florida governor’s race, and the huge margin that Republican Mike Braun put up over incumbent Democratic US Senator Joe Donnelly in Indiana.
But otherwise, the polls came close to the real-world vote tallies in the great majority of the 435 House seats, 35 Senate seats, and 36 governorships that were on the line Tuesday.
Pollsters said they worked much harder to build a more diverse selection of voters into their samples, across all demographic lines: young voters, college-educated suburbanites, rural residents, the self-employed with only a high-school education, and so on.
“They were pretty much spot on,” said Nathan Gonzales, editor of Inside Elections, a non-partisan newsletter that offers political analysis based on polling data.
David Dutwin, president of the American Association for Public Opinion Research, said that even in 2016, claims of inaccurate polling were exaggerated. For instance, pollsters foresaw that Hillary Clinton would win the national popular vote.
“From an historic perspective, it was actually one of the lowest-error elections in a long time,” said Dutwin, noting an average error of just 1.5 percentage points in the national popular vote estimate.
But Dutwin acknowledged that the 2016 polls were less reliable at the state level, in part because state polls tend to have smaller samples, with fewer voters surveyed.
“The ‘likely voter’ models that get employed are less sophisticated,” he said.
Moreover, state polls are often run less frequently. Indeed, Dutwin said, during the last five days of the 2016 campaign, no polling was done in key states including Wisconsin and Michigan, which tipped the election to Trump.
This lack of frequent polling might explain this year’s surprise in Indiana, where Braun easily defeated Donnelly.
Spencer Kimball, faculty adviser for the Emerson College Polling Society, noted that Indiana law forbids pollsters from using automated phone calls. All calls must be dialed by a person, making the process much more expensive. As a result, said Kimball, “we don’t even poll in Indiana.”
The decline of the news industry also plays a role: papers in mid-sized cities could once afford to hire pollsters to track local races. These days, “media organizations in general just don’t have the budgets,” Dutwin said.
Pollsters also go wrong when they make false assumptions about who will come out to vote. This may explain why so many polls had Gillum ahead in the days before Tuesday.
“I think we overestimated the youth turnout in Florida,” said Kimball, who had assumed young voters would flood polling precincts to give Florida its first African-American governor. They didn’t. Instead, Kimball now thinks the Florida electorate “looked more like the presidential electorate” of 2016, which put the state in Trump’s column.
Exit polls, which the networks use to declare election victories, can also misjudge the electorate. In 2016, for instance, the exit polls used by the major TV networks undercounted the number of white voters without college degrees, the kind who voted for Trump.
David Scott, deputy managing editor of the Associated Press, said that on Election Day 2016 his agency didn’t use exit poll data to announce victories because it was so different from actual vote counts. “We told our decision team . . . take it off the table. We can’t use this,” said Scott.
Edison Research, the company that prepares the exit polls used by a consortium of news media, said in a statement that it learned a lesson from 2016 and modified its questionnaire to more accurately capture the educational background of the voters it interviews.
Edison conducts exit interviews in person at a sampling of polling stations around the country. Projections of winners are based on a mix of that information and county- and precinct-level voting returns.
But that approach wasn’t good enough for the AP and Fox News. This year, they worked with NORC, a polling organization based at the University of Chicago, to create VoteCast, a system that uses phone, face-to-face, and Internet surveys of about 130,000 voters in all 50 states. VoteCast mails out postcards seeking volunteers and interviews them either over the phone or via the Internet in English or Spanish. And instead of waiting until Election Day, VoteCast begins polling four days ahead of the vote.
Scott of the Associated Press said VoteCast “just nailed it,” correctly predicting 31 Senate races before polls had closed in any of them. But VoteCast missed the Republican win in Indiana. It also called the Arizona Senate race a tossup; as of this writing, that contest remains too close to call.
Some pollsters bristle at the idea that they’re in business to foretell the outcomes of elections. “Using a poll to predict how people are going to behave in the future,” said Patrick Murray, director of the Monmouth University Polling Institute, “is like using a hammer to put a screw in the wall.” Pollsters also ask questions about the issues that matter most to voters, and often it’s this information that is most useful to politicians.
Still, for citizens and candidates, the most interesting thing a poll can tell us is who’s out front. Which is why pollsters try so hard to get it right.
Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him on Twitter @GlobeTechLab.