Nate Silver is the Kurt Cobain of statistics. Wait, bear with me! On Election Night 2008, Nate Silver went from just another political prognosticator – albeit one whose blog, Fivethirtyeight, was drawing millions of page views a week – to a rock star. Many other analysts had, like Silver, predicted a big win for Obama; but Silver’s data-driven system had correctly called every state but Indiana, and had gotten all the Senate races right to boot.
Taking the analogy further – both Cobain and Silver were devoted to cultural practices that had previously been confined to a small, inward-looking cadre of true believers (for Silver, quantitative forecasting of sports and politics; for Cobain, punk rock). And both proved that if you carried the practice out in public, with an approachable style but without compromising the source material, you could make it massively popular. (I think Fangraphs.com is Stone Temple Pilots in this analogy, but perhaps this goes too far.)
We don’t know yet whether Silver’s forecast of the 2012 election will be as accurate as it was in 2008. But he’s got a long track record of high-quality prediction, both in baseball statistics, where he got his start, and politics, where he made his name. His ambitious new book, “The Signal and the Noise”, is a practical handbook and a philosophical manifesto in one, following the theme of prediction through a series of case studies ranging from hurricane tracking to professional poker to counterterrorism. It will be a supremely valuable resource for anyone who wants to make good guesses about the future, or who wants to assess the guesses made by others. In other words, everyone.
A good prediction, Silver says, isn’t just an assertion of what’s to come. It’s a list of all possible outcomes, together with the probability we assign to each: what mathematicians call a probability distribution. That’s what Silver provides on his blog, now hosted by the New York Times – not just his best estimate of the number of electoral votes President Obama will collect (308.1, as I write this) but the probability of each possible future. The most likely outcome, for instance, isn’t 308.1 electors for Obama – that scenario, like any involving a fractional elector, is impossible – but 332, to which Silver assigns a nearly 20% chance. He gives Obama a 76.1% chance of reaching the 270 electoral votes he needs to win the election, and assigns to the nightmare scenario of an electoral tie a mere sliver of probability, 3 in 1000.
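The distinction between the expected value (308.1), the most likely outcome (332), and the headline chance of winning can be made concrete in a few lines of code. The outcomes and probabilities below are invented for illustration – only the 332-vote mode and the idea itself come from Silver – so this is a minimal sketch of the concept, not his actual forecast:

```python
# A forecast as a probability distribution over electoral-vote outcomes.
# These outcomes and probabilities are made up for illustration; only the
# idea (distribution vs. single number) is taken from the review.
dist = {
    332: 0.20, 347: 0.10, 303: 0.18, 290: 0.12,
    275: 0.10, 253: 0.15, 235: 0.10, 210: 0.05,
}

# The expected value is a probability-weighted average; like Silver's 308.1,
# it is usually fractional and therefore not itself a possible outcome.
expected = sum(votes * p for votes, p in dist.items())

# The most likely single outcome (the mode) is a different number entirely.
mode = max(dist, key=dist.get)

# The headline "chance of winning" is just the total probability mass
# at or above the 270 electoral votes needed.
p_win = sum(p for votes, p in dist.items() if votes >= 270)

print(round(expected, 2), mode, round(p_win, 2))  # → 289.89 332 0.7
```

Note that all three numbers answer different questions, which is exactly why reporting only one of them throws information away.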
If you just want to know who’s winning, this may seem like overkill. But, as Silver demonstrates in example after example, predictors risk disaster when they’re inexplicit or dishonest about the cloud of uncertainty surrounding their forecasts. In the case of the 1997 Red River flood in Grand Forks, ND, the Weather Service had predicted a crest height of 49 feet. The levees protecting the city were good up to 51 feet – no problem, right? Wrong. The forecasters didn’t know exactly how high the water would rise – not to the foot, let alone the inch; that’s not how rivers work. There was a range of reasonably likely crest heights, a range that reached as high as 59 feet. In fact, the river crested at 54 feet, overtopping the levees and pouring into Grand Forks itself, where it caused massive damage.
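The danger of the point forecast is easy to quantify. Suppose, purely for illustration, that the crest height is normally distributed around the 49-foot forecast with a 4-foot standard deviation – the distribution shape and spread are my assumptions, not figures from the book; the review supplies only the 49-foot forecast, the 51-foot levees, and a plausible range reaching 59 feet:

```python
# Sketch: why "49 feet, levees good to 51" was not safe.
# The normal shape and the 4-foot standard deviation are assumptions
# chosen for illustration, not the Weather Service's actual error model.
from statistics import NormalDist

crest = NormalDist(mu=49.0, sigma=4.0)

# Probability the river tops the 51-foot levees under this assumption.
p_overtop = 1 - crest.cdf(51.0)
print(f"{p_overtop:.0%}")  # → 31%
```

Even with a forecast two feet below the levee tops, the assumed uncertainty leaves a flood risk closer to one in three than to zero – which is the information a bare "49 feet" conceals.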
This stuff is easy to get right for anyone with some experience in data analysis and a little college math. But most people don’t have experience in data analysis, and don’t want to deal with college math. Herein lies Silver’s real genius as a writer: his book teaches almost as much mathematics as I could in a semester-long course, and it does so with almost no resort to equations or abstract formalism.
So why, if the main lessons of “The Signal and the Noise” are well understood by experts, are there so many lousy predictions in the world? Silver watched dozens of episodes of The McLaughlin Group and recorded almost a thousand predictions made by the political know-it-alls featured on the program. He found that each panelist’s predictions were about half right, half wrong. These are people who’ve spent their lives studying politics; why can’t they beat a coin flip?

In answer, Silver makes an important point that is too seldom spoken aloud. People get the future wrong not just because getting the future right is hard, but because getting the future right isn’t their goal. The job of a political pundit is to say provocative things on television, not to be correct. If you make a crazy prediction (“Obama’s gonna dump Joe Biden and run with Hillary!”) and it doesn’t happen, the matter is instantly forgotten. But if you’re right, you can live off the reputational proceeds for years.

Silver finds a similar phenomenon among TV weather forecasters, who quite cheerfully reveal that they report a 20% chance of rain when their best guess at the real probability is more like 5%. “People notice one type of mistake – the failure to predict rain – more than another kind, false alarms,” Silver observes. “If it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic, whereas an unexpectedly sunny day is taken as a serendipitous bonus.” Earthquake predictors, poker players, and economists – boy, does Silver do a number on economists – come in for the same kind of criticism.
You might think Silver, the data maven, would demand this problem be addressed by more rigorous adherence to cold, objective math. But it’s just the opposite. Silver finds throughout that statistical techniques work best when watched over by human eyes and guided by human minds. The hurricane trackers, who emerge as the heroes of this book, rely on their own intuition to tweak the results the algorithms spit out: “they learn to work around potential flaws in the computer’s forecasting model, in the way that a skilled pool player can adjust to the dead spots on the table at his local bar.” Silver’s baseball prediction system, PECOTA, saw things the long-time scouts didn’t; but once scouts learned to combine the insights of sabermetrics with their decades of experience watching, testing, and talking to players, they were quickly able to beat Silver’s system.

Prediction is a fundamentally human activity. Just as a novel is no less an expression of human feeling for being composed on a laptop, the forecasts Silver studies – at least the good ones – are expressions of human thought and belief, no matter how many theorems and algorithms forecasters bring to their aid. The math serves as a check on our human biases, and our human insight serves as a check on the computer’s bugs and blind spots. In Silver’s world, math can’t replace or supersede us. Quite the contrary: it’s math that allows us to become our wiser selves.
Jordan Ellenberg is a professor of mathematics at the University of Wisconsin–Madison. His book, “How Not to Be Wrong”, will be published in early 2014. He can be reached at email@example.com.