How math won big at the Oscars

Ben Zauzmer, a Harvard senior, developed an algorithm that he used to predict Oscar results. Suzanne Kreiter/Globe staff

At Sunday night’s Oscars, Lady Gaga proved the power of her voice, John Legend and Common proved the power of song, and, unwittingly, the Academy of Motion Picture Arts and Sciences proved the power of math.

That’s right, math.

Last week, the Boston Globe published my predictions for the 87th Academy Awards. As I do every year, I used only data and statistics – no personal opinions or hunches allowed – and last night math went on a great run.

“Birdman” or “Boyhood” for best picture? The math said “Birdman,” and it won. How about the director race that so many people said favored Richard Linklater? The math gave Alejandro G. Iñárritu a surprisingly comfortable lead, and sure enough he went on to claim the honor. And so it went, with my model correctly predicting contested races for actor in a leading role (Eddie Redmayne), original screenplay (“Birdman”), adapted screenplay (“The Imitation Game”), foreign language film (“Ida”), visual effects (“Interstellar”), makeup and hairstyling (“The Grand Budapest Hotel”), sound editing (“American Sniper”), and sound mixing (“Whiplash”), plus a slew of less competitive categories.

My model got 18 correct out of 21, good for an 86 percent mark. That includes going 8 for 8 in the top categories (picture, director, the four acting awards, and the two screenplay honors). I ended with my personal predictions for the three short film categories – there’s not enough data to predict those three mathematically – and with those included, my Oscar ballot went 21 for 24, or 88 percent.


How did it work? The basic idea is that in each category, I look for predictors that have a good track record of correlating closely with the Oscars. Then I use statistics to weight those predictors based on how well they have done at calling previous years’ Oscars.
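The weighting idea above can be sketched in a few lines of Python. Everything here is illustrative: the predictor names, accuracy figures, and film labels are made up for the example, not taken from the actual model.

```python
# Hypothetical historical accuracy of each precursor award at predicting
# a given Oscar category (fraction of past years it matched the winner).
# These names and numbers are invented purely for illustration.
predictor_accuracy = {
    "guild_award": 0.80,
    "critics_award": 0.65,
    "golden_globe": 0.55,
}

# Hypothetical picks: which nominee each predictor favored this year.
predictor_picks = {
    "guild_award": "Film A",
    "critics_award": "Film A",
    "golden_globe": "Film B",
}

def score_nominees(picks, accuracy):
    """Give each nominee the summed weight of the predictors backing it,
    then normalize so the scores behave like rough win probabilities."""
    scores = {}
    for predictor, pick in picks.items():
        scores[pick] = scores.get(pick, 0.0) + accuracy[predictor]
    total = sum(scores.values())
    return {film: weight / total for film, weight in scores.items()}

print(score_nominees(predictor_picks, predictor_accuracy))
```

In this toy example, "Film A" ends up favored because the predictors backing it have the stronger combined track record, even though the predictors disagree.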


So which ones went wrong? For starters, nothing was entirely off the mark. In all three of the categories the math missed, the film it had in second place (out of five nominees) ended up taking home the trophy. Furthermore, none of those three categories (film editing, original score, and animated feature) were mathematical blowouts: the gaps between first and second place were all less than 30 percent. And sometimes events with lower percentages occur; that’s why I provide percentages, not guarantees. What happened in all three of those categories was that multiple variables that are normally good Oscar predictors missed the mark, skewing the math.
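The point that percentages are not guarantees can be made concrete with a quick calculation. Using hypothetical frontrunner win probabilities (not the model's actual numbers), the expected number of misses is simply the probability mass the favorites don't have:

```python
# Illustrative only: made-up frontrunner win probabilities for a handful
# of categories. Even when every pick is the favorite, the probabilities
# themselves tell you to expect a few misses.
frontrunner_probs = [0.95, 0.90, 0.85, 0.70, 0.65, 0.60, 0.55]

expected_correct = sum(frontrunner_probs)
expected_misses = len(frontrunner_probs) - expected_correct

print(f"Expected correct: {expected_correct:.2f} of {len(frontrunner_probs)}")
print(f"Expected misses:  {expected_misses:.2f}")
```

With these invented numbers, picking the favorite in all seven categories would still be expected to miss about two of them, which is why a miss in a close race says little about whether the model was wrong.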

Some of this year’s Oscar oddities:

-This was the first year with more than five best picture nominees in which every nominee won at least one Oscar.

-No film had ever before won best picture, director, original screenplay, and cinematography, but nothing else. "Birdman" did that.

-No previous film had taken its only Oscars in production design, costume design, makeup and hairstyling, and original score. “The Grand Budapest Hotel” did that.

-No previous film had won only supporting actor, film editing, and sound mixing. “Whiplash” did that.

The lesson? Even when the Oscars break from history in small ways, using math to analyze past results is an excellent way to predict future Academy Awards.

Ben Zauzmer can be reached @BensOscarMath.