After nailing the 2012 elections, number crunchers suggest pollsters are asking the wrong question
President Obama wasn’t the only winner in November’s election: Math also triumphed. At the forefront of the algorithmic charge was numbers nerd Nate Silver, who correctly predicted the presidential winner in all 50 states on his New York Times blog FiveThirtyEight.
Silver incorporated several factors into his calculations, including whether a candidate was an incumbent and how much money a candidate received in campaign contributions. But a hefty portion of Silver’s predictive secret sauce was the careful aggregation and weighting of results from multiple polls. To the chagrin of many, he was dead-on, and election night was touted as a win for Silver as much as it was for the president. (As one fan tweeted: “Tonight, Nate Silver is the Chuck Norris of the Web. Or Chuck Norris is the Nate Silver of fighting.”)
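To make the aggregation idea concrete, here is a minimal sketch in Python. The weighting scheme (sample size plus a 14-day recency half-life) and the three polls are invented for illustration; Silver’s actual model is far more elaborate, folding in pollster ratings and adjustments for each firm’s partisan lean.

```python
import math

# A minimal sketch of weighted poll averaging. The weights (sample size
# and a 14-day recency half-life) are illustrative assumptions, not
# FiveThirtyEight's actual formula.

def aggregate(polls):
    """Each poll: {'share': percent for a candidate, 'n': sample size,
    'age': days since the poll was taken}."""
    weighted_sum = total_weight = 0.0
    for poll in polls:
        size_weight = math.sqrt(poll['n'])          # error shrinks like 1/sqrt(n)
        recency_weight = 0.5 ** (poll['age'] / 14)  # older polls fade out
        w = size_weight * recency_weight
        weighted_sum += w * poll['share']
        total_weight += w
    return weighted_sum / total_weight

polls = [                                  # hypothetical polls
    {'share': 51.0, 'n': 1200, 'age': 2},
    {'share': 48.5, 'n': 600,  'age': 10},
    {'share': 50.2, 'n': 900,  'age': 5},
]
print(f"Aggregated estimate: {aggregate(polls):.1f}%")
```

The upshot: a big, fresh poll moves the average far more than a small, stale one, which is how an aggregator can outperform any single survey.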
Silver wasn’t the only math whiz prognosticating about this year’s political races. David Rothschild, an economist at Microsoft Research in New York City, predicted back in February on his blog The Signal that Obama would win the election. That’s right, February. (Florida, which Obama won by a margin of less than 1 percent, was the only state The Signal didn’t call correctly.) Rothschild’s recipe for seeing the future was also a mix of ingredients, including economic indicators, incumbency and Obama’s approval rating.
Clearly these multifactor mathematical approaches have merit. But recently Rothschild and Justin Wolfers, an economist at the University of Michigan, have been making the case that predicting the political future may come down to one simple question.
It’s not the poll response typically reported over and over during an election cycle. Since the 1930s, pollsters have been developing their forecasts primarily by asking one thing: If the election were held today, who would you vote for? If the sample is both large and representative, this “intention” question can work. It’s also tidy, as Rothschild explained at this year’s New Horizons in Science meeting in Raleigh, N.C. The raw numbers alone — X percent for Obama, Y percent for Romney — tell the story.
But the intention question gets at who a person would vote for today, which can be misleading the farther you are from the election. Incumbents, for example, tend to look worse in polls taken around Labor Day than on Election Day, notes Rothschild.
If pollsters are going to rely on a single question, it should not be about voter intentions, Rothschild and Wolfers argue in a recent National Bureau of Economic Research working paper. Pollsters should ask about expectations: Who do you think will win the election?
While the classic intention question reveals just one data point (who the respondent plans to vote for), the answer to the expectation question is far richer. Beyond hinting at how the respondent will vote, it also folds in information about how that person’s friends and family will vote. The answer may even encode how polled people and their social peers might be influenced by debates, political ads or their own inner fickleness.
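To see how much that extra information can be worth, consider a toy simulation, a sketch under strong assumptions: candidate A truly leads 52 to 48, and each respondent’s social circle is a random draw of 10 voters from the whole electorate (real circles, of course, skew toward people like the respondent).

```python
import random

# Toy simulation: an intention answer is one vote, while an expectation
# answer summarizes the respondent plus 10 contacts. Assumptions (ours,
# not the Rothschild-Wolfers paper's): a true 52-48 race and social
# circles drawn at random from the electorate.

random.seed(1)
TRUE_SUPPORT = 0.52   # candidate A's true vote share
CIRCLE = 10           # contacts each respondent can "observe"
SAMPLE = 25           # respondents per poll, deliberately small
TRIALS = 2000         # number of simulated polls

def poll_calls_a():
    """Run one small poll; report whether each question calls A the winner."""
    intent_for_a = expect_a_wins = 0
    for _ in range(SAMPLE):
        own_vote = random.random() < TRUE_SUPPORT
        intent_for_a += own_vote
        # Expectation answer: who leads among the respondent and contacts?
        circle_for_a = own_vote + sum(
            random.random() < TRUE_SUPPORT for _ in range(CIRCLE))
        expect_a_wins += circle_for_a > (CIRCLE + 1) / 2
    return intent_for_a > SAMPLE / 2, expect_a_wins > SAMPLE / 2

intent_right = expect_right = 0
for _ in range(TRIALS):
    i, e = poll_calls_a()
    intent_right += i
    expect_right += e

print(f"Intention question picks the true winner in {intent_right / TRIALS:.0%} of polls")
print(f"Expectation question picks the true winner in {expect_right / TRIALS:.0%} of polls")
```

In this setup a poll of 25 people effectively taps the votes of some 275, so the expectation tally calls the close race correctly more often than an intention poll of the same size.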
Fortunately, some polling outfits have historically asked both the intention question and the expectation question, which allowed Rothschild and Wolfers to do some comparing. The economists looked at Electoral College results of presidential races from 1952 to 2008. In most cases, the answers to the intention and expectation questions were the same. But in the 77 cases in which the questions predicted different outcomes, the expectation question correctly picked the winner 78 percent of the time.
It turns out that even when you have a small, skewed sample, the expectation question can provide an accurate forecast, said Rothschild. In his analysis of recent election polling of Xbox users, who tend to be young and male, the expectation question outperformed the intention question and did a pretty good job of predicting the outcome of the 2012 presidential race.
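A toy version of the skewed-sample effect, building on the sketch above: suppose respondents come from a group in which only 35 percent back the eventual winner (an invented figure standing in for the young, male Xbox skew), but each one can observe 20 contacts who, for simplicity, mirror the full 52-48 electorate.

```python
import random

# Sketch of polling a skewed sample. Respondents' own votes favor the
# trailing side (35% for A, an invented figure), while their 20 contacts
# are assumed, unrealistically neatly, to mirror the 52-48 electorate.

random.seed(2)
GROUP_SUPPORT = 0.35   # support for A inside the skewed sample
TRUE_SUPPORT = 0.52    # support for A in the full electorate
CIRCLE, SAMPLE = 20, 500

intent_for_a = expect_a_wins = 0
for _ in range(SAMPLE):
    own_vote = random.random() < GROUP_SUPPORT
    intent_for_a += own_vote
    circle_for_a = own_vote + sum(
        random.random() < TRUE_SUPPORT for _ in range(CIRCLE))
    expect_a_wins += circle_for_a > (CIRCLE + 1) / 2

print(f"Intention poll:   {intent_for_a / SAMPLE:.0%} say they will vote for A")
print(f"Expectation poll: {expect_a_wins / SAMPLE:.0%} expect A to win")
```

Even though nearly two-thirds of these simulated respondents personally back the loser, the expectation tally usually tips toward the true winner, because every answer reaches past the skewed sample into more representative circles.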
In 1936, an upstart named George Gallup correctly predicted Franklin Delano Roosevelt’s win, contradicting the Literary Digest, which had correctly forecast the winner of each presidential election since 1920. The Digest’s downfall was blamed on skewed sampling and a low response rate: The magazine mailed its poll to voters drawn from phone books and car registrations at a time when many Americans had neither a telephone nor a car. Gallup reportedly sent pollsters out on the street to interview people in person.
It’s too bad old George isn’t here today. The polling operation that bears his name was rated least accurate and most biased of the 23 polling firms assessed by Silver’s blog.
SN Prime | December 3, 2012 | Vol. 2, No. 45