Simple methods and ensemble forecasting of elections


Two enduring principles of forecasting are that simple methods can work as well as fancy methods, and that combining (averaging) forecasts, also known as "ensemble forecasting," usually results in more accurate predictions than the individual methods being averaged. We saw a good demonstration of these principles in Tuesday's election forecasts by Nate Silver on his FiveThirtyEight blog. But let me digress...

Six Methods of Election Forecasting

There are at least six kinds of methods used in election forecasting:

  • Nonsense: Basing the forecast on an observed historical correlation between the election outcome and a causally irrelevant variable. For example, the "Redskins rule," which asserted that when the Washington Redskins football team wins their last home game prior to the election, the party that holds the White House wins the election. When this rule failed for the first time in 2004, it was amended to assert that when the Redskins win, the party that won the popular vote in the previous election wins the election. (Recall Bush v. Gore in 2000.) Result: On November 4 the Redskins lost their home game, thus foretelling a Romney win.


  • Punditry: One step beyond nonsense (but just a baby step) are the forecasts of the once-employed politicians (Newt Gingrich I, Newt Gingrich II), once-relevant consultants (Dick Morris), once-funny comedians (Jim Cramer), and current intellectual leaders (Rush Limbaugh). Such forecasts are based on the nebulous concepts of "experience" and "gut feel." If you have a lot of money, there is no shortage of Washington, DC operatives willing to sell you their opinions and part you from your political contributions. A laudable attribute of the pundits is that they don't let data and scientific evidence get in the way of their viewpoints. (See Karl Rove vs. the quants at the Fox News Decision Desk.)


  • Econometric Models: The University of Colorado model stresses state-level economic data, including unemployment and changes in per capita income. This approach didn't do so well. It forecast a Romney win with 330 electoral votes, and correctly called just 3 of 13 battleground states (with Florida still to be determined). Yale economist Ray Fair's model is interesting in that it is claimed to have correctly predicted 21 of 24 presidential elections from 1916 through 2008. What should give one pause, however, is that Ray Fair wasn't born until 1942, so how did his model "predict" those elections that occurred before the model existed? Even if he were a child forecasting prodigy and perfected his model at age 2 in time for the 1944 elections, that would be 7 fewer election predictions to brag about. In fact, the model was first used only in 1980, and miscalled 1992, 2000, and now 2012 (Obama 49%), making it correct in just 6 of 9 elections, or not much better than tossing a fair coin. (Note: To be fair, Fair has stated that the 2012 prediction is within the margin of error, so too close to call. But this renders the model both uninteresting and irrelevant.)
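The coin-toss comparison is easy to verify: a fair coin "calling" each of 9 elections at random would match or beat a 6-of-9 record about a quarter of the time, as a quick binomial calculation shows.

```python
from math import comb

# Probability that a fair coin (50/50 guess per election)
# correctly calls at least 6 of 9 elections
n, hits = 9, 6
p_at_least = sum(comb(n, k) for k in range(hits, n + 1)) / 2**n
print(f"P(>= {hits} of {n} correct by chance) = {p_at_least:.3f}")  # 0.254
```

With roughly a 25% chance of matching that record by luck alone, "not much better than tossing a fair coin" seems a fair characterization.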


  • Prediction Markets: Relying on the "wisdom of crowds," the Iowa Electronic Market and Intrade are the two best known examples. On Monday Intrade priced Obama's chances at 72.4%, and IEM at about 75.7% (average price for the day in the winner-take-all market). Of course, just like a meteorologist predicting rain, if you don't forecast something as either 0% or 100%, one instance is not going to prove you wrong. IEM's vote share market averaged 50.9% for Obama on Monday, so that was pretty close.


  • Combination Models: PollyVote is an unweighted average of forecasts from five sources: polls, the IEM vote share prediction market, econometric models, expert surveys, and indexes based on voter perception and candidate biographies. In just its 3rd presidential election, PollyVote has always come within 0.5% of the two-party vote percentages, and was about 0.2% off this year (giving Obama 51.0% of the two-party vote).
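The mechanics of an unweighted combination are as simple as they sound. Here is a minimal sketch; the five component numbers below are purely illustrative placeholders, not PollyVote's actual inputs.

```python
# Hypothetical component forecasts of the incumbent's two-party
# vote share (%). These values are made up for illustration only.
forecasts = {
    "polls": 51.5,
    "IEM vote-share market": 50.9,
    "econometric models": 49.8,
    "expert surveys": 51.2,
    "index models": 51.6,
}

# The ensemble forecast is just the unweighted average of the sources
combined = sum(forecasts.values()) / len(forecasts)
print(f"Combined forecast: {combined:.1f}%")  # Combined forecast: 51.0%
```

The averaging tends to cancel the idiosyncratic errors of the individual methods, which is why the combination usually beats most of its components.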


  • Polling: The definitive source for election news is, of course, The Colbert Report, where New York Times blogger Nate Silver explained his prediction methodology: “Go and look at the polls and take an average and add up the states and see who has 270 electoral votes. It’s not really that complicated, but people treat it like it’s Galileo or something.” As I predicted on Tuesday, someone would pretty much nail the results and become famous, and Nate Silver is the one.
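The recipe in that quote really is that simple, and can be sketched in a few lines: average each state's polls, award the state's electoral votes to the poll leader, and tally toward 270. The poll margins below are invented for illustration and cover only a handful of states.

```python
# Sketch of the quoted recipe. Margins are (Democrat minus Republican)
# in percentage points, invented for illustration; "ev" is the state's
# electoral votes.
state_polls = {
    "Ohio":     {"ev": 18, "margins": [3, 2, 1]},
    "Florida":  {"ev": 29, "margins": [1, -1, 1]},
    "Virginia": {"ev": 13, "margins": [2, 0, 1]},
}

# Average each state's polls; the leader in the average gets all its EVs
dem_ev = sum(
    state["ev"]
    for state in state_polls.values()
    if sum(state["margins"]) / len(state["margins"]) > 0
)
print(f"Democratic electoral votes from these states: {dem_ev}")
```

A full model (Silver's included) layers adjustments on top of this, such as pollster weightings and simulation, but the poll average and the electoral-vote tally remain the core.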

A Win for the Quants

While the PollyVote and Nate Silver results were certainly a win for the principles of forecasting, remember one more important principle:

Don't jump to too many conclusions based on just one data point!

Sometimes good (or bad) results are just due to chance.


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
