Predictive modeling competitions: the competitive dimension of predictive analytics


After sporting events or major elections like the recent U.S. mid-term Senate elections, I tend to look back at how the various predictions performed prior to these events, to find out who got it right. My interest in this was sparked by reading Nate Silver’s book The Signal and the Noise and by starting to follow his blog FiveThirtyEight. In Europe there is not as established an industry around organizing predictions for various topics, whether using judgmental or predictive modeling approaches, as there is in the US. See, for example, the election projection website.


In fact, one of the aspects of predictive modeling that has fascinated me throughout my professional career is that it’s relatively easy to make fair comparisons between alternative modeling approaches - at least compared to other data mining techniques, where the quality of a given solution also depends on soft aspects, such as interpretability (e.g., for clustering results) or “interestingness” (e.g., for association rules).

Comparing the results of election or sports predictions is done mostly in fun, and for news value. But today, some organizations are relying on our competitive natures to solve interesting and worthwhile problems through analytics competition.

Admittedly, you first have to agree on a set of proper statistical accuracy criteria to measure predictive performance, and on a proper holdout sample on which to apply those criteria. I’m not saying these are easy decisions; how best to run analytics competitions is still a subject of ongoing academic research. Yet once you’ve settled these two aspects, you’re ready to compare various algorithms in a contest situation and pick the winner, or produce an ensemble model (where you combine the different models’ predicted outcomes using a weighted or unweighted average). A minimal sketch of that workflow follows below.
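To make that workflow concrete, here is a small sketch in Python with scikit-learn. The library choice, the synthetic data, and the AUC criterion are my own illustrative assumptions, not something prescribed by any of the competitions discussed here: two candidate models are scored with the same criterion on the same holdout sample, and their predictions are then combined into a simple unweighted-average ensemble.

```python
# Illustrative sketch only: holdout-based model comparison plus a simple ensemble.
# Synthetic data and model choices are assumptions made for this example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic data stands in for the competition data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

# The agreed-upon holdout sample: every contestant is judged on the same split.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=42)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

# The agreed-upon accuracy criterion: here, AUC on the holdout sample.
holdout_preds = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    preds = model.predict_proba(X_hold)[:, 1]
    holdout_preds[name] = preds
    print(f"{name}: AUC = {roc_auc_score(y_hold, preds):.3f}")

# Unweighted-average ensemble of the contestants' predicted probabilities.
ensemble = np.mean(list(holdout_preds.values()), axis=0)
print(f"ensemble: AUC = {roc_auc_score(y_hold, ensemble):.3f}")
```

In a real competition, of course, the holdout sample is hidden from the participants and the accuracy criterion is fixed up front, which is exactly what makes the comparison fair.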

A lot of software packages have this type of comparison functionality built into their design. Take, for example, SAS® Enterprise Miner, where you can use the data partition and model comparison nodes in conjunction with different modeling nodes to set up such a contest. In the world of time series predictions, SAS® Forecast Server offers something similar with its holdout sample and the concepts of automatic diagnostics and model selection lists.

Taking this one step further is where external competitions come into play. These competitions bring together two parties and produce a win-win situation. On the one hand, you have the “commissioning party” that wants a particular business problem solved. Its interests might range from gaining scientific insight into the behavior of particular algorithms to commercial exploitation of the results and the winning algorithm. On the other hand, you have the “contracting parties” that want to participate and bring their specific modeling approaches to the contest. It’s not uncommon for multiple people to work together in teams, bringing together expertise from areas as diverse as academic research, commercial software vendors, and business consulting.

Some interesting competitions to look at

If you’re thinking about putting a new algorithm or analytic approach to the test, below is a summary of some of the major analytics competitions I’m aware of.

The first one that comes to my mind is the KDD Cup, an annual competition organized by SIGKDD, the Special Interest Group on Knowledge Discovery and Data Mining, an organization of data mining professionals. Challenges presented to the data mining audience cover a wide spectrum of topics, ranging from customer relationship management to scientific research (biomedical, physics). Data and results from past competitions can also be accessed from that website. Often, participating teams are a mix of commercial software vendors, consultants, and academic researchers.

The Data-Mining-Cup, also held annually and hosted by German software vendor prudsys, provides another example of bridging the gap between academic research and the business requirements of industry. Each year a particular data mining challenge (with a focus on practical relevance) is presented, and student teams from national and international universities are invited to compete to solve the problem. Recent topics have included recommendation engines, online shop pricing, and voucher targeting for online shops. Details on past competitions and downloadable data can be found in the Review section of the website.

The Netflix Prize is an example of a competition organized by a company for commercial exploitation of the results. Netflix is a US-based provider of online DVD rental and on-demand streaming services, mainly for movies and TV series. Netflix awarded a prize of US$1 million to the team or person that could best improve its existing algorithm for predicting which movies a user is going to like, based on a set of training data from users with known movie ratings. Technically, this is known as collaborative filtering, a sub-discipline of recommender systems. The competition was launched in 2006 and ran until 2009; a planned follow-up competition was cancelled in 2010 because of a lawsuit over privacy violations and privacy concerns raised by the Federal Trade Commission in the U.S. Although no longer running, it’s still a useful example, as it spawned a lot of research in the area of recommendation engines.
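To give a feel for what collaborative filtering does, here is a deliberately tiny sketch in Python/NumPy. The ratings matrix and the similarity-weighted prediction are purely illustrative assumptions on my part; Netflix’s actual algorithms, and the winning entries, were far more sophisticated.

```python
# Illustrative sketch of user-based collaborative filtering on a made-up ratings
# matrix. This is not Netflix's algorithm, just the basic idea.
import numpy as np

# Rows = users, columns = movies; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 4, 1],
    [1, 2, 1, 5],
    [1, 0, 2, 4],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors (0 if either is all zeros)."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

def predict_rating(user, movie):
    """Similarity-weighted average of other users' ratings for this movie."""
    weights, scores = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, movie] > 0:
            weights.append(cosine_similarity(ratings[user], ratings[other]))
            scores.append(ratings[other, movie])
    if not weights:
        return np.nan
    return float(np.average(scores, weights=weights))

# Predict how user 0 would rate movie 2, which they have not seen yet.
print(f"predicted rating: {predict_rating(0, 2):.2f}")
```

The idea is simply that a missing rating is estimated from users with a similar rating history; the real challenge in the competition was doing this well at the scale of millions of users and ratings.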

The M competition, initiated by forecasting researcher Spyros Makridakis, has a rather special focus on time series predictions. The goal is to evaluate the performance of various forecasting algorithms on a large number of economic and socio-demographic time series. The original competition was held in 1982, and 1993 and 2000 saw two follow-up competitions, named M2 and M3 respectively. Data sets for the three competitions are still available and can be found at the International Institute of Forecasters website. Information on another follow-up competition, M4, is available as well.

The predictive modeling platform Kaggle takes the concept of competitions even further. Here you can register, host your own competition, and have a large community of international data scientists take a look at it. Challenges come not only from scientific research and universities, but also from commercial organizations. For each of the registered competitions you will find detailed information on prizes, participants, leaderboards, etc.

The competitions described above are the best-known ones. I’m sure there are others as well, perhaps hosted by universities and with a more specific focus. A good overview of available competitions can also be found at KDNuggets.

Final take

Personally, I think predictive modeling competitions are one of the most useful realizations of the crowdsourcing idea.

If you’re a researcher and you have developed a new algorithm or analytic approach to a particular problem, they provide you with a nice testing platform. You will get honest feedback on where you stand vis-à-vis others and might gain insight into where to improve your algorithmic development.

For a commercial organization, competitions can provide a convenient way to “outsource” the effort of optimizing your predictive modeling framework, especially via predictive modeling platforms like Kaggle. Let the international community of experts do the grunt work for you. You just need to answer three main questions before you start:

  • What prize are you willing to award to incentivize the competition participants?
  • What data are you able and allowed to provide? How do you handle data privacy concerns? This was an important lesson learned by Netflix!
  • How do you deal with legal issues such as ownership of the intellectual property that went into the algorithm development and permissions to use it for commercial exploitation?

So, I hope predictive modeling competitions are here to stay. They certainly help to keep advancing the development of new algorithms.

Curious about the next U.S. presidential election in 2016? You can already start making forecasts here.


About Author

Stefan Ahrens

Sr Solutions Architect

Stefan Ahrens studied economics at the Westfälische Wilhelms-Universität Münster, specializing in statistics and econometrics, and has worked as a Solution Architect in the Competence Center Analytics at SAS Institute Germany since November 2003. His current focus areas are statistical data analysis, data mining, forecasting, and fraud detection for various industries. Before joining SAS Institute, he worked as a statistician and analytical consultant at StatSoft, a statistical software vendor, and at Research International, a market research company.

