At Predictive Analytics World San Francisco this week I attended back-to-back sessions on econometrics, a word that doesn’t surface as often as I think it should. Bestselling books like Freakonomics: A Rogue Economist Explores the Hidden Side of Everything or Predictably Irrational: The Hidden Forces That Shape Our Decisions have brought economics into cocktail party conversations. But econometrics, the application of statistical and mathematical methods to understand the relationships between those key economic forces, is still waiting for its Hal Varian, the Chief Economist (!) at Google who famously said “…the sexy job in the next 10 years will be statisticians.”
In the first session, “Econometric Applications & Extracting Economic Insights from the LinkedIn Dataset,” Scott Nicholson said he got a PhD in economics because of his passion for understanding decision-making. He applies that passion to his work at LinkedIn, where one of the key questions they ask is why people return to the LinkedIn site. As an economist, he is interested in the causal mechanism – what actions people take that bring them back – but it is important to disentangle correlation from causation. A/B testing would be the gold standard, but for questions like this it would be costly to undertake. Instead they use panel data econometrics, which works well when all you have is observational data – in their case, individual site activity observed longitudinally. For example, what is a user’s subsequent behavior after installing the mobile app? While not as strong as A/B testing, this kind of insight moves them closer to causality with respect to behavior and therefore has some predictive power.
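Scott didn’t share his actual models, but the core idea of panel data econometrics can be sketched with a toy fixed-effects (“within”) regression. Everything below is simulated and hypothetical – the variable names, effect sizes, and the install-drives-visits story are made up for illustration, not taken from LinkedIn:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel: 200 users observed over 12 periods.
n_users, n_periods = 200, 12
user_effect = rng.normal(0, 2, n_users)              # unobserved per-user engagement
p_install = 1 / (1 + np.exp(-user_effect))           # engaged users install more often
installed = (rng.random((n_users, n_periods)) < p_install[:, None]).astype(float)
beta_true = 1.5                                      # true causal effect of install on visits
visits = (beta_true * installed
          + user_effect[:, None]                     # confounder: also drives visits
          + rng.normal(0, 1, (n_users, n_periods)))

# Naive pooled OLS is biased upward: installs and visits share the user effect.
xp = installed - installed.mean()
yp = visits - visits.mean()
beta_pooled = (xp * yp).sum() / (xp * xp).sum()

# Fixed-effects ("within") estimator: demeaning each user's series removes
# the time-invariant confounder, recovering the causal coefficient.
xw = installed - installed.mean(axis=1, keepdims=True)
yw = visits - visits.mean(axis=1, keepdims=True)
beta_within = (xw * yw).sum() / (xw * xw).sum()

print(f"pooled: {beta_pooled:.2f}, within: {beta_within:.2f} (true: {beta_true})")
```

The within transformation is why longitudinal observation matters: you can only subtract away a user’s fixed characteristics if you see that same user at multiple points in time.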
Scott gave other examples of econometric analysis, but his key point was that searching for these kinds of econometric insights helps them better understand how site visitors make decisions. With this understanding they can tweak the context in which those decisions are made to get better results, both for their business and for their customers.
Moving from the microeconomic view, the next session was by Azhar Iqbal of Wells Fargo Securities, and his lens was decidedly macro: “Macroeconomic Forecasting, Consensus & Individual Forecaster: A Real-Time Approach.” His focus is on short-term forecasting, which he does on a monthly basis. They compare their models with the Bloomberg real-time consensus as a benchmark and usually outperform it. Their focus is on better forecasts of the macroeconomic variables, since this gives firms more opportunity to make money; the biggest market movers come when there is a significant difference between his forecast and the market consensus. He talked about why it is important to have individual forecasters and why you can’t just find the best model and stick with it. If the last few years have taught us anything, it is that the unexpected will happen! However, for those who are interested, he currently uses a Bayesian vector autoregressive (BVAR) model with the so-called Minnesota prior.
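For the curious, the gist of a BVAR with a Minnesota-style prior can be sketched in a few lines. This is a deliberately simplified toy – a bivariate VAR(1) on simulated series, with a single tightness parameter and no intercepts – not Azhar’s actual specification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate two persistent monthly series as a stand-in for macro data.
T, k = 240, 2
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0, 1, k)

# VAR(1) design: regress y_t on y_{t-1} (intercepts omitted for brevity).
X, Y = y[:-1], y[1:]

# Minnesota-style prior: shrink each equation toward a random walk
# (own lag -> 1, cross lags -> 0); lam controls the prior tightness.
lam = 0.2
prior_mean = np.eye(k)                 # random-walk prior on the lag matrix
V_inv = np.eye(k) / lam**2             # prior precision (diagonal, simplified)

# Ridge-like conjugate posterior mean, all equations at once:
#   A_post = (X'X + V^-1)^-1 (X'Y + V^-1 * prior_mean)
A_post = np.linalg.solve(X.T @ X + V_inv, X.T @ Y + V_inv @ prior_mean)

forecast = A_post.T @ y[-1]            # one-step-ahead point forecast
print(np.round(A_post.T, 2))
```

The shrinkage toward a random walk is the Minnesota prior’s key design choice: most macro series are highly persistent, so in the absence of strong evidence the model defaults to “next month looks like this month,” which guards against overfitting short samples.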
I had breakfast with Azhar the next morning to learn more, particularly since he had referenced in his presentation the power and flexibility he gets from SAS for building these models. Given the amount of money on the line if your model is right (or wrong!), it is a perpetually challenging job. He humbly emphasized that it takes an entire team to produce this kind of analysis. Everyone wants to know why the meltdown of 2008 wasn’t better predicted, so naturally we spoke about that period. Whole books have been written on that question, so I won’t presume to tackle it here. But this guy survived that period, so he must be doing something right with his monthly forecasts!