(By David Brickell) So Obama won the US election in what - at least from an Electoral College perspective - increasingly looks like a landslide. The vote from Florida is not yet in, but assuming Obama wins that, which looks likely, he will have 332 Electoral College votes, compared with just 206 for Romney. This is despite endless pundits telling us that the race was just too close to call and, in recent weeks, that the "momentum" was with Mitt Romney. Gallup's daily national tracking poll put Romney ahead by five points before Hurricane Sandy, and a final national survey on November 5 gave the Republican a one-point advantage...
Who would have guessed it?
Well, actually, it turns out that there is one person who called it exactly: political blogger Nate Silver. Of the 50 states in the election, guess how many Silver predicted correctly?
All 50! Every single one? Yep, every single one. And that's on top of him calling the correct results of 49 out of 50 States in the 2008 Election.
So who is this guy? Silver is an American statistician who runs the political blog FiveThirtyEight. After graduating from the University of Chicago in 2000, Silver worked as an economic consultant before creating a model to predict baseball players' future performance. He then sold this to stats firm Baseball Prospectus and turned to politics during the 2008 primaries.
To make predictions, he uses a model that emphasizes past polling history and demographics. Unlike traditional pollsters, who put questions to a field of voters, Silver aggregates and weights other polls based on factors like the past accuracy of the polling firm, the number of likely voters and the composition of the electorate in each state. He then runs multiple simulations of the results (a technique known as Monte Carlo analysis), which yields a probability forecast. As of Sunday night, his view was that there was a 90.9% probability of an Obama win on Tuesday, with 332 EC votes as the most probable outcome.
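The mechanics of that approach can be sketched in a few lines of Python. Everything below is illustrative: the poll margins, the pollster-accuracy weights and the 3-point error assumption are invented for the example, and are not Silver's actual data or parameters.

```python
import random

# Hypothetical polls for one state: (margin for candidate A in points,
# weight reflecting the pollster's historical accuracy). Invented numbers.
polls = [(2.0, 0.9), (1.0, 0.6), (-0.5, 0.4), (3.0, 0.8)]

# Step 1: aggregate -- a weighted average of the poll margins.
weighted_margin = sum(m * w for m, w in polls) / sum(w for _, w in polls)

def simulate_state(margin, error_sd=3.0, runs=10_000, seed=42):
    """Step 2: Monte Carlo. Perturb the central margin by normally
    distributed polling error and count how often candidate A wins."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(runs) if margin + rng.gauss(0, error_sd) > 0)
    return wins / runs

p_win = simulate_state(weighted_margin)
print(f"central margin {weighted_margin:+.1f} pts, P(win) = {p_win:.0%}")
```

Run across all 50 states, the per-state win probabilities combine into a distribution over Electoral College outcomes, which is how a single headline figure like "90.9% probability of an Obama win" emerges.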
And lo and behold! Quite understandably, sales of Silver's recent book, The Signal and the Noise, spiked on Wednesday, reaching the No. 2 position on Amazon.
What does this mean?
This is a huge symbolic victory for what is called data science over the traditional forecasting of experts and "gut feel". As John Sides, a political scientist at George Washington University (beware the expert designation!), has noted, President Obama's victory was a win for:
"the Moneyball approach to politics... It shows us that we can use systematic data – economic data, polling data – to separate momentum from no-mentum, to dispense with the gaseous emanations of pundits' 'guts,' and ultimately to forecast the winner."
For those who haven't seen the Brad Pitt film, Moneyball tells the story of the approach developed by Oakland Athletics general manager Billy Beane, which relied on statistical analysis - known as sabermetrics - to evaluate players and teams rather than the "gut feel" opinions of scouts and managers. Silver has essentially applied this thinking to political polling. Polling in politics is nothing new, but before Silver, pundits could simply pick and choose the numbers and polls they liked to confirm their pre-existing opinions about the election. Silver's model is different in that it incorporates all the polls, and its analysis is based on historical precedent rather than partisan beliefs.
As VentureBeat has noted, the powerful thing about models like this is that they take the risk of myopic, self-confirming human bias out of the equation. Silver never allowed himself to tweak his algorithm; the only thing that changed was the incoming data. With his approach, you set your parameters at the start, deciding how much weight to give each input based on historical accuracy, and then you sit back and let the model do the work.
The success of Silver's relatively simple modelling work over the views of supposed experts mirrors a famous example in the field of medicine called the "Apgar score," invented by the anesthesiologist Virginia Apgar in 1953 to guide the treatment of newborn babies. The Apgar score is a simple formula based on five vital signs that can be measured quickly: Appearance, Pulse, Grimace, Activity, Respiration. It does far better than the average doctor in deciding whether the baby needs immediate help, and saves thousands of lives as a result.
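As an algorithm, the Apgar score is almost trivially simple, which is rather the point. A minimal sketch in Python (illustrative only; the five signs and the 0-2 rating per sign are the standard clinical convention, but this is not medical software):

```python
def apgar_score(appearance, pulse, grimace, activity, respiration):
    """Each of the five signs is rated 0, 1 or 2 by the clinician;
    the Apgar score is simply their sum, from 0 to 10."""
    signs = (appearance, pulse, grimace, activity, respiration)
    if not all(s in (0, 1, 2) for s in signs):
        raise ValueError("each sign must be rated 0, 1 or 2")
    return sum(signs)

# Conventional reading: 7+ is reassuring, 4-6 is moderately abnormal,
# and 0-3 signals the need for immediate intervention.
print(apgar_score(appearance=1, pulse=2, grimace=1, activity=2, respiration=2))
# prints 8
```

Five quick observations, one addition, one threshold: no diagnostic judgement required, and yet it outperforms the unaided expert.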
What are the implications for investors?
Silver's results are having a ricochet effect through the political domain, severely embarrassing the so-called "experts". It's not hard to see the analogies with the stock market. The markets are replete with pundits, analysts and sooth-saying forecasters providing a running commentary on what today's events mean for the future.
If you read the newspapers discussing company results, there's invariably a quote from some analyst from such and such investment bank pontificating on what these results will mean for the company's future performance. Bulletin boards heave with retail investors swapping or dissecting the supposedly "expert" comment of one analyst (which is almost invariably a quote supporting their own position - see this discussion on the dangers of confirmation bias). It's part of human nature that we have a persistent belief in the value of listening to these experts.
But, as we discuss here, the evidence that these experts have any predictive value whatsoever is scant to non-existent. David Dreman discusses the appalling track record of analyst forecasting in his book, Contrarian Investment Strategies. More recent work by James Montier found that:
In the US, the average 24-month forecast error is 93%, and the average 12-month forecast error is 47% over the period 2001-2006. The data for Europe are no less disconcerting. The average 24-month forecast error is 95%, and the average 12-month forecast error is 43%. To put it mildly, analysts don't have a clue about future earnings.
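Montier's passage doesn't spell out the error metric behind those numbers, but a common way to express forecast error as a percentage is the absolute gap between forecast and outcome, scaled by the outcome. A quick illustrative calculation (the definition is an assumption on our part, and the EPS figures are invented):

```python
def forecast_error_pct(forecast, actual):
    """Absolute forecast error as a percentage of the realised figure:
    |forecast - actual| / |actual| * 100. (One common definition; the
    exact metric behind Montier's numbers is not given in the quote.)"""
    return abs(forecast - actual) / abs(actual) * 100

# Invented example: analysts pencil in EPS of $2.00 two years out,
# and the company actually delivers $1.05.
print(round(forecast_error_pct(2.00, 1.05), 1))  # prints 90.5
```

An average error of that magnitude means the typical two-year forecast is out by nearly the size of the thing being forecast.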
Since last December, we've been running our own quantitative models (based implicitly on some expert thinking but mostly on cold, hard ratios and statistics). We've been tracking their performance against the FTSE 100 and comparing the results against analyst picks. It's early days, of course, but, increasingly, our view is that most of the intellectual output of this vast engine called the City is just nonsense punditry. You might as well be reading tea leaves. There are exceptions, of course. The quant team from Societe Generale, former home of James Montier, clearly understands and leverages this kind of thinking, but for the most part, the views of most analysts are just not worth the paper they are written on.
Algorithms win the day!
James Montier's excellent piece - An Ode to Quant - discusses this issue in some detail and concludes that investing is unlikely to be an exception to the general superiority of statistics over intuition and judgement. He notes, however, some significant obstacles to widespread acceptance of this view:
Firstly, the fear of technological unemployment. This is obviously an example of a self-serving bias. If, say, 18 out of every 20 analysts and fund managers could be replaced by a computer, the results are unlikely to be welcomed by the industry at large. Secondly, the industry has a large dose of inertia contained within it. It is pretty inconceivable for a large fund management house to turn around and say they are scrapping most of the processes they had used for the last 20 years, in order to implement a quant model instead.
Now, of course, we shouldn't swap one set of flawed experts for another, i.e. statisticians. A model is only as good as its underlying inputs, but this is surely an opportunity to reflect on the efficacy and value of checklists and algorithms in addressing some of the predictable flaws in human decision-making. Users of Stockopedia PRO will already know that these kinds of tools are a key part of our feature set.
Here's a nice graphic (via @cosentino on Twitter) that illustrates the power and efficacy of Silver's statistical forecasting approach.
Once we've achieved that kind of forecasting accuracy for the FTSE 100, we'll be sure to let you know! :-)