Sunday, December 9, 2012

Philip Tetlock - Truth Is Somewhere In-between Nassim Taleb & Nate Silver

Currently, I am a participant (though not as active as I would like to be) in Philip Tetlock's Good Judgment Project - a forecasting tournament; so far it has been an enlightening and humbling learning experience.

His book Expert Political Judgment: How Good Is It? How Can We Know? is a must-read, and the full interview on Edge is here.

The question becomes: is it possible to set up a system for learning from history that's not simply programmed to avoid the most recent mistake in a very simple, mechanistic fashion? Is it possible to set up a system for learning from history that actually learns in our sophisticated way, one that manages to bring down both false positives and false negatives to some degree? That's a big question mark.

Nobody has really systematically addressed that question until IARPA, the Intelligence Advanced Research Projects Activity, sponsored this particular project, which is very, very ambitious in scale. It's an attempt to address the question of whether you can push political forecasting closer to what philosophers might call an optimal forecasting frontier. An optimal forecasting frontier is a frontier along which you just can't get any better. You can't get false positives down anymore without having more false negatives. You can't get false negatives down anymore without having more false positives. That's just the optimal state of prediction unless you subscribe to an extremely clocklike view of the political, economic, and technological universe. If you subscribe to that, you might believe the optimal forecasting frontier is 1.0 and that godlike omniscience is possible. You never have to tolerate any false positives or false negatives.
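
To make the trade-off concrete, here is a toy sketch of my own (not anything from the project itself): for a fixed set of probabilistic forecasts, lowering the decision threshold cuts false negatives only by admitting more false positives, and the curve you trace out is the kind of frontier Tetlock is describing. The forecasts and outcomes are invented for illustration.

```python
# Toy forecasting frontier: with fixed forecast skill, sweeping the decision
# threshold trades false positives against false negatives.
forecasts = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]  # stated probability the event occurs
outcomes  = [1,   1,   0,   1,   0,   1,   0,   0]    # 1 = event occurred, 0 = it did not

for threshold in [0.1, 0.3, 0.5, 0.7, 0.9]:
    calls = [p >= threshold for p in forecasts]        # "yes" calls at this threshold
    false_pos = sum(c and not o for c, o in zip(calls, outcomes))
    false_neg = sum((not c) and o for c, o in zip(calls, outcomes))
    print(f"threshold {threshold:.1f}: false positives={false_pos}, false negatives={false_neg}")
```

A better forecaster pulls the whole curve inward; the frontier is the point past which no re-tuning of the threshold helps anymore.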

There are very few people on the planet, I suspect, who believe that to be true of our world. But you don't have to go all the way to the cloudlike extreme and say that we are all just radically unpredictable. Most of us are somewhere in between clocklike and cloudlike, but we don't know for sure where we are in that distribution, and IARPA is helping us figure out where we are.

It's fascinating to me that there is a steady public appetite for books that highlight the feasibility of prediction, like Nate Silver's, and there's a deep public appetite for books like Nassim Taleb's The Black Swan, which highlight the apparent unpredictability of our universe. The truth is somewhere in between, and IARPA-style tournaments are a method of figuring out roughly where we are in that conceptual space at the moment, with the caveat that things can always change suddenly.

I recall Daniel Kahneman having said on a number of occasions that when he's talking to people in large organizations, private or public sector, he challenges the seriousness of their commitment to improving judgment and choice. The challenge takes the following form: would you be willing to devote one percent of your annual budget to efforts to improve judgment and choice? And to the best of my knowledge, he hasn't had any takers yet.

Interesting effect of algorithms on humans:

In our tournament, we've skimmed off the very best forecasters in the first year, the top two percent. We call them "super forecasters." They're working together in five teams of 12 each and they're doing very impressive work. We're experimentally manipulating their access to the algorithms as well. They get to see what the algorithms look like, as well as their own predictions. The question is: do they do better when they know what the algorithms are, or do they do worse?
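
One simple way to make that question measurable (my own sketch, not necessarily how the tournament scores it) is to score the same forecaster's probability judgments against outcomes with and without sight of the algorithm, using a proper scoring rule such as the Brier score. All the numbers below are invented.

```python
# Toy sketch: compare a forecaster's accuracy with and without seeing the algorithm.
# The Brier score is the mean squared error of stated probabilities against
# the 0/1 outcomes; lower is better.

def brier(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes            = [1, 0, 1, 1, 0]
solo_forecasts      = [0.7, 0.4, 0.6, 0.8, 0.3]   # forecaster alone
with_algo_forecasts = [0.8, 0.3, 0.7, 0.8, 0.2]   # after seeing the algorithm's output

print("solo      Brier:", brier(solo_forecasts, outcomes))
print("with algo Brier:", brier(with_algo_forecasts, outcomes))
```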



