Superforecasting – The Art of Human Prediction

In this article we look at the art of forecasting and human prediction, focusing on the book “Superforecasting”, in which Tetlock and Gardner present a study of what makes human prediction more reliable and accurate.

The study in question was conducted in collaboration with the US intelligence community. It involved around 3,000 participants (you can check out the process at www.gjopen.com) who were asked to predict a series of future events posed to them by the intelligence community. Each participant assigned a probability to each event, and the project scored the participants in hindsight, according to the number of successful predictions they made. If you assigned a probability of 80% to a future event A that actually materialised, you would receive a higher hindsight score than a colleague who predicted the same event with 60% probability.

This scoring is based on a statistical measure called the “Brier score” (https://en.wikipedia.org/wiki/Brier_score). Of course, the ground truth in this case could be established only in hindsight, so the experiment is arguably more about “backcasting” than “forecasting”: accumulated prior knowledge could not accurately determine what the “right” answer should be, beyond simply averaging the whole crowd’s predictions.
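To make the scoring concrete, here is a minimal sketch of the Brier score for binary events, using the 80%/60% pair of forecasters from above (the function name and example values are illustrative, not taken from the study’s code):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes.

    forecasts: probabilities assigned to the events (0.0 to 1.0)
    outcomes:  1 if the event occurred, 0 otherwise
    Lower is better; a perfect forecaster scores 0.0.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# The event occurred, so the forecaster who said 80% gets a better
# (lower) Brier score than the one who said 60%:
confident = brier_score([0.8], [1])   # ≈ 0.04
cautious = brier_score([0.6], [1])    # ≈ 0.16
print(confident, cautious)
```

Note that the score rewards calibrated confidence symmetrically: had the event not occurred, the 60% forecaster would have scored better than the 80% one.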

The experiment ran for four years (2011-2015), and more than 500 questions were posed to the participants. Afterwards, Tetlock – a professor at the Wharton School – and his team studied the “superforecasters”, the best-performing participants, to see what made them so successful at predicting the events in question. One of the most interesting results was that subject-matter experts did not perform better than non-experts. Overall, the superforecasters were about 30% more accurate than the experts.

Tetlock’s research is essentially an expansion of the usual “crowdsourcing”, in which a large crowd votes and the result is summarised as a simple average. We wrote about crowdsourcing previously in our blog post “The Epistemology of Crowd Sourcing”. Tetlock set out to differentiate people in the crowd according to their ability to predict the future. Thus, he did not base the predictions on a “democracy” of equal voices, but managed to create, in hindsight, a meritocracy of voices, in which the superforecasters are given a stronger voice than others. It turns out that these “superforecasting” capabilities are not based on one’s expertise, but rather on certain cognitive habits that Tetlock identified through long interviews he and his research team conducted with the superforecasters.

So what were the results?

In Tetlock’s words: “So why did one group do better than the other? It wasn’t whether they had PhDs or access to classified information. Nor was it what they thought – whether they were liberals or conservatives, optimists or pessimists. The critical factor was how they thought” (Superforecasting, p. 68). Tetlock refers to Isaiah Berlin’s famous distinction between the fox and the hedgehog: the fox knows many things, while the hedgehog knows one big thing.

  • The people who think like a fox beat the hedgehogs. In fact, the hedgehogs did slightly worse than random guessing!
  • Participants who were part of a group, thinking and debating together, were generally better than individual thinkers.
  • People who responded to new data and new arguments and were able to change their mind, performed better than people who stuck to their initial opinion, no matter what changes occurred in the world. “Superforecasters constantly look for other views they can synthesise into their own” (ibid, p. 123). They explain their view to their teammates and ask them to criticise it.
  • Furthermore, superforecasters embrace probabilistic thinking: they never proclaim that something “was meant to happen”. They realise that reality is engulfed in uncertainty (ibid, p. 152). They are therefore prone to lengthy postmortem discussions with teammates, trying to learn from both their successes and their failures (ibid, p. 186).

Tetlock closes his study with a quote from David Ferrucci, who led the development of IBM Watson: “What I want is that human expert paired with a computer to overcome the human cognitive limitations and biases” (Superforecasting, p. 23).

 

Professor Philip Tetlock

How does this apply to any other business trying to predict the future of its market and product?

As mentioned above, Tetlock’s research demonstrates that subject-matter experts are not necessarily the best people to analyse how the business will develop. The findings suggest that a more inclusive process yields much better predictions and decisions. Each business should look for its own superforecasters, internally and, as far as possible, externally. After all, the most important decisions in the business world rest on predicting future changes in the markets, both in terms of customers’ needs and in terms of competitors’ decisions.

A mid-size startup constantly needs a better understanding of what is about to change in its target market: will customers move to cloud services? Will they demand more on the data-analytics side of the product? What kind of pricing will emerge from the competition? None of these questions has a definite answer; they should be treated as probabilistic predictions that drive decisions on product-feature prioritisation and go-to-market strategy.

We at Ment.io allow businesses to do exactly that, and more. Ment.io is designed to enable a streamlined and inclusive discussion of future business problems and the decisions to be made. Ment.io’s proprietary algorithms analyse the differing views and score their probability, not only in hindsight but also with foresight. Ment.io creates an environment of meritocracy, in contrast to the usual communication tools, which are democratic in nature. The scoring mechanism is Bayesian and assigns a level of expertise to each participant. Tetlock’s hindsight Brier scoring can be used as a feedback mechanism to improve the weights assigned to particular groups and individuals among the participants. Thus, Ment.io serves as a sophisticated “people analytics” tool that can surface the best predictors within a company, accelerating collective thinking and increasing the credibility of the decisions taken. It will save you time and many unintended mistakes. Start your next discussion on Ment.io and see how you can improve your ability to solve your main business problems quickly.
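As an illustration of the general idea of feeding hindsight Brier scores back into a meritocratic aggregation – this is a hypothetical sketch, not Ment.io’s actual algorithm – one could weight each forecaster’s probability by the inverse of their historical Brier score instead of taking a plain average:

```python
def weighted_forecast(forecasts, past_brier_scores, eps=1e-6):
    """Aggregate probabilities, giving more weight to historically accurate forecasters.

    forecasts:         each forecaster's probability for the current event
    past_brier_scores: each forecaster's historical Brier score (lower = better)
    eps:               guards against division by zero for a perfect record
    """
    weights = [1.0 / (b + eps) for b in past_brier_scores]
    return sum(w * p for w, p in zip(weights, forecasts)) / sum(weights)

# A forecaster with a strong track record (Brier 0.05) pulls the aggregate
# toward her 0.9, well above the plain average of 0.6:
print(weighted_forecast([0.9, 0.3], [0.05, 0.45]))
```

This captures the shift from a “democracy” of equal voices to a meritocracy of voices: as new outcomes resolve, the historical scores update and the weights follow.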
