Another Study Finds Social Scientists No Better At Forecasting Than Laypeople
Whether we should defer to ‘experts’ was a major theme of the pandemic.
Back in July, I wrote about a study that looked at ‘expert’ predictions and found them wanting.
The authors asked both social scientists and laymen to predict the size and direction of social change in the U.S. over a 6-month period.
Overall, the former group did no better than the latter – they were slightly more accurate in some domains, and slightly less accurate in others.
A new study (which hasn’t yet been peer-reviewed) carried out a similar exercise, and reached roughly the same conclusion.
Igor Grossman and colleagues invited social scientists to participate in two forecasting tournaments that would take place between May 2020 and April 2021 – the second six months after the first. Participants entered in teams, and were asked to forecast social change in 12 different domains.
All teams were given several years' worth of historical data for each domain, which they could use to hone their forecasts. They were also given feedback at the six-month mark (i.e., just prior to the second tournament).
The researchers judged teams' predictions against two benchmarks: the average forecasts from a sample of laymen; and the best-performing of three simple models (a historical average, a linear trend, and a random walk). Recall that another recent study found that social scientists can't predict better than simple models.
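To make those benchmarks concrete, here is a minimal sketch, in Python, of what the three naive models could look like when applied to a monthly time series. This is not the study's code; the function names and the synthetic data are illustrative assumptions.

```python
# A minimal sketch (not the study's actual code) of the three simple
# benchmark forecasts described above, applied to a monthly time series.
import numpy as np

def historical_average(history, horizon):
    """Forecast every future month as the mean of the historical series."""
    return np.full(horizon, np.mean(history))

def linear_trend(history, horizon):
    """Fit a straight line to the history and extrapolate it forward."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    future_t = np.arange(len(history), len(history) + horizon)
    return intercept + slope * future_t

def random_walk(history, horizon):
    """Naive 'no change' forecast: repeat the last observed value."""
    return np.full(horizon, history[-1])

# Example: 40 months of synthetic historical data, forecast 12 months ahead.
rng = np.random.default_rng(0)
history = np.cumsum(rng.normal(0.1, 1.0, size=40)) + 50
for name, model in [("historical average", historical_average),
                    ("linear trend", linear_trend),
                    ("random walk", random_walk)]:
    forecast = model(history, horizon=12)
    print(name, np.round(forecast[:3], 2))
```

In the study, whichever of these simple models performed best in a given domain served as the benchmark the social scientists' forecasts were compared against.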
Grossman and colleagues' main result is shown in the chart below. Each coloured symbol shows the average forecasting error for the 'experts', the laymen and the simple models, respectively (the further to the right, the greater the error and the less accurate the forecast).
Although the dark blue circles (representing the ‘experts’) were slightly further to the left than the orange triangles (representing the laymen) in most domains, the differences were small and not statistically significant – as indicated by the overlapping confidence intervals. What’s more, the light blue squares (representing the simple models) were even further to the left.
In other words: the ‘experts’ didn’t do significantly better than the laymen, and they did marginally worse than the simple models.
The researchers then analysed predictors of forecasting accuracy among the teams of social scientists. They found that teams whose forecasts were data-driven did better than those that relied purely on theory. Other predictors of accuracy included prior experience of forecasting tournaments and the use of simple rather than complex models.
Why did the ‘experts’ fare so poorly? Grossman and colleagues give several possible reasons: a lack of adequate incentives; the fact that social scientists are used to dealing with small effects that manifest under controlled conditions; that they're used to dealing with individuals and groups, not whole societies; and that most social scientists aren't trained in predictive modelling.
Social scientists might be able to offer convincing-sounding explanations for what has happened. But it’s increasingly doubtful that they can predict what’s going to happen. Want to know where things are headed?
Rather than ask a social scientist, you might be better off averaging a load of guesses, or simply extrapolating from the past.
See more here: dailysceptic.org
Howdy
It seems the more ‘expert’ one is, the more ‘dyed in the wool’ one is. Objectivity and openness are lost along the way.
I’ve attempted debate with quite a few people of long standing in certain areas. They don’t like to lose: they treat their years of knowledge as unfailing, and even when presented with the facts they will look for another reason to avoid being wrong.