Why we can’t trust academic journals
Hundreds of thousands of scientists took to the streets around the world in April. “We need science because science tells the truth. We are those who can fight the fake news,” a friend who participated in one of the March for Science rallies told me. I really wish this were true. Sadly, much evidence suggests otherwise.
The idea that the same experiment will always produce the same result, no matter who performs it, is one of the cornerstones of science’s claim to truth. However, more than 70% of researchers (pdf) who took part in a recent study published in Nature have tried and failed to replicate another scientist’s experiment. Another study found that at least 50% of life science research cannot be replicated. The same holds for 51% of economics papers (pdf).
The findings of these studies resonate with the gut feeling of many in contemporary academia – that a lot of published research findings may be false. Just like any other information source, academic journals may contain fake news.
Some of those who participate in the March for Science movement idealise science. Yet science is in a major crisis. And we need to talk about this instead of claiming that scientists are armed with the truth.
Ninety-seven per cent of March for Science participants (pdf) want policymakers to consider scholarly evidence when drafting their policies. I, too, used to think that influencing policymakers was part of a modern academic’s job description. Now I am less confident in the validity of many research findings.
There are multiple reasons for the replication crisis in academia – from accidental statistical mistakes to sloppy peer review. However, many scholars agree (pdf) that the main reason for the spread of fake news in scientific journals is the tremendous pressure in the academic system to publish in high-impact journals.
These high-impact journals demand novel and surprising results. Unsuccessful replications are generally considered dull, even though they make important contributions to scientific understanding. Indeed, 44% of scientists (pdf) who carried out an unsuccessful replication were unable to publish it.
I have personal experience of this: my unsuccessful replication of a highly cited study has just been rejected by a high-impact journal. This is problematic for my career since my contract as an assistant professor details exactly how many papers I need to publish per year and what kind of journals to target. If I meet these performance indicators, my career advances. If I fail to meet them, my contract will be terminated 19 months from now.
This up-or-out policy encourages scientific misconduct. Fourteen per cent of scientists (pdf) claim to know a scientist who has fabricated entire datasets, and 72% say they know one who has indulged in other questionable research practices such as dropping selected data points to sharpen their results.
Read more at The Guardian