Is Most Published Research Wrong? Yep. Here’s Why

One of my students—my heart soars like a hawk!—sent me the video linked below. I’ll assume you’ve watched it, as my comments make reference to it.

About the dismal effects of p-values, the video already does a bang-up job. Plus, we’re sick of talking about them, so we won’t. I’ll only mention that the same kinds of mistakes are made using Bayes factors; only BFs aren’t used by most researchers, so few see them.

Some of the recommended solutions that reduce (nobody can eliminate) the great flood of false-positive results, like pre-registration of trials, are fantastic and should be implemented. Pre-registration only works for designed trials, though, so it can’t be used (or trusted) for ad hoc studies, of which there are many.

Reducing the drive for publication won’t happen, for university denizens must publish or they do, they really do, perish. Publishing negative results is also wise, but it will never be alluring. You won’t get your name in the paper for saying, “Nothing to see here. Move along.”

A partial—not the; there is no the—solution, one I propose in Uncertainty: The Soul of Modeling, Probability & Statistics, is to treat probability models the same way engineers treat their models.

An engineer proposes a new kind of support to hold up a bridge. He has a theory, or model, that says, “Do X and Y will happen”, which is to say, “Build the bridge support according to this theory (X), and the bridge won’t fall (Y)”.

How can we tell if the theory, i.e. the support, works? There is only one true test: build the bridge and see if it stands or falls.

Oh, sure, we can and should look at the theory X and see how it comports with other known facts about bridges, physics, and land forms, all the standard stuff that one normally examines when interested in Y, standing bridges. There may be some standard knowledge, Z, which proves, in the most rigorous and logical sense of the word “proves”, that, given X, Y cannot be.

But if we don’t have this Z, this refutatory knowledge, then the only way we can tell If X, Then Y, is to do X and see if Y. In other words, absent some valid and sound disproving argument, the only way we can tell if the bridge will stand is to try it. This is true even if we have some Z which says that, given the theory X, Y is not impossible but merely unlikely.

Have it, yet? I mean, have you figured the way it works for statistics yet?

The idea is this: abandon the new ways of doing things—hypothesis testing, parameter-based statements (which are equivalent here to saying things only about X and not about Y given X)—and return to the old way of building the models Pr(Y | X) and then testing these models against the real world.

Out with the new and in with the old!

Examples? Absolutely.

As per the video, Y = “Lose weight” and X = “Eating chocolate”. The claim that seems to have been made by many was that

Pr(Y = “Lose weight” | X = “Eating chocolate”) = high.

But that’s not what the old ways were saying; they only seemed to be saying that. Instead, the old ways said things about p-values or parameter estimates or other things which have no bearing on the probability Pr(Y|X), which is all we care about. It’s all anybody wanting to know whether they should eat chocolate in order to lose weight cares about, anyway.

Assume you’re the researcher who thinks eating chocolate makes one lose weight. Perform some experiment, collect data, form some theory or model, and release into the wild Pr(Y | X), where the X will have all the suppositions and conditions necessary to realize the model, such as (I’m making this up) “amount of chocolate must be this-many grams”. And that’s it.
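To make the step concrete, here is a minimal sketch of what “releasing Pr(Y | X)” could look like. Every number is invented, just as the grams condition above was, and the choice of Laplace’s rule of succession (a uniform prior on the chance of Y) is mine, not a prescription from the book:

```python
# A minimal sketch (all numbers invented) of releasing Pr(Y | X)
# as a checkable prediction instead of a p-value.
# Suppose the experiment: n subjects satisfied X (ate the stated
# grams of chocolate daily), and k of them lost weight (Y).
# Under a uniform Beta(1,1) prior on the chance of Y, the posterior
# predictive probability for a new subject is Laplace's rule of
# succession: Pr(Y | X, data) = (k + 1) / (n + 2).

def predictive_pr(k: int, n: int) -> float:
    """Posterior predictive Pr(Y | X, data) under a uniform prior."""
    return (k + 1) / (n + 2)

# Invented results: 11 of 15 chocolate eaters lost weight.
# This single number is what the researcher publishes.
print(round(predictive_pr(11, 15), 3))
```

The point is not the particular prior; it is that the researcher’s output is a probability of Y given X that anyone else can hold against future data.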

Then the whole world can take the predictions made by Pr(Y|X) and check them against reality.
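Here is a sketch of what that checking could look like. The fresh data are invented, and the Brier score is my illustrative choice of scoring rule, not one named in the post:

```python
# A sketch (data invented) of how anyone can check a published
# Pr(Y | X) against reality: gather fresh subjects who satisfy X,
# record whether Y happened, and score the stated probability.

def brier_score(p: float, outcomes: list[int]) -> float:
    """Mean squared error between stated probability p and 0/1 outcomes."""
    return sum((p - y) ** 2 for y in outcomes) / len(outcomes)

claimed_p = 0.7  # the researcher's published Pr(Y | X) (invented)
new_outcomes = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # invented: 3 of 10 lost weight

print(round(brier_score(claimed_p, new_outcomes), 3))  # researcher's score
print(round(brier_score(0.3, new_outcomes), 3))        # a skeptic's Pr(Y|X)
```

Lower scores are better, so with these invented outcomes the skeptic’s model beats the researcher’s; usefulness gets decided by reality, not by a null hypothesis.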

That model may be useful to some people, and it may be useless to others. Scarcely any model is one-size-fits-all, but the new ways of doing things (hypothesis tests, parameters) made this one-size-fits-all assumption. After all, when you “reject a null” you are saying the “alternate” is universally true. But we don’t care about that. We only care about making useful predictions, and usefulness is not universal.

The beauty of Uncertainty’s approach is its simplicity and practicality. About practicality we’ve already spoken. Simplicity? A model-builder, or theoretician, or researcher, doesn’t have to say anything about anything, doesn’t have to reveal his innermost secrets, he can even hide his data, but he does have to say Pr(Y|X), which anybody can check for themselves. Just make the world X (eat so much chocolate) and see if Y. And anybody can do that.

Read more at wmbriggs.com
