Scientific Failure and Self Experimentation

In 2005, John P. A. Ioannidis published "Why Most Published Research Findings Are False" in PLOS Medicine. It is the most cited paper in that journal's history, and it has prompted much discussion in the media. The paper presented a theoretical model, but its argument has since been well supported empirically, as Ioannidis explained in an interview with Julia Belluz:

“There are now tons of empirical studies on this. One field that probably attracted a lot of attention is preclinical research on drug targets, for example, research done in academic labs on cell cultures, trying to propose a mechanism of action for drugs that can be developed. There are papers showing that, if you look at a large number of these studies, only about 10 to 25 percent of them could be reproduced by other investigators. Animal research has also attracted a lot of attention and has had a number of empirical evaluations, many of them showing that almost everything that gets published is claimed to be “significant”. Nevertheless, there are big problems in the designs of these studies, and there’s very little reproducibility of results. Most of these studies don’t pan out when you try to move forward to human experimentation.

“Even for randomized controlled trials [considered the gold standard of evidence in medicine and beyond] we have empirical evidence about their modest replication. We have data suggesting only about half of the trials registered [on public databases so people know they were done] are published in journals. Among those published, only about half of the outcomes the researchers set out to study are actually reported. Then half — or more — of the results that are published are interpreted inappropriately, with spin favoring preconceptions of sponsors’ agendas. If you multiply these levels of loss or distortion, even for randomized trials, it’s only a modest fraction of the evidence that is going to be credible.”

This is part of the replication crisis, which has been known about for decades but rarely acknowledged or taken seriously. And the crisis isn't limited to single studies: Ioannidis wrote that, "Possibly, the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted" (from a paper reported on in the Pacific Standard). The crisis cuts across numerous fields, from economics and genetics to neuroscience and psychology. But to my mind, medical research stands out. Evidence-based medicine is only as good as the available evidence, and it has been "hijacked to serve agendas different from what it originally aimed for," as Ioannidis put it. (A great book on this topic, by the way, is Richard Harris's Rigor Mortis.) Studies done or funded by drug companies, for example, are more likely to report positive results for efficacy and negative results for side effects. And because the government has severely decreased public funding since the Reagan administration, much of today's research is tied to big pharma. In a Retraction Watch interview, Ioannidis said:

“Since clinical research that can generate useful clinical evidence has fallen off the radar screen of many/most public funders, it is largely left up to the industry to support it. The sales and marketing departments in most companies are more powerful than their R&D departments. Hence, the design, conduct, reporting, and dissemination of this clinical evidence becomes an advertisement tool. As for “basic” research, as I explain in the paper, the current system favors PIs who make a primary focus of their career how to absorb more money. Success in obtaining (more) funding in a fiercely competitive world is what counts the most. Given that much “basic” research is justifiably unpredictable in terms of its yield, we are encouraging aggressive gamblers. Unfortunately, it is not gambling for getting major, high-risk discoveries (which would have been nice), it is gambling for simply getting more money.”

I've become familiar with this collective failure through reading on diet and nutrition. Some of the key figures in that field, most notably Ancel Keys, were either intentionally fraudulent or simply bad at science. Yet the basic paradigm of dietary recommendations that Keys instituted remains in place, and the fact that he was so influential demonstrates the sad state of affairs. Ioannidis has also covered this area and come to similarly dire conclusions. Along with Jonathan Schoenfeld, he considered the question "Is everything we eat associated with cancer?"

“After choosing fifty common ingredients out of a cookbook, they set out to find studies linking them to cancer rates – and found 216 studies on forty different ingredients. Of course, most of the studies disagreed with each other. Most ingredients had multiple studies claiming they increased and decreased the risk of getting cancer. Most of the statistical evidence was weak, and meta-analyses usually showed much smaller effects on cancer rates than the original studies.”
(Alex Reinhart, What have we wrought?)

That is a serious and rather personal issue, not an academic exercise. There is so much research out there that is bad, or else confused and conflicting, that it is nearly impossible for the average person to wade through it all and come to a firm conclusion. Researchers and doctors are as mired in it as the rest of us. Doctors, in particular, are busy people who typically read little beyond short articles and literature reviews, and even those they likely only skim in spare moments. Besides, most doctors aren't trained in research methods and statistics. Even if they were better educated and informed, the science itself is in a far from optimal state, and one can find support for all kinds of conclusions. Take the conflict between two prestigious British journals, the Lancet and the BMJ: the former argues for statin use, while the latter is more circumspect. Underlying the disagreement over efficacy and side effects are diverse, overlapping issues and confounders involving cholesterol, inflammation, atherosclerosis, and heart disease.

Recently, my dad went to his doctor, who said that research in respectable journals strongly supports statin use. Sure, that is true. But the opposite is equally true: other respectable journals don't support wide use of statins. It depends on which journals one chooses to read. My dad's doctor didn't have time to discuss the issue, such being the nature of the US medical system. So, probably not wanting to get caught up in a fruitless debate, the doctor agreed to my dad stopping statins to see what happens. When researchers fail to reach consensus, the patient is left to be a guinea pig in his own personal experiment. Because of the lack of good data, self-experimentation has become a central practice in diet and nutrition. With so many opinions out there, anyone who cares about their health is forced to try different approaches and find out what seems to work, even though this method is open to many pitfalls and hardly guarantees success. But the individual dealing with a major health concern often has no other choice, at least not until the science improves.

This isn't necessarily a reason for despair. At least a public debate is now happening. Ioannidis, among others, doesn't see the solution as especially difficult. (Psychology, despite its own failings, might end up being key in improving research standards. Organizations are also being set up to promote better standards, including the Nutrition Science Initiative, started by the science journalist Gary Taubes, someone often cited by those interested in alternative health views.) We simply need to require greater transparency and accountability in the scientific process. That is to say, science should be democratic. The failure of science is directly related to the failures seen in politics and economics, tied to the powerful forces of big money and other systemic biases. It is not so much a failure as a success toward ulterior motives. That needs to change.

* * *

Many scientific “truths” are, in fact, false
by Olivia Goldhill

Are most published research findings false?
by Erica Seigneur

The Decline Effect – Why Most Published Research Findings are False
by Paul Crichton

Beware those scientific studies—most are wrong, researcher warns
by Ivan Couronne

The Truthiness Of Scientific Research
by Judith Rich Harris

Is most published research really wrong?
by Geoffrey P Webb

Are Scientists Doing Too Much Research?
by Peter Bruce