Most medical research is flawed, says leading medical editor


Twenty years ago last week the statistician Doug Altman published an editorial in the BMJ arguing that much medical research was of poor quality and misleading. In that editorial, entitled “The Scandal of Poor Medical Research,” Altman wrote that much research was “seriously flawed through the use of inappropriate designs, unrepresentative samples, small samples, incorrect methods of analysis, and faulty interpretation.” Twenty years later I fear that things are not better but worse.

Most editorials, like most of everything (including people), disappear into obscurity very fast, but Altman’s editorial is one that has lasted. I was the editor of the BMJ when we published it, and I have cited it many times, including recently. The editorial appeared at the dawn of evidence based medicine, as an increasing number of people realised how much of medical practice lacked evidence of effectiveness and how much research was poor. Altman’s editorial, with its concise argument and blunt, provocative title, crystallised the scandal.

Why, asked Altman, is so much research poor? Because “researchers feel compelled for career reasons to carry out research that they are ill equipped to perform, and nobody stops them.” In other words, too much medical research was conducted by amateurs who were required to do some research in order to progress in their medical careers.

Ethics committees, which had to approve research, were ill equipped to detect scientific flaws, and the flaws were eventually detected by statisticians like Altman, working as firefighters. Quality assurance should be built in at the beginning of research, not at the end, particularly as many journals lacked statistical skills and simply went ahead and published misleading research.

“The poor quality of much medical research is widely acknowledged,” wrote Altman, “yet disturbingly the leaders of the medical profession seem only minimally concerned about the problem and make no apparent efforts to find a solution.”

Altman’s conclusion was: “We need less research, better research, and research done for the right reasons. Abandoning using the number of publications as a measure of ability would be a start.”

Sadly, the BMJ could publish this editorial almost unchanged again this week. Small changes might be that ethics committees are now better equipped to detect scientific weakness and that more journals employ statisticians. These quality assurance methods don’t, however, seem to be working, as much of what is published continues to be misleading and of low quality. Indeed, we now understand that the problem doesn’t arise from amateurs dabbling in research but rather from career researchers.

The Lancet has this month published an important collection of articles on waste in medical research. The collection grew from an article by Iain Chalmers and Paul Glasziou in which they argued that 85% of expenditure on medical research (some $240 billion in 2010) is wasted. In a very powerful talk at last year’s peer review congress, John Ioannidis showed that almost none of the thousands of research reports linking foods to conditions are correct, and that only around 1% of the thousands of studies linking genes with diseases report real linkages. His famous paper “Why most published research findings are false” continues to be the most cited paper in PLoS Medicine.

Ioannidis’s conclusion as to why so much research is poor is similar to Altman’s: “Most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” Researchers publish studies that are too small, conducted over too short a time, and too full of bias, in order to get promoted and secure future funding. An editorial in the Lancet collection on waste in research quotes the 2013 Nobel laureate Peter Higgs describing how he was an embarrassment to his department at Edinburgh University because he published so little. “Today,” he said, “I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” Producing lots of flawed research trumps producing a few studies that change our understanding of the world, as Higgs’s paper did.

Chalmers, Glasziou, and others identify five steps that together lead to 85% of biomedical research being wasted. Firstly, much research fails to address questions that matter. For example, new drugs are tested against placebo rather than against usual treatments. Or the question may already have been answered, but the researchers haven’t undertaken the systematic review that would have told them the research was not needed. Or the research may use outcomes, perhaps surrogate measures, that are not useful.

Secondly, the methods of the studies may be inadequate. Many studies are too small, and more than half fail to deal adequately with bias. Studies are not replicated, and when people have tried to replicate them they have found that most do not have reproducible results.

Thirdly, research is not efficiently regulated and managed. Quality assurance systems fail to pick up the flaws in the research proposals. Or the bureaucracy involved in having research funded and approved may encourage researchers to conduct studies that are too small or too short term.

Fourthly, the research that is completed is not made fully accessible. Half of studies are never published at all, and there is a bias in what is published, meaning that treatments may seem to be more effective and safer than they actually are. Then not all outcome measures are reported, again with a bias towards those that are positive.

Fifthly, published reports of research are often biased and unusable. In trials, about a third of interventions are inadequately described, meaning they cannot be implemented. Half of study outcomes are not reported.

The articles in the Lancet collection concentrate constructively on how waste in research might be reduced and the quality and dissemination of research improved. But it wouldn’t be unfair simply to repeat Altman’s statement of 20 years ago: “The poor quality of much medical research is widely acknowledged, yet disturbingly the leaders of the medical profession seem only minimally concerned about the problem and make no apparent efforts to find a solution.”

I reflect on all this in a very personal way. I wasn’t shocked when we published Altman’s editorial because I’d begun to understand, about five years before, that much research was poor. Like Altman, I thought that was mainly because too much medical research was conducted by amateurs. It took me a while to understand that the reasons went deeper. In January 1994, when we published Altman’s editorial, I was 41 and had confidence that things would improve. In 2002 I spent eight marvellous weeks in a 15th century palazzo in Venice writing a book on medical journals, the major outlets for medical research, and reached the dismal conclusion that things were badly wrong with journals and the research they published. I wondered after the book was published whether I’d struck too sour a note, but now I think it could have been sourer.

My confidence that “things can only get better” has largely drained away, but I’m not a miserable old man. Rather I’ve come to enjoy observing and cataloguing human imperfections, which is why I read novels and history rather than medical journals.

Richard Smith was the editor of the BMJ until 2004 and is director of the United Health Group’s chronic disease initiative.

This article first appeared on MercatorNet.com and is published under a Creative Commons licence.
