Evidence based medicine (EBM) should form the foundation of effective clinical decision making; however, growing unrest—and an awful lot of criticism—suggests the evidence bit of EBM is increasingly part of the problem, and not the solution.
Concerns about quality and rigour in research are leading to a lack of trust in the production, publication, and utilisation of evidence. Des Spence, writing in The BMJ, thinks the situation is so bad that "evidence based medicine is broken," and when an official from the US Food and Drug Administration (FDA) also reports, "The clinical trial system is broken and it’s getting worse," you have to acknowledge that there might be problems with the evidence base.
What exactly are these problems? In the broadest context, they can be categorised into three main areas, although this doesn’t do justice to the extent of the issues: distortion of the research agenda, very poor quality research, and a lack of transparency in published evidence.
Distortion of the research agenda, mainly by commercial decisions, is leading to an ever increasing evidence base that doesn’t meet the needs of patients. With too much focus on the well, and not enough on the sick, a paradoxical situation has arisen whereby medicine is potentially harming the healthy through earlier detection and ever looser definitions of disease.
As an example, if you find yourself constantly late, disorganised, forgetful, and overwhelmed by your responsibilities—which could refer to all of us—you might have adult attention deficit disorder. You will be pleased to know that there are at least four different medications currently available for this condition. You could argue that the pharmaceutical industry is becoming as good at manufacturing diseases as it is at manufacturing drugs, if not better.
While research publications continue to increase—to the point where the notion of keeping up to date is nigh on impossible—the quality is often very poor, if not sometimes outright pitiful. Although there has been a growth in research to promote implementation, my observations—note the low level of my evidence—have been that while clinicians are extremely good at responding to robust evidence, all too often the quality of the evidence is weak and unworthy of implementation.
Poor quality evidence arises when observational data are used to establish treatment effects; when outcome measures are unimportant to patients or, even worse, meaningless for patient care; and when simple steps to account for bias are not incorporated into the research design.
John Ioannidis, in his highly cited publication “Why Most Published Research Findings Are False,” sets out six helpful pointers—worth committing to memory—for judging when research findings are less likely to be true: studies that are small in size; studies with small effect sizes; hypothesis generating experiments; studies with greater flexibility in design, definitions, and outcomes; investigators with conflicts of interest; and, lastly, the hotter a scientific field, the less likely its findings are to be true.
Finally, incomplete reporting, misrepresentation (better known as spin), publication bias, and falsification of data are all presenting huge problems. As examples, Retraction Watch recently reported that BioMed Central “has uncovered about 50 manuscripts in their editorial system that involved fake peer reviewers.” It seems that letting authors provide their own peer reviewers is now off the table. Further to this, half of all medical reporting is subject to spin based on abstracts, which leads to “sexed up” press releases. What is more concerning—and led to the AllTrials campaign—is that up to half of all clinical trials are never published. Along with reporting bias and difficulties with data access, the published research as it stands is a barrier to innovation.
An evidence based approach to clinical practice therefore involves being aware of the evidence, its strengths and weaknesses, its substantial limitations, and the inferences we subsequently make to inform clinical decision making. Moreover, realising that decisions are never based on evidence alone is important for understanding how to use it, despite all its limitations, in practice. This point is made by Trisha Greenhalgh and colleagues, when they say, “research evidence may still be key to making the right decision—but it does not determine that decision,” and similarly by Gordon Guyatt and Victor Montori: “the evidence alone never tells one what to do.”
Distilling the solution begins with recognising that we have a problem. Clinicians require skills to interpret and evaluate the evidence. Errors in interpretation prevent effective implementation and, in some situations, will be dangerous for patients.
If the solution isn’t to be EBM, then I look forward to the alternative suggestions and the jury’s decision at Evidence Live (13-14 April 2015). Abstract submission is open for the 2015 Evidence Live conference [http://evidencelive.org].
Carl Heneghan is professor of EBM at the University of Oxford, director of CEBM, and a GP.
His research interests span chronic diseases, diagnostics, use of new technologies, and investigative work with The BMJ on drugs and devices that you might stumble across in the media. He is also a founder of the alltrials.net campaign.
I declare that I have read and understood the BMJ policy on declaration of interests and I hereby declare the following interests:
Carl Heneghan jointly runs the Evidence Live conference with The BMJ and is a founder of the AllTrials campaign. He has received expenses and payments for his media work from Channel 4, the BBC, FreshOne TV productions, and the Guardian. He has received expenses from the World Health Organization (WHO) and the US FDA, and holds grant funding from the NIHR, the National School of Primary Care Research, the Wellcome Trust, and the WHO.