There is a lot of angst around why research evidence takes so long to penetrate routine clinical practice. My view is that it's a miracle any of it makes it into practice at all, because research-derived information is very much the square peg to clinical practice's round hole.
The gold standard of experimental clinical research is the randomised controlled trial (RCT), a study design created to make the research environment look nothing like the real world of the non-compliant, socially complicated, and comorbid. You probably know all this (otherwise you wouldn't be reading a blog posted on bmj.com), but in my experience few do. My favourite example of this blissful ignorance was in a meeting with someone from the Department of Health of England who insisted that the company I was representing had to do what they asked "because it was based on a randomised trial." That was the full extent of her rationale. I couldn't help thinking about where my taxes were going.
The limitations of trials are brilliantly laid out in a 2009 article by Shaun Treweek and Merrick Zwarenstein entitled "Making trials matter." They discuss the tension between a trial's methodological robustness (internal validity) and its value in the real world (applicability). "An internally valid trial," they believe, "that has poor applicability… is a lost opportunity to influence clinical practice and healthcare delivery."
Despite all this, being able to cite published research has a significant influence on clinicians. A friend of mine conducts market research for pharmaceutical companies. He's stood on the anonymous side of a one-way mirror and watched doctors leaf through marketing material. He says that, without fail, the presence of a citation to an RCT gets their attention and approval, and is the thing most likely to influence their behaviour.
There’s something crazy about a world in which we know research-derived information bears little resemblance to the real world and yet we continue to generate and cite it to influence behaviour.
There are some big hitters out there trying to get people to see the world as it is, including none other than the chair of NICE, Michael Rawlins. In his Harveian Oration of 2008 he said:
“Randomised controlled trials, long regarded as the ‘gold standard’ of evidence, have been put on an undeserved pedestal… Observational studies are also useful and, with care in the interpretation of the results, can provide an important source of evidence.” I understand he reiterated that point in a lecture to the Office of Health Economics.
The pharmaceutical industry has tried to show leadership in this arena by generating information more closely associated with real-life practice. They call it real world data. Although they still have to generate the usual kind of evidence to demonstrate efficacy and safety, they've recognised that real world data "increasingly plays an important role in ensuring that medicines are accepted by national policy makers and are adopted into practice."
We often need small children (or Hans Christian Andersen) to point out what's crazy in this world, but luckily for the egos in healthcare we have Richard Bohmer. In his succinct Perspective in the New England Journal of Medicine he describes the four habits of high-value healthcare organisations. The third and fourth are measurement and self-study. In essence, they collect and analyse real world data: round pegs for round holes.
I was lucky enough to hear Bohmer give a presentation and, when asked what one thing all organisations need to start doing tomorrow to improve care, he said they should start collecting local performance data that local clinicians can identify with and believe in. His view was that research evidence, although important, emerges too slowly and is too abstract to keep up with the demands of real-life practice. Rather than evidence-based medicine, he said, we need to embrace evidence-capturing medicine.
I’m with Treweek, Zwarenstein, Rawlins, and Bohmer; how about you?
Competing interests: I helped launch the journal in which Shaun Treweek and Merrick Zwarenstein's article was published, although I had nothing to do with its peer review, acceptance, or publication. I have no other competing interests except that I provide consultancy to organisations to help them make better use of established knowledge, which can include helping them to generate their own real world data.
Pritpal S Tamber is the director of Optimising Clinical Knowledge Ltd, a consultancy that helps organisations improve how they use established clinical knowledge. He was previously the medical director of Map of Medicine Ltd, a company that creates clinical pathways to help health communities design services. He was the editorial director for medicine at BioMed Central Ltd and the managing director of Medicine Reports Ltd. He has twice been an editor at the BMJ, the first time as the student editor of the Student BMJ.