As a recently trained GP, I am well versed in the ways of evidence-based medicine. It is the bedrock of my medical training and one of the key standards by which my clinical practice is judged by my peers and, more importantly, by my patients. Practising medicine based on the best available scientific evidence is both a professional obligation and best practice. However, it has not always been so, as the iterative art of medicine preceded the evidence-based revolution.
By subjecting medicine to modern standards of scientific rigour, we have established a firm evidence base for clinical practice in less than a generation. Given its centrality to almost every decision made on the clinical frontline, it is not surprising that “the evidence” is treated in many cases as the single most important component of any clinical interaction. The importance attached to the evidence base means that those at the cutting edge of medical research are rewarded in many different ways for their endeavours: academically, financially, and professionally. However, while good scientific evidence unquestionably enriches healthcare, bad evidence and misleading science can cause untold harm.
Sadly, there is growing recognition that the power of evidence-based medicine has been corrupted in some instances. We are now discovering that “bad evidence” is widespread, with bias permeating every stage of the process, from research funding to selective publication. In some cases, evidence that does not support the desired or expected outcomes is suppressed or ignored. The uncertainty created by these revelations shakes everyday clinical practice and leads many clinicians and academics to question the validity of what is treated as dogma in medical schools and postgraduate training.
As a general practitioner trained and immersed in quality improvement, I wonder whether there are lessons we could learn from the cracks now apparent in healthcare’s evidence-based foundations.
The quality improvement movement continues to gather pace through a hands-on, grassroots network. While this approach belongs to the frontline, there are those who believe it is imperative to establish quality improvement on a firm academic and scientific footing so that it can truly become part of mainstream, conventional healthcare. To achieve this, it is essential to publish quality improvement work and to establish the statistical significance of improvement efforts at scale.
However, as this type of improvement reporting gains popularity and repute, I worry that there could be choppy waters ahead. If the same incentives that have cast doubt on evidence-based medicine begin to apply to academic quality improvement work, are we doing everything we can to pre-empt the problems that may lie ahead? What does bad improvement science look like? What could its impact be? And, crucially, how do we guard against it, to avoid reputational damage and doubt about the effectiveness of quality improvement in the future?
As a starting point, it is necessary to remind all improvement enthusiasts to maintain a critically vigilant eye when examining and evaluating improvement work. As with all scientific endeavours, behind every improvement initiative there is a story. Positive results and accomplishments, distilled into a few short lines, can make improvement work sound easy, and it is worryingly rare to hear anything about our improvement failures. To protect the potency, validity, and truthfulness of our fledgling science, we should always maintain space for the details of the story, be it a positive or a negative one. Being honest about the limitations of our improvement efforts, combining measurement with statistically valid tools, and being wary of over-claiming causal effects will all help to maximise learning and minimise bias.
When I coach and mentor aspiring quality improvement enthusiasts, it never ceases to amaze me just how complicated, arduous, and difficult improvement journeys can be. The improvement journey takes courage, skill, and a whole lot of heart. It happens in real-world settings, without many of the checks and controls of conventional science, and I hope that this “real-worldness” is a protective influence rather than a corruptive one.
John Brennan is a full time GP in North Dublin and South County Kilkenny, Ireland, and was recently appointed Quality Improvement Faculty with the Royal College of Physicians of Ireland (RCPI).
Acknowledgements: Thank you to Cat Chatfield for discussing and commenting on this article.