By Dr. Geoffrey Modest
The journal Evidence-Based Medicine (that’s the one that posts my blogs, part of BMJ) just came out with an interesting article on the biases inherent in evidence-based medicine (EBM), which can ultimately distort its conclusions (see Seshia SS, et al. Evid Based Med 2016; 21: 41). They reference a 2014 BMJ analysis of EBM, noting its pluses and minuses (see Greenhalgh T. BMJ 2014;348:g3725). The pluses: EBM has been around for 20 years, has led to the development of more systematic reviews such as those of the Cochrane Collaboration, as well as a slew of guidelines grounded more firmly in explicit methodological criteria, and in many ways has raised the bar for conducting rigorous studies. But there are several minuses which are important to understand in order to interpret the results. Per the 2014 article (with some of my comments embedded):
- Distortion of the evidence-based brand: by this they mean that drug and medical supply companies have played such a pivotal role in designing research studies that they are able to push surrogate markers as the important outcomes (e.g., A1C in diabetics), define inclusion and exclusion criteria to best show efficacy (though these may really undercut the applicability of the results to regular old patients), and selectively publish positive studies
- Too much evidence/too many guidelines: citing a study from 2005 (when there were many fewer guidelines than now), in which 18 patients with 44 diagnoses were admitted over a 24-hour period; the national guidelines covering those diagnoses ran to 3679 pages, with an estimated reading time of 122 hours. And, I would add, these guidelines may well be inconsistent with each other (e.g., the different blood pressure goals in the American Diabetes Association vs JNC8 guidelines).
- Marginal gains: most of the major therapies have already been found (the low-hanging fruit), e.g. HIV drugs, H pylori treatment, statins. Newer trials are often overpowered, allowing them to find statistically significant results that are not very clinically significant. And, I would add: these studies are often pretty short or stopped early, showing a small absolute benefit over too brief a period to pick up the longer-term harms of therapy
- Overemphasis on following algorithmic approaches: by overemphasizing specific targets (e.g. A1C in diabetics), clinicians may not pay enough attention to the really important patient issues (the depression, domestic violence, and other important social or medical issues in patients’ lives). And incentivizing these mechanical tasks (ordering A1Cs) or dealing with pop-ups and care prompts in electronic medical records may undercut our ability, or the time we spend, to really address the main problems of patients
- Poor fit in those with multimorbidity: many of these EBM studies were done in patients with predominantly one condition (e.g., by excluding those with renal failure, cancer, etc.). Taking care of patients with multiple ongoing diseases raises several issues not addressed in the studies: e.g., drug interactions and polypharmacy (especially an issue as our patient populations get older and accumulate more chronic diseases)
- The 2016 EBM journal article expands on this, developing a framework for understanding the cognitive biases in the medical literature and noting that any given article may contain combinations of biases. They group the biases as follows:
- Conflicts of interest:
- Financial, nonfinancial (e.g. desire for promotion, prestige), and intellectual (driven by strong personal belief that could distort the study)
- Individual or group cognitive biases:
- Self-serving bias (affected by group/organizational motives), confirmation bias (favoring evidence that supports one’s preconceptions), in-group conformity (increased confidence in a decision if in agreement with others, similar to groupthink, where opposing views are discouraged), reductionism (reducing complex or uncertain scenarios into simple ideas and concepts; see further comments below), automation bias (uncritical use of statistical software, decision support systems)
- Group or organizational cognitive biases: scientific inbreeding (being trained in the same school of thought or by the same experts), herd effect (unquestioned acceptance of experts; reinforced by social media)
- Fallacies/logical errors in reasoning: planning fallacy (incorrectly estimating benefits or costs/consequences), sunk cost fallacy (inability to change course of study despite problems, after so much has been invested)
- Ethical violations: ranging from subtle statistical manipulation and selective publication to outright fraud/fabrication. There is typically an associated rationalization and self-deception.
So, a few issues:
- These articles do bring up many of the concerns about EBM, despite the rather large positive of its push to make both the literature and its interpretation more rigorous. Most of the negatives concern inherent biases in designing and conducting studies, but also the difficulty of applying the results to the individual patient in front of you.
- One additional point: just as a rising tide lifts all boats, EBM-based guidelines also elevate “expert opinion”. By this I mean that since we do not have rigorous studies looking at most of the things we do in primary care (or clinical medicine, for that matter), the guidelines contain a lot of expert opinion. It is certainly true that the reviews/guidelines use a very clear and repeatedly articulated grading system reflecting the quality of the studies, but often the take-home message is muddled, combining more definitive and not-so-definitive conclusions together (i.e., many of the subtleties are lost: we remember the target points highlighted in the conclusions or a take-away-message box, which are typically of highly varying quality). And, to make matters worse, a large percentage of the “experts” are under the wing of drug/medical supply companies, much more so in the past 20 years of EBM, so there is increased concern about their “expertise”.
- One interesting sideline here is the general approach of medical studies vs anthropologic studies (this comes from a long-lost article I read in the 1980s), which noted that medical studies are fundamentally reductionist: they look at lots of people and average their individual characteristics. So, for example, in the ASCOT-LLA lipid study (Sever P. Lancet 2003; 361: 1149), a 63.1-year-old person, 94.6% white, 18.9% female, with a 24.5% incidence of diabetes, blood pressure of 164.2/95.0, and a median LDL of 212.7, but excluding those with “clinically important hematological or biochemical abnormalities”, has a 36.0% lower relative risk of developing heart disease after 3.3 years on atorvastatin 10mg (and, of course, we will never see that person, and it is in fact a long and tortuous ideological and practical leap to apply these results to the individual in front of us). The anthropologic approach, by contrast, is to study a few families intensively over 1-2 years and, by really getting to know and understand them, to generalize those findings into larger conclusions about culture. If you ever get a chance to read some of the really old journals from the bowels of large medical school libraries, many of those medical articles were much closer to the anthropologic approach (detailed case studies of a few patients with a particular clinical presentation). Clearly there are advantages and disadvantages to both approaches in terms of understanding disease and treatments, especially given our rather limited understanding of the complexity of human biological/psychosocial systems and their interactions, though EBM aggressively promotes the “reductionist” method. [The clinical case presentations in several of the major medical journals do promote the concept of applying what we have learned in the big studies to individual patients. Perhaps this approach should be fostered more, though with experienced clinicians with zero ties to drug companies, etc.…]
- But the concept here is: one should read the medical literature critically, looking carefully at study design, inclusion/exclusion criteria, and funding sources, and, to the extent we can, assessing the likelihood that these underlying biases have distorted the conclusions.
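One way to keep a critical eye on headline results like the 36.0% relative risk reduction in the ASCOT-LLA example above is to translate relative benefit into absolute terms. A small worked example (the baseline risk here is purely illustrative, not a figure from the trial):

```latex
% Assume, hypothetically, a 3% baseline event risk over the study period,
% combined with a 36% relative risk reduction (RRR).
\begin{align*}
\text{ARR} &= \text{baseline risk} \times \text{RRR} = 0.03 \times 0.36 \approx 0.011 \\
\text{NNT} &= \frac{1}{\text{ARR}} \approx \frac{1}{0.011} \approx 93
\end{align*}
```

So, under these assumed numbers, a headline “36% lower relative risk” corresponds to treating roughly 90 patients for over three years to prevent one event, exactly the kind of subtlety that a take-away-message box tends to flatten.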