Jon Brassey: Threats to traditional systematic reviews

For many years systematic reviews have been placed on a pedestal, relatively free from critical scrutiny. Frequently seen as sitting at the top of the “evidence pyramid”, they have been adopted as the main way of assessing the worth of an individual intervention.

More recently, threats to the pre-eminence of systematic reviews have come from multiple directions. Some authors, myself included, have criticised groups such as Cochrane for creating methods so costly in time and money that too few reviews are done and the majority are not kept up to date.

The rise of the “rapid review” is another “threat” to traditional systematic reviews, as these are increasingly seen as viable alternatives. And, as rapid review methods mature, they will surely win prominence through their ability to deliver robust results in a fraction of the time of traditional systematic reviews, at lower cost, and in a form that is easier to keep up to date.

However, an increasingly obvious threat is that of reporting bias. Reporting bias, the selective reporting or suppression of information, is increasingly apparent, and the evidence of its effects continues to mount. There are numerous problems associated with this, for instance:

• AllTrials reports that over 30% of trials are unpublished, and including unpublished trials in an evidence synthesis can profoundly alter its results. The problem is that the vast majority of systematic reviews include few, if any, of the unpublished studies.

• The basis for most systematic reviews is the journal article, and these summaries of a trial miss lots of important information, such as side effects and outcome switching.

The net result is that, for systematic reviews based on journal articles, the results simply cannot be trusted as an accurate reflection of an intervention’s “worth” [1, 2, 3]. Being generous, we could describe them as supplying a “ball park” estimate; synthesis of the published evidence alone doesn’t support more than that. While some systematic reviews might be accurate, we have no real way of knowing which are and which aren’t. So, if the evidence synthesis is based on published journal articles (the overwhelming majority), beware.
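To make this concrete, here is a minimal toy simulation in Python. Every number in it is invented for illustration (a true effect of zero, 100 trials, negative trials published only 40% of the time); none of it comes from the studies cited above. It simply shows how pooling only the published trials drifts away from the truth:

```python
import random
import statistics

random.seed(42)

# Toy model: 100 trials of a treatment whose true effect is zero.
# Each trial's estimated effect is scattered around that true value.
true_effect = 0.0
all_trials = [random.gauss(true_effect, 0.3) for _ in range(100)]

# Assume "positive" trials are always published, while "negative"
# ones reach print only 40% of the time; this leaves roughly the
# 30% of trials unpublished that AllTrials describes.
published = [e for e in all_trials if e > 0 or random.random() < 0.4]

print(f"All trials ({len(all_trials)}): mean effect {statistics.mean(all_trials):+.3f}")
print(f"Published ({len(published)}):  mean effect {statistics.mean(published):+.3f}")
# The pooled "published" estimate is pushed upwards, even though
# the treatment does nothing.
```

Under these assumptions the published-only mean sits well above zero while the full set averages close to it: the synthesis goes wrong not because any single trial is wrong, but because of what is missing.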

But this brings us nicely back to the role of rapid reviews. The few studies comparing rapid and systematic reviews (both based on journal articles) have consistently reported very little difference in results [4, 5]. It appears that a sample of published journal articles gives roughly the same results as all the journal articles found in a systematic review (is this really surprising, given that sampling is a widely accepted part of biomedical trials?). So, if all you need is a ball park estimate, do it quickly and at low cost. However, if you want an accurate result you really need to go beyond published journal articles. Systematic reviews based on published journal articles fall between two stools: they are not quick enough and they are not robust enough.
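The sampling point can be illustrated with another toy sketch, again with invented numbers rather than data from references [4, 5]. It pools a random sample of simulated published estimates and compares the result with pooling all of them:

```python
import random
import statistics

random.seed(1)

# Toy model of the published literature: 40 trial effect estimates
# scattered around a common underlying (possibly biased) effect.
underlying = 0.25
literature = [random.gauss(underlying, 0.15) for _ in range(40)]

# A "systematic review" pools every published estimate; a "rapid
# review" pools a random sample of, say, 10 of them.
systematic_estimate = statistics.mean(literature)
rapid_estimate = statistics.mean(random.sample(literature, 10))

print(f"Systematic review (all 40 trials): {systematic_estimate:.3f}")
print(f"Rapid review (sample of 10):       {rapid_estimate:.3f}")
# The two estimates sit close together: sampling the literature
# changes little, because both inherit whatever bias the published
# record already carries.
```

With these assumptions the two pooled estimates differ only slightly, and neither is any closer to the truth than the published record allows.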

This realisation will surely help us move towards a more nuanced approach to evidence synthesis, one not rooted in attempts to capture all journal articles. This new approach must better articulate why the evidence synthesis is required and build from there. And the new approach(es) must be based on evidence, not faith.

References: 

1) Turner EH, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358(3):252-60.
2) Hart B, et al. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012;344:d7202.
3) Jefferson T, et al. Oseltamivir for influenza in adults and children: systematic review of clinical study reports and summary of regulatory comments. BMJ. 2014;348.
4) Hemens BJ, Haynes RB. McMaster Premium LiteratUre Service (PLUS) performed well for identifying new studies for updated Cochrane reviews. J Clin Epidemiol. 2012;65(1):62-72.e1.
5) Sagliocca L, et al. A pragmatic strategy for the review of clinical evidence. J Eval Clin Pract. 2013;19(4):689-96.

Jon Brassey is the founder and director of the EBM search engine the Trip Database. In addition, he works as lead for knowledge mobilisation at Public Health Wales, is an honorary fellow at the Centre for Evidence-Based Medicine, Oxford, and recently started the Rapid-Reviews.info website.
He will be on the panel for a discussion of “Improving the Evidence for Systematic Reviews” on Wednesday 22nd June at Evidence Live 2016.

Competing interests: I run the search engine the Trip Database (www.tripdatabase.com) and own 50% of the shares in the company. One area Trip is actively researching is rapid reviews, and in the future this interest may be commercial in nature.