Kamal R Mahtani on utilising systematic reviews: Is another trial necessary or ethical?

You don’t have to look too far to see the benefits of systematic reviews and their summary results. The well-known Cochrane logo depicts a real example, highlighting the value of systematically pooling data for meta-analysis and, in this case, demonstrating the clear benefit of corticosteroids in accelerating lung maturation in preterm babies.

Systematic reviews can also protect patients from harm. A systematic review of rosiglitazone, which was developed to treat type 2 diabetes, showed an increased risk of myocardial infarction. The review formed part of the evidence that ultimately led to the suspension of the marketing authorisation for rosiglitazone by the European Medicines Agency, despite the drug having been available for over 10 years.

So the production and use of systematic reviews to inform clinical decision makers are both appropriate and well supported.

Extending the use of systematic reviews
With clinical needs and finite budgets dictating the priorities in clinical research, systematic reviews can also reduce research waste. As Chalmers and Glasziou point out: “New research should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence.” There are good reasons for this, most importantly that unnecessary clinical trials can harm patients and waste resources.

Take, for example, the drug rofecoxib (Vioxx). Originally marketed as a safer alternative to existing non-steroidal anti-inflammatory drugs, it was withdrawn in 2004 after concerns emerged about a higher risk of cardiovascular events, notably myocardial infarction. A systematic review of published clinical studies of rofecoxib, conducted before the September 2004 withdrawal, identified 18 randomised controlled trials, all sponsored by the manufacturer. Cumulative meta-analysis of these trials showed that, had a systematic review and meta-analysis of the accumulating evidence been conducted by the end of 2000, it would have been clear that rofecoxib was associated with a higher incidence of myocardial infarction. Several thousand participants in studies conducted after 2000 were therefore randomised into trials when a clear harm could (and should) already have been detected, not to mention the wasted costs of running those additional trials.
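For readers unfamiliar with the technique, a cumulative meta-analysis simply re-pools the evidence each time a new trial is added, in chronological order. The minimal sketch below (in Python, using entirely hypothetical trial numbers rather than the rofecoxib data) shows the idea with a fixed-effect, inverse-variance model: the point at which the pooled estimate and its confidence interval would have signalled harm becomes visible as trials accrue.

```python
import math

# Hypothetical, illustrative numbers only -- not the rofecoxib trial data.
# Each tuple: (year, events_treatment, n_treatment, events_control, n_control)
trials = [
    (1998, 4, 500, 2, 500),
    (1999, 6, 800, 3, 800),
    (2000, 10, 1200, 4, 1200),
    (2001, 12, 1500, 5, 1500),
]

cum_weight = 0.0          # running sum of inverse-variance weights
cum_weighted_logrr = 0.0  # running sum of weight * log risk ratio

for year, e_t, n_t, e_c, n_c in sorted(trials):
    # Log risk ratio for this trial and its standard error.
    log_rr = math.log((e_t / n_t) / (e_c / n_c))
    se = math.sqrt(1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c)

    # Fixed-effect (inverse-variance) pooling, updated cumulatively in
    # chronological order -- the essence of a cumulative meta-analysis.
    w = 1 / se ** 2
    cum_weight += w
    cum_weighted_logrr += w * log_rr

    pooled = cum_weighted_logrr / cum_weight
    half_ci = 1.96 / math.sqrt(cum_weight)
    print(f"Evidence up to {year}: pooled RR {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - half_ci):.2f} "
          f"to {math.exp(pooled + half_ci):.2f})")
```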

Research that adds value
Identifying or carrying out a systematic review before embarking on any new primary research is increasingly seen by many research funders as an essential early step. One of the largest funders of research in the UK is the NHS National Institute for Health Research (NIHR), which makes the production and promotion of systematic reviews a key part of its infrastructure investment.

“By removing uncertainties in science and research, systematic reviews ensure that only the most effective and best value interventions are adopted by the NHS and social care providers.” Professor Dame Sally C Davies, director general of research and development, Department of Health

Prospective applicants for NIHR funding are offered guidance notes to ensure that all primary research is informed by a review of the existing literature. This may include identifying relevant existing systematic reviews, or carrying out an appropriate review and summarising the findings for the application. Researchers who identify a clear need for new studies should use information gained from their systematic review to inform the design, analysis, and conduct of their study, as part of the “Adding Value in Research Framework.”

Utilising systematic reviews to inform new research is good practice
An earlier survey of the use of Cochrane reviews in designing new studies showed that the proportion of investigators using them was very limited: only 11 of the 24 authors who responded to the survey were aware of the relevant Cochrane review at the time they designed their study. However, this has improved as the need to begin (and end) new research with a systematic review has become more widely understood.

A recent review of two sets of NIHR Health Technology Assessment (HTA) funded trials (those funded between 2006 and 2008 and those funded in 2013) sought to identify whether trial planning and design were informed by systematic reviews. The authors extended the definition of a systematic review to include the following:

• Cochrane systematic reviews;
• other reviews if “systematic review” was mentioned in the title and the methods specified that a systematic search was conducted;
• National Institute for Health and Care Excellence (NICE) Technology Appraisal Guidance documents (TA), which include Technology Assessment Reports (TAR) based on reviews of the clinical and economic evidence (i.e. cost-effectiveness assessments).

Only five of the 47 trials funded in 2006-08 (cohort 1) did not refer to a systematic review, and the authors found plausible reasons for these exceptions. All of the trials funded in 2013 (cohort 2) were informed by systematic reviews, a marked change that is perhaps not surprising given the funders’ requirements.

However, few studies have explored how researchers actually use systematic reviews when planning new trials. In this analysis the uses identified were varied, but by far the most common was to justify the choice of treatment comparisons. Other uses included obtaining information about adverse events, defining outcomes, and informing aspects of study design such as recruitment and consent.

Conclusions
The use of systematic reviews to inform new research is not without limitations. For example, we cannot guarantee that using a systematic review to inform new research automatically generates higher quality trials and more reliable outcomes. Judging the point at which justified replication becomes wasteful duplication can be challenging. It has also been argued that reviews of small, poorly conducted, single-centre trials can exaggerate treatment effects that are not seen in subsequent larger, well conducted trials. However, as has already been pointed out, funders need assurance, even from reviews of smaller trials, that there is a need to support further research. A systematic review can provide this assurance, as well as information to inform the design of the new research.

Scientific history already contains examples where a failure to consider, conduct, and use systematic reviews has led to patients being exposed to potential harm, as well as to resources being wasted on unnecessary clinical trials. Researchers applying for funding for any new primary study should therefore ensure that they are well aware of the existing evidence and its implications for their proposed work. Indeed, it would be “ethically, scientifically, and economically indefensible” not to.

Kamal R Mahtani is a GP and deputy director of the Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford. You can follow him on Twitter at @krmahtani 

Competing interests: I have read and understood BMJ policy on competing interests. I declare that a significant proportion of my research has been supported by the National Institute for Health Research. I am also an unpaid member of the NIHR HTA Primary Care, Community, and Preventive Interventions Panel (PCCPI), which supports the NIHR HTA Programme. I have no other competing interests to declare.

Disclaimer: The views expressed are those of the author and not necessarily of any of the institutions or organisations mentioned in the article.