If reporting guidelines and checklists are the answer, what is the problem? That’s easy: their development was motivated by the realization that critical information was vague, missing, or misreported in an unacceptably high proportion of published medical research papers. Reporting guidelines take aim at this problem by specifying a minimum set of items that should be included in a published study report. These, of course, depend upon the study type, so there are different checklists for different sorts of research. The grand-daddy of them all is the CONSORT checklist, developed in 1996 to guide reporting of randomized controlled trials.
A typical checklist, such as this one, includes a list of numbered topics to be covered in the report, accompanied by a brief description of what is expected. In most cases, each item is followed by a line on which the author is supposed to indicate the page number in the paper where the required information can be found. Most reporting checklists have come to be known by their acronyms, at least among the cognoscenti, for whom “STROBE” does not indicate a type of lighting, nor “SQUIRE” a minor nobleman. CONSORT, for example, stands for “Consolidated Standards of Reporting Trials.”
Once CONSORT was out, the rush was on. No field of research was so arcane that it did not seem to need its very own checklist. Qualitative research? You want the COREQ checklist (Consolidated criteria for reporting qualitative research). But if you are synthesizing qualitative research, you need ENTREQ. (That’s short for Enhancing Transparency in Reporting the Synthesis of Qualitative Research.) CONSORT itself spawned numerous extensions, which aimed to adapt CONSORT to specialized sorts of trials: CONSORT Harms, CONSORT Herbal, and CONSORT Non-inferiority, to name a few. There are so many checklists that keeping track of them is a full-time job, and there is a special website, the useful EQUATOR network, devoted to keeping things organized.
The virtues of checklists are extolled by everyone from Atul Gawande in his popular book The Checklist Manifesto to aviation and anesthesia safety authorities. Their value stems from the simple fact that no matter how expert you are, it is easy to forget to do things that should be done. Unless, that is, you have a list and use it every time. This idea has face validity. Who has not ruined an ostensibly memorized recipe by forgetting to add salt or some other crucial ingredient? Good evidence shows that checklists, wisely used, improve medical care and save lives. They also improve the quality of research reporting.
Because of this, research checklists are a staple of editorial life at The BMJ and other journals. If a researcher submits a study without a checklist, we send it back with instructions to complete the relevant form. The BMJ is not alone in requiring authors to submit checklists. Most high-impact journals endorse and require them, and smaller subspecialty journals are increasingly following suit.
All of this is well intentioned, but is the good that reporting checklists do unalloyed, or are there harms, burdens, or unintended consequences that have been overlooked? My nonsystematic searches of PubMed and Google turned up no serious research into possible harms, but did identify lots of theoretical problems. One author wrote that checklists could be taken too far and noted that “… many aspects of modern life suffer from too many checklists. Teachers, for example, are shackled to lists and protocols that prevent them from doing their jobs properly…” Too many checklists, he suggested, are a form of excessive bureaucracy that works against spontaneity, imagination, and creativity. In an essay titled “Reality Check for Checklists,” Peter Pronovost and colleagues warned of “the risk that checklists may lead to complacency and a false sense of security,” since they are only simple reminders and must be “coupled with attitude change” to be anything other than just “a great story.”
Some of these charges ring true. Several times a week I field questions from beleaguered authors who are trying to submit their research paper to The BMJ but cannot find a research checklist that seems right. Despite a seeming plethora of checklists, there sometimes isn’t one that clearly applies to the specific work a particular author has done. This is most often the case with journalology or other meta-research studies. We must then choose either to do without a checklist or to adapt an existing one as best we can. I usually suggest that authors do their best to complete the checklist that seems most relevant to their study, simply writing “not applicable” for items that do not apply. This is not entirely satisfactory.
Some checklists are not easy for beginners to interpret, which is a problem because their completion is often assigned to a very junior member of the research team. It is surprisingly common to find that the wrong checklist has been submitted with a paper. I recently handled a report of a randomized controlled trial that had been submitted along with the COREQ checklist for qualitative studies. It is hard not to have doubts about a paper when its authors cannot correctly identify the study design of their own research project. It is not unusual to open a checklist and find that, instead of indicating the page number in the manuscript where the relevant information can be found, authors have simply put a check mark or “x” to indicate their compliance with an item. Other authors misinterpret the request for a checklist to mean that they need only insert a statement somewhere in the paper saying that they “followed” a checklist. Still others cut large chunks of text from their papers and paste them into the tiny column where page numbers should be listed.
There are other drawbacks to generic checklists. Although they remind people of many things that should be in a paper, they cannot take account of crucially important items that are unique to a particular study or important in a specific context. Worse, authors may assume that anything not expressly required by a checklist can or should be omitted. More than once I have asked authors to include additional details in a paper, only to have them respond that the information “isn’t required by CONSORT.” There also is a danger of complacency on the part of authors, editors, and reviewers, who might think their work is done if they are able to match a checklist item to a portion of the paper. Additionally, checklists impose yet another burden on authors who are already encumbered by the many other requirements associated with submitting a paper to a journal.
Yet these disadvantages are far outweighed by the many benefits of checklists, which have been a powerful force for good in the world of medical research. For starters, they are the finest form of quality control we have. They help to reduce “unexplained variation” in the reporting quality of research papers, which is certainly as bad as unexplained variation in clinical care. They also educate researchers, reviewers, and journal editors about what constitutes good research; over time these lessons may be absorbed. Most importantly, insistence on checklists sends a powerful message about expectations and priorities: a high standard of research reporting is required, not optional. Ultimately, this generates important moral pressure for better behavior on the part of everyone involved in medical research. A checklist too far? I think not.
Conflicts of interest: On behalf of The BMJ, I have participated in the development of several research checklists and guidelines, including SQUIRE, the PRISMA Harms extension, the Ottawa statement on the ethical design and conduct of cluster randomized trials, and CHEERS. I also serve on a working group that seeks to establish 2016 as the “Year of Reporting Guidelines.”
Elizabeth Loder is the acting head of research, The BMJ.