Improving the Quality of Biomedical Research
A DISCUSSION OF Ending the Reproducibility Crisis
In a 1994 editorial in the British Medical Journal on the scandal of poor medical research, the medical statistician Doug Altman stated: “The poor quality of much medical research is widely acknowledged, yet disturbingly the leaders of the medical profession seem only minimally concerned about the problem and make no apparent efforts to find a solution.” A quarter century later, researchers, editors, and funders have all become more aware of and concerned about the magnitude of current problems in the health and medical research system, such as (lack of) reproducibility, accessibility, and relevance. In “Ending the Reproducibility Crisis” (Issues, Fall 2021), Shannon Brownlee and Bibiana Bielekova explore this “long brewing” problem and propose solutions by considering the research enterprise as a system—of researchers, funders, publishers, and others. The authors propose ways to better align the varying incentives, motivations, and support of the research system to improve the quality and reproducibility of research.
Their first proposed solution focuses on aligning incentives with the need for impactful research, which requires collaboration and well-functioning teams. Criteria for hiring and promotion of academics that depend heavily on publications in high-impact journals and numbers of grants, but not on research impact, set up the wrong incentives. As the authors state: “Hiring decisions, promotions, tenure, professional stature, and, for many scientists, even salaries depend first and foremost on bringing in grants and publishing papers—rather than producing validated and reproducible results.”
Incentives and assessment are being addressed by several important initiatives. For example, the Declaration on Research Assessment (DORA), a worldwide initiative covering all scholarly disciplines, has set out some general principles for how funders, institutions, and metric suppliers should evaluate the outputs of scholarly research. For assessing researchers for promotion and tenure, the Hong Kong principles provide a more detailed set of supplementary criteria, such as open protocols, materials, and data—and the use of these by other researchers. The authors acknowledge that these are a worthy start, but need to be expanded and scaled up to impact the global research system.
Fixing incentives (and motivation) is one important part of the “equation of behavior change,” as expressed in what is called the COM-B model (Capability + Opportunity + Motivation = Behavior). Improving incentives will help with motivation to improve the system. Capability might be improved through training in better research practices. But the authors also recognize that training is limited and slow, so they suggest some innovative tools, using artificial intelligence, to assist researchers and research assessment. Their proposed Biomedical Research Network would help assess, synthesize, and connect emerging research, which would help to orient researchers better, but could also be used as a supplementary research assessment tool. An interesting idea, though I suspect building the capability will require some elements of both training and new tools.
The authors’ descriptions of the problem and their broad suggestions make an interesting read. But this is a complex task, which will need considerable developmental and evaluative work and investment to succeed. One group that should read and discuss the proposals is the Ensuring Value in Research (EViR) Funders’ Collaboration and Development Forum, a group of over 40 organizations founded in 2017 to help health-related research funders increase the value of their research. Hopefully we might then make progress on Altman’s suggestion that “we need less research, better research, and research done for the right reasons.”
Director, Institute for Evidence-Based Healthcare
Faculty of Health Sciences & Medicine
Bond University, Australia