Publication Blues

A DISCUSSION OF

Publish and Perish

In “Publish and Perish” (Issues, Summer 2017), Richard Harris has performed a valuable service by exploring some of the problems currently afflicting science. He identifies academic pressures to publish in high-impact journals as an important driver of the so-called reproducibility crisis, a particular concern in the life sciences. We agree with this assessment and add that these issues reflect problems deep in the culture of science.

Today, a junior scientist who has published a paper with incorrect conclusions in a high-impact journal (provided that the paper is not retracted) is more likely to have a promising career than one who has published a more rigorous study in a lower-impact specialty journal. The problem lies in the economy of contemporary science, whose rewards are out of sync with its norms. The norms include the 3Rs: rigor, reproducibility, and responsibility. The current reward system, however, places greater value on publication venue, impact, and flashy claims. The dissonance between norms and rewards creates vulnerabilities in the scientific literature.

In recent years we have documented that impact is not equivalent to scientific importance. As Harris observes, some of the highest-impact journals have had to retract published papers as a result of research misconduct. When grant-review and academic-promotion committees pay more attention to the publication venue of a scientific finding than to the content and rigor of the research, they uphold the incentives for shoddy, sloppy, and fraudulent work. This flawed system is further encouraged and maintained by top laboratories that publish in high-impact journals and benefit from the existing reward system while creating a “tragedy of the commons” that forces all scientists to participate in an economy in which few can succeed. The perverse incentives created by this process threaten the integrity of science. A culture change is required to align the traditional values of science with its reward system.

Nevertheless, the problems of science should be viewed in perspective. Although we agree that reforms are needed and have suggested many steps that can make science more rigorous and reproducible, we would emphasize that science still progresses even though some individual studies may be unsound. The overwhelming majority of scientists go to work each day determined to do their best. Science has improved our understanding of virtually every aspect of the natural world. Technology continues its inexorable advance. This is because, given sufficient resources, the scientific community can test preliminary discoveries and affirm or refute them, building upon the ones that turn out to be robust. The ultimate success of the scientific method is sometimes lost in the hand-wringing about poor reproducibility. Scientists test each new brick as it is received, throwing out the defective ones and building upon the solid ones. The ever-growing edifice of science is therefore sturdy and continually reinforced by countless confirmations. Although there is no question that science can be made to work better, let us not forget that science still works.

Chair, Department of Molecular Microbiology and Immunology

Johns Hopkins Bloomberg School of Public Health

Professor of Laboratory Medicine and Microbiology

University of Washington

The case that Richard Harris presents in his article and in his damaging book suffers from three significant problems.

First, the wrong question is being asked when citing statistics about how many results are not ultimately supportable. It’s like asking how many businesses fail versus the number that succeed—far more fail, of course. Does that mean people shouldn’t start new businesses? Does that mean that there must be better ways to start businesses? Do we expect to have a foolproof, completely replicable method of starting a business? Of course not. Science is a process riddled with failure; failure is not just a step along the way to eventual success, but a critical part of the process. Science would come to a dead stop if we insisted on making it more efficient. Messy is what it is, and messy is what makes it successful. That’s because it’s about what we don’t know, remember.

Second, those results that turn out to be “wrong” are wrong only in the sense that they can’t be replicated, and that is a superficial view of scientific results. Scientific data are deemed scientific because they are in principle replicable—that is, they do not require any special powers or secret sauces to work. Do they have to be replicated? Absolutely not. And most scientific results are not replicated: that would be a tremendous waste of time and resources. Many times the results become uninteresting before anyone gets around to replicating them. Or they are superseded by better results in the same area. Often they lead to a more interesting question and the older data are left behind. Often they are more or less correct, but now there are better ways of making measurements. Or the idea was absolutely right, but the experiment was not the correct one (there is a famous example of this in the exoplanet field). Just counting up scientific results that turned out to be “wrong” is superficial and uninformative.

The third offense, and by far the worst, is the conflation of fraud with failure. For one thing, this is logically wrong: the two belong to different categories, one being intentional and criminal, the other unintentional and often the result of attempting something difficult. Conflating them leads to unwarranted mistrust of science. Fraud occurs in science at a very low rate and, when discovered, is punished as the criminal activity that it is, through imprisonment, fines, disbarment, and the like. It has absolutely nothing to do with results that don’t hold up. Such results are not produced deceitfully, nor are they intended to confuse or misinform. Rather, they are intended as interim reports, open to revision through the normal process of science. Portraying scientists as no more trustworthy than the tobacco executives who lied to Congress encourages the purveyors of pseudoscience.

The crisis in science, if there is one, lies in the inadequate support and resources that the nation is willing to devote to training and research. All the other “perversities” that Harris describes emanate from that one source. This can be fixed by the administrative people he lists at the end of his article—and unfortunately not by any scientist, leading or otherwise. So why is he casting scientists as the perpetrators of bad science?

Former Chair, Department of Biological Sciences

Columbia University

Cite this Article

“Publication Blues.” Issues in Science and Technology 34, no. 1 (Fall 2017).
