Science: Too Big for Its Britches?
Science ain’t what it used to be, except perhaps in the systems we have for managing it. The changes taking place are widely recognized: the enterprise is becoming larger and more international; research projects are becoming more complex and research teams larger; university-industry collaboration is increasing; the number of scientific journals and research papers published is growing steadily; computer modeling and statistical analysis are playing a growing role in many fields; interdisciplinary teams are becoming more numerous and more heterogeneous; and the competition for finite resources and for prime research jobs is intensifying.
Many of these trends are the inevitable result of scientific progress, and many of them are actually very desirable. We want to see more research done around the world, larger and more challenging problems studied, more science-enabled innovation, more sharing of scientific knowledge, more interaction among disciplines, better use of computers, and enough competition to motivate scientists to work hard. But this growth and diversification of activities is straining the existing management systems and institutional mechanisms responsible for maintaining the quality and social responsiveness of the research enterprise. One undesirable trend has been the growing attention in the popular press to falsified research results, abuse of human and animal research subjects, conflicts of interest, the appearance of irresponsible journals, and complaints about overbuilt research infrastructure and unemployed PhDs. One factor that might link these diverse developments is the failure of the management system to keep pace with the changes and growth in the enterprise.
The pioneering open access journal PLOS ONE announced in June 2014 that after seven and a half years of operation it had published 100,000 articles. There are now tens of thousands of scientific journals, and more than 1 million scientific papers will be published in 2014. Maintaining a rigorous review system and finding qualified scientists to serve as reviewers is an obvious challenge, particularly when senior researchers are spending more time writing proposals because constrained government spending has caused proposal success rates to plummet in the United States.
Craig Mundie, the former chief research and strategy officer at Microsoft and a member of the President’s Council of Advisors on Science and Technology (PCAST), has voiced his concern that the current review system is not designed to meet the demands of today’s data-intensive science. Reviewers are selected on the basis of their disciplinary expertise in particle physics or molecular biology, when the quality of the research actually hinges on the design and use of the computer models. He says that we cannot expect scholars in those disciplines to have the requisite computer science and statistics expertise to judge the quality of the data analysis.
Data-intensive research raises new questions about transparency and about whether the results of every experiment need to be published. Is it necessary to publish all the code of the software used to conduct a big data search and analysis? If a software program makes it possible to conduct thousands of runs with different variables quickly, is it necessary to make the results of each run available? Who is responsible for maintaining archives of all the data generated in modeling experiments? Many scientists are aware of these issues and have been meeting to address them, but they are still playing catch-up with fast-moving developments.
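To make those questions concrete, here is a minimal sketch, in Python, of what per-run record keeping might look like: every run appends its parameters, its result, and a hash of the analysis code to an archive file, so each of the thousands of runs remains inspectable later. The model, parameter names, and file paths here are hypothetical, invented for illustration rather than drawn from any particular project.

```python
# Hypothetical sketch of per-run provenance logging for a parameter sweep.
import hashlib
import json
import time
from itertools import product

def run_model(alpha: float, beta: float) -> float:
    """Stand-in for a real simulation; returns a toy summary statistic."""
    return alpha * 2.0 - beta

def log_run(archive_path: str, params: dict, result: float, code_id: str) -> None:
    """Append one run's full record to a JSON-lines archive."""
    record = {
        "timestamp": time.time(),
        "code_sha256": code_id,  # identifies the exact analysis code used
        "params": params,
        "result": result,
    }
    with open(archive_path, "a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    # Hash this script itself so each record points to the code that produced it.
    with open(__file__, "rb") as f:
        code_id = hashlib.sha256(f.read()).hexdigest()
    # Sweep a small grid; real studies might sweep thousands of combinations.
    for alpha, beta in product([0.1, 0.5, 1.0], [0.0, 2.0]):
        params = {"alpha": alpha, "beta": beta}
        log_run("runs.jsonl", params, run_model(**params), code_id)
```

Even a simple append-only archive like this makes the policy trade-off visible: keeping every run is cheap in storage but expensive in curation, which is exactly the kind of judgment current review practices were not designed to make.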
In the past several decades the federal government’s share of total research funding fell from roughly two-thirds to one-third, and industry now provides about two-thirds. In this environment it is not surprising that university researchers seek industry support. It is well understood that researchers working in industry do not publish most of their work because it has proprietary value to the company, but the ethos of university researchers is based on openness. In working with industry funders, university researchers and administrators need the knowledge and capacity to negotiate agreements that preserve this principle.
About one-third of the articles published by U.S. scientists have a coauthor from another country, which raises questions about inconsistencies in research and publishing procedures. Countries differ in practices such as citing references in proposals, attributing paraphrased text to its original source, and listing lab directors as authors whether or not they participated in the research. Failure to understand these differences can lead to inadequate review and oversight. Similar differences in practice exist across disciplines, which can lead to problems in interdisciplinary research.
Globalization is also evident in the movement of students. The fastest growing segment of the postdoctoral population consists of people who earned their PhDs in other countries. Although they now make up more than half of all postdocs, the National Science Foundation tracks the career progress only of people who earned their PhDs in the United States. We thus know little about the career trajectories of the majority of postdocs. It would be very useful to know why they come to the United States, how they evaluate their postdoctoral experience, and what role they ultimately play in research. This could help us answer the pressing question of whether the postdoctoral position is serving as a useful career-development step or whether its primary function is to provide low-cost research help to principal investigators.
The scientific community has fought long and hard to preserve the power to manage its own affairs. It wants scientists to decide which proposals deserve to be funded, what the rules for transparency and authorship should be in publishing, what behavior constitutes scientific misconduct and how it should be punished, and who should be hired and promoted. In general it has used this power wisely and effectively. Public trust is higher in science than in almost any other profession. Although science funding has suffered in the recent period of federal budget constraint, it has fared better than most areas of discretionary spending.
Still, there are signs of concern. The October 19, 2013, Economist carried a cover story on “How science goes wrong,” identifying a range of problems with the current scientific enterprise. Scientists themselves have published articles that question the reproducibility of much research and that note worrisome trends in the number of articles that are retracted. A much-discussed article in the Proceedings of the National Academy of Sciences by scientific superstars Harold Varmus, Shirley Tilghman, Bruce Alberts, and Marc Kirschner highlighted serious problems in biomedical research and worried about overproduction of PhDs. Members of Congress are making a concerted effort to influence NSF funding of the social sciences, and climate change deniers would jump at the opportunity to influence that portfolio. And PCAST held a hearing at the National Academies to further explore problems of scientific reproducibility.
Because its management structure and systems have served science well for so long, the community is understandably reluctant to make dramatic changes. But we have to recognize that these systems were designed for a smaller, simpler, and less competitive research enterprise. We should not be surprised if they struggle to meet the demands of a very different and more challenging environment. Research cannot thrive without public trust, and maintaining that trust will require that the scale and nature of management match the scale and nature of operations.
We all take pride in the increasingly prominent place that science holds in society, but that prominence also brings closer scrutiny and greater responsibility. The Internet has vastly expanded our capacity to disseminate scientific knowledge, and as a result many more people understand how research is done and how decisions are made. In rethinking how science is managed and how its quality is preserved, the goal is not to isolate science from society. We build trust by letting people see how rigorously the system operates and by listening to their ideas about what they want and expect from science. The challenge is to craft a management system that is adequate to deal with the complexities of the evolving research enterprise and also sufficiently transparent and responsive to build public trust.
Saturday Night Live once did a mock commercial for a product called Shimmer. The wife exclaimed, “It’s a floor wax.” The husband bristled, “No, it’s a dessert topping.” After a couple of rounds, the announcer interceded: “You’re both right. It’s a floor wax and a dessert topping.” Fortunately, the combination of scientific rigor and social responsiveness is not such an unlikely merger.