The Changing Temptations of Science

The ethic of discovery that once governed science has evolved into an expectation of impact. The autonomy and integrity of science are now up for grabs.

Science, the Endless Frontier at 75

The changes in science over the past century have outpaced society’s images of science, of what sort of activity it is, and of what scientists are and do. Have these changes also outpaced science’s capacity to assure its integrity and quality?

In the early twentieth century, literary representations of science lionized the lone genius, often as one who stood against the tide of conventional opinion, such as the hero of Sinclair Lewis’s Pulitzer Prize–winning 1925 novel, Arrowsmith. Although the question of science’s social organization was raised in the 1920s and ’30s by “Nazi science” and the rise of Soviet science, the response was a reaffirmation of the idea of science as a vocation carried out by individuals bound by a shared community ethic. The history of this period has been told and retold, but one product has endured. We can call it the liberal theory of science. It was articulated by two physical chemists, Michael Polanyi at the University of Manchester and James Bryant Conant at Harvard University.

The basic elements of this theory, which represented the world of physical chemistry of the 1930s, were these:

  • Scientists were autonomous, in the sense that they were the only ones qualified and empowered to choose their research problems, methods, and the like.
  • Funding for research came through local sources, such as university department budgets, that allowed for this autonomy.
  • Scientists were interdependent in their ability to rely on the validated work of other scientists, and on the informal processes of validation through replication and application that occurred in normal, undirected science.
  • Interdependence allowed for self-policing with respect to fraud, norms of behavior, deference to orthodoxy, and so on.

Of course, this freedom was available to “qualified researchers” who had positions—mostly in universities—that allowed them to run their own labs or use facilities with special equipment. Lab hierarchies, assistants, and organized work were part of normal practice. But scales were small. Big Science was far in the future. There was relatively little pressure to publish, and small scientific communities were bound by personal connections. Scientists of this generation were part of the larger intellectual community of their universities, engaging with larger cultural questions, such as those relating science to philosophy, religion, and culture.

With the Manhattan Project, this all changed, and so did what it meant to be a scientist. Big Science and big budgets arrived. The scientists-turned-administrators of science tried to preserve the relaxed world of science of the 1930s in the hyper-organized and high-pressure world of Big Science. Some of this effort was informal: J. Robert Oppenheimer giving Edward Teller a chance to work on the H-bomb when Oppenheimer thought it would never work. But much thought also focused on the problem of how to fund science without top-down direction and decision-making. One idea—directly descended from the nineteenth-century ideal of the individual heroic scientist—was to fund the person rather than the project. As Conant put it, “In the advance of science and its application to many practical problems, there is no substitute for first-class [people]. Ten second-rate scientists cannot do the work of one who is in the first rank.”

From an ethic of discovery to an ethic of productivity …

Big Science, and expanded science, meant big money, and big money meant a need to justify the expenditure. Defending “pure science” became possible on the grounds that its development potentially led to applications, and then to usable technologies. But this argument was made at an abstract level: the lesson of the bomb was that it was not possible to predict what applications and technologies would result from pure science, so it simply needed to flourish. This was the essence of Vannevar Bush’s argument in Science, the Endless Frontier in 1945: fund the best scientists, and whatever comes of their work will automatically, but unpredictably, redound to the nation’s benefit.

But how could a government funding system that was accountable to taxpayers, and to congressional lawmakers seeking a piece of the pie for their own districts, award money to the best scientists while also meeting national needs? At the new National Science Foundation, the institution born in 1950 from the vision articulated in Science, the Endless Frontier, the solution was to provide grants to individual scientists for individual research projects of limited duration, across the entire gamut of the natural sciences. Grants would be competitively awarded; the judgment of peer reviewers could encompass both the promise of the individual project and the record of the individual scientist.

New sources of science funding fueled an expansion of science through the 1950s, and in turn more demand for government support. Sputnik spurred further spending. Growth papered over any problems with the project system and peer review. But by 1965, the science journalist Daniel Greenberg, writing in Science, was calling attention to the dependence of academic science on government largesse in an “interview” with the fictional “Dr. Grant Swinger,” whose “Breakthrough Institute” was “devoted exclusively to fulfilling the public demand for scientific breakthroughs.”

Nonetheless, Edward Shils, editor of the science policy journal Minerva, writing in 1979, could say that the system worked and that there was no reason to distrust it. But by then science had already changed in ways that were difficult to reconcile with the old model of science. Scientific merit was increasingly equated with citation counts provided by the Institute for Scientific Information. Grants were tabulated and universities rated their science departments according to grant money awarded. Shils himself, in a 1970 memo on hiring at the University of Chicago, insisted that grant money should have no role in academic hiring decisions. But this kind of purity was hard to maintain: it was more and more difficult to distinguish merit from grantsmanship. And universities soon found ways to build grant-getting into the reward structure for scientists. Merit, in the sense that counted for career advancement, was being redefined to include entrepreneurship.

Labs grew, and with growth came a stronger sense of responsibility among senior faculty for such things as keeping the lab going, retaining key employees (especially postdocs), and finding projects that would make this happen. Teams mattered more. Grants became ends rather than means. What mattered in the grant system was the judgment of peers, to whom scientists were increasingly bound. The idea of science as a spontaneous order produced by autonomous individuals following their best hunches, the core of the liberal theory of science, became less an accurate description than an expression of nostalgic regret. Research was supposed to be a free market that encouraged risk-taking and innovation. The grant and funding market encouraged competition, but the competition was increasingly for survival in a world where the big grants, which produced the most output, won. Science came to be seen as a system in which inputs needed to be matched by measurable outputs. An op-ed piece in the Washington Post, based on the 1991 Office of Technology Assessment report Federally Funded Research: Decisions for a Decade, asked “How Much is Enough?”

Science had evolved a new ethic: an ethic of productivity. Productivity now meant something different than it did under the ethic of discovery, where what mattered was knowledge and ideas. It meant producing measurable outputs: patents were counted along with citations, and grant numbers were counted as well. And in the face of competition, military and industrial funding became not just a way to keep university labs going but a necessity. The change was underscored in 1984, when President Reagan selected Erich Bloch—an engineer, from the private sector, without a PhD—to be director of NSF. Bloch proceeded to create programs that encouraged multidisciplinary team science. The old model of a few “first class” people was a fading memory: the new model moved closer to corporate science. Bloch oversaw the funding of major engineering and science-and-technology research centers at universities, as well as a national supercomputer network. “Economic competitiveness” became the primary public justification for government funding of science.

By the late 1990s, science had become STEM (science, technology, engineering, and mathematics), and had an additional economic justification—as a preferred kind of job training. The promise that STEM education would lead to high-paying (high-tech) jobs began to drive national education policies worldwide. Science became central to what was taken to be the technological future—and research training of a more ethnically and gender-diverse student population was an investment in this future. But this politically powerful promise tied science to immediate considerations of impact and relevance. Congress (and presumably the public) wanted new jobs and economic growth, now.

 … to the expectation of impact

The discovery of DNA’s structure in 1953 and the retelling of that story in the best-selling The Double Helix were a celebration of the discovery ethic. But now it also looks like an apotheosis. Few if any postdocs or assistant professors in today’s science world have the freedom to pursue ideas that the researchers who unraveled DNA—James Watson, Francis Crick, Maurice Wilkins, and Rosalind Franklin—had in the 1950s. An assistant professor, now fast becoming a rara avis in science, would be told to focus on publishing the number of papers required for promotion, and to start pursuing overhead-paying federal grants, sooner rather than later. Postdocs, if they had any autonomy at all, would be recruited onto a grant-worthy project that would guarantee future support above all else.

The grant system itself became biased toward proposals demonstrating preliminary results, and additionally biased, by design, toward proposals that promised “impact.” Scale mattered. As teams and project budgets got larger, risk-taking diminished. The 1953 paper reporting on the structure of DNA had two authors. The 2012 paper reporting on the discovery of the Higgs boson had approximately 3,000.

These are all changes in the way science is done. They call into question the familiar image of science proceeding through the spontaneous coordination of the efforts of autonomous individuals making their own decisions about what science to do, how to do it, and what to accept or reject. The liberal theory of science certainly had its early critics, such as the Irish crystallographer J. D. Bernal, whose influential writings in the 1930s called for the planning of science. He later argued that it was mere pretense that scientists were autonomous free agents, noting that the competitive grants system was itself planning, but of an unsatisfactory kind, “where prejudice and personal interests, not to say political considerations, have full sway.” Today the point is inescapable: scientists are not free agents, but are part of a demanding and constraining system.

Under these influences, science itself changed. The scientific truths of interest in the impact model are those that are patentable, commercializable, or usable for regulation or to support a practical policy or political objective (such as more jobs or less cancer). The scientific products are typically models rather than fully developed theories: the need for results means that full understanding takes too long, or is not really possible given the complexity of the topic. Useful models—of a neurodegenerative disease; a sector of the economy; a pollutant in groundwater—that enable prediction and manipulation are sufficient. Because this kind of result is statistical, it is also provisional, subject to revision, and not intended to be the last word: it is a sufficient response to the needs created in a relationship with a funder. These needs pervade the enterprise. Although some of this work is, in the old phrase, curiosity-driven, even curiosity is exercised in response to a need or perceived need.

Bias, rewarded

We can momentarily leave aside the biblical question of what is truth—or, in this setting, the question of what the optimal development of science would be—and ask a different one. Given that science has changed in such a way that the old picture of autonomy, of waiting for the dust to settle on discovery claims, and of unconcern about applications, is no longer accurate, what are the implications for how scientists should conduct themselves?

The older picture of science had an ethic famously summarized in 1942 by the American sociologist Robert K. Merton, developed in response to the idea of Nazi science, and known by the acronym CUDOS—Communism, Universalism, Disinterestedness, and Organized Skepticism. Consider the fate and relevance of these concepts today. The physicist John Ziman in 2000 revised the acronym to PLACE—proprietary, local, authoritarian, commissioned, and expert—to characterize the new situation. These are terms that characterize the practice of science aimed at “impact,” but are not, as Merton intended, norms that have some authority or provide an ethical compass for the scientist.

What do norms, especially the norms of a particular group in society, do? They exist as a common response to individual urges that are not in the interests of the group. In the case of CUDOS, the urge to have one’s ideas accepted is constrained by the norm of skepticism, which is enforced by a process of criticism, replication, and so forth. Similarly for the other norms: universalism implies that one speaks to everyone, not just a clique of ethnic or political peers, and that one looks outside one’s own network for challenging ideas. Disinterestedness implies that one takes a third-person attitude toward one’s own work—something that goes against the grain and requires intellectual discipline. “Communism,” or “Communalism” as it was sometimes revised to read, implies that knowledge should be available to everyone, and is not owned, and that scientists have an active responsibility to not exclude others from its benefits. One point of these norms was to reinforce the autonomy of science from governments. As such, the norms functioned as instruments of the self-governance of science—internally generated and internally enforced.

Are these norms still relevant today? We can answer this question by first asking another: what are the temptations that need to be restrained today? Temptations arise from the organizational realities of modern science, particularly the need to fund a lab. This need requires a relation with funders, involving some sort of alignment between the aims of the researcher and those of the funder. In the face of intense competition, the work of alignment falls on the recipient to a greater extent than on the funder. And this means that autonomy is limited to what can be achieved within these relations.

Many temptations arise within these relations, or in connection with them: the temptation to claim impact, to overpromise, to overstate the policy relevance of findings, to sacrifice the pursuit of intellectually promising lines of work to those that can be funded, to produce work that is marketable to funders but scientifically trivial, to leave the tasks of voicing and substantiating skepticism to others, to neglect the tasks of intellectual integration and reflection that don’t have “impact,” and to do just enough to meet the demands and not dig deeper or in directions other than what the funding regime requires. The upshot is this: the norms relevant to these temptations have not developed sufficiently for scientists to be able to insist that they are effectively governing themselves.

In the new system, bias came to be rewarded. Findings that confirm what a sponsor wants confirmed lead to more funding. And if many people are trying to confirm a result, and the research is statistical, they are highly likely to find what they are looking for. If they don’t, they avoid the penalties by not publishing the results. The coercive effects of brutal competition in the traditional grant system itself, no longer adequately governed by peer review and individual norms, are not regulated by universities either; on the contrary, universities are incentivized to support and encourage whatever research pays its way, and to not inquire too closely about the details.

Moral injury?

The discovery ethic provided a simple way to relate to the public: discoveries were associated with, and humanized by, their discoverers, who were treated as cultural heroes and made eternally famous by associating their names with the discoveries. Discoveries didn’t need to be sold to the public, or to funders, or to be ranked for impact: they were recognized as achievements. The productivity ethic required a different relation to the public, one which emerged gradually from the idea that the practical value of science—impact—grew out of the development of science as a whole. Bush’s Science, the Endless Frontier played a role in promoting this idea, by asserting that public investment in 1930s-style discovery science would inevitably translate into tangible social and economic benefits. Bush’s intent was to create a political rationale for investment in science as a whole, without reference to standout discoveries or heroic individuals. But the unintended and ironic consequence was that the new model of public investment, by conflating the individualist, liberal model of science with the whole of a rapidly expanding science, made ordinary scientists into heroes. An army of Einsteins would be mobilized. Yet the evolving system made it harder for new Einsteins to emerge—while providing livelihoods for, in Conant’s formulation, the 90% of scientists who could never be Einsteins. The new, ordinary science carried out by this army thus needed a narrative to justify the public investment, and the narrative depended on the rhetoric of ultimate practical benefit.

Are Merton’s norms, which today seem quaint but not dismissible, no longer relevant to a science of productivity? Many—perhaps even most—university scientists are of course motivated by something intrinsic; they work for years to gain the opportunity to pursue their own ideas. But the existence of rigid career structures comes at a price even for those whose main motivations are intrinsic: they must compromise in order to survive. Are scientists conflicted by the reality of the production-function science they must do versus the 1930s-style science they thought they were signing up for? Do they feel role strain imposed by the funding environment, or by their own internal values and clock? These questions are echoed in closely related discussions of “moral injury” in medicine. As the physicians Wendy Dean and Simon Talbot explain: “The business of health care—the gigantic system of administrative machinery in which health care is delivered, documented, and reimbursed—keeps us from pursuing [our] mission without anguish or conflict. We do our best to put patients first but constantly watch the imperatives of business trump the imperative of healing.”

If there is analogous anguish or conflict in scientific research, what forms does it take, and what does it indicate about the state of science? The “moral injury” movement in medicine cites a conflict between the way physicians were trained and what they were forced to do. In science, the disquieting parallel question is this: has the nation raised a generation or more of scientists who no longer make the distinction between science and the business of science, and therefore are immune from, or experience no, conflict?

The death of neutrality

Looking at some of the pathologies of the present system, the normative part of this story becomes a little clearer. Contemporary science is plagued with crowd-following, where researchers jump onto an approach or topic because that is a good strategy for getting funded. University research offices facilitate this, and metrics encourage it. Are the topics that are made “hot” in this way the most intellectually promising topics? Or are they promising only in the sense that they are more attractive for funding? Do many of these promising topics reflect agendas in which advancing science is an incidental concern (for example, in genomics research related to public health challenges)?

Maybe crowd-following is not a bad thing. Maybe this is how science responds to societal demands. Scientists are not expected to be social seers, or to invent their own social values. To some extent their technical knowledge allows them to see possibilities that others do not see. But their new role as producers of impact seems to demand more: that they themselves engage in societal goal-setting. So there needs to be some normative sense of what is or is not appropriate for science. What are the relevant limits to scientific expertise relative to larger policy goals—or even to the goals of science itself? Here again there are conflicting interests. Scientists may benefit from a large societal commitment to a goal, such as building a complete model of the brain and its processes, or of the climate system. But these kinds of investment decisions happen at a level far above individual scientists deciding what to study—and they are far more consequential.

Modesty about limits is difficult to accept when the fate of the biosphere, or breast cancer sufferers, or America’s economic supremacy, is said to be at stake. But the complexity of the relation between science and the public, as well as the funding system, presents a number of conflicting imperatives and interests. For example, in relation to politics, or the promise of profit, such claims as “the science is settled” and “there is a consensus” carry special importance. Yet the idea of consensus is not a traditional notion in science, and there is no long history to learn from. On the one hand, such claims are a source of power—science speaking with one voice rather than many. On the other, the pressure to maintain consensus may make scientists in the current peer-review regime reluctant to criticize other scientists or make controversies public, or to air the uncertainties that they privately acknowledge. Dissenters may be penalized by rejection of their grants and papers, and by blocked promotions.

Has the traditional responsibility to offer and respond to criticism been blunted? Seeing open scientific controversies resolved by discoveries—as the solar neutrino controversy was in the early 2000s—provides some justified faith in the slow processes of science, despite the four decades it took (during which the peer reviewers and commentators largely shared the same mistaken assumptions). But neutrinos—subatomic elementary particles that are electrically neutral and have almost no mass—carried with them no urgency, no political or economic stakes. Premature announcements of cancer cures, environmental apocalypse, or nutritional miracles lead to justified skepticism on the part of the public, and to suspicion about scientists’ motives as well.

If science purports to be institutionally neutral (or disinterested, as Merton would have it), some detachment from policy, such as a shared norm to speak only about the narrow facts of one’s own science when one is speaking as a scientist, is necessary. But as scientists are called on to pronounce on matters beyond the narrow sense of what can be or has so far been established scientifically, and choose to do so themselves, they must find a way to balance their claim to expertise and the public’s desire for a message. Some climate scientists have softened their public messages because they realize that apocalyptic scenarios tend to be discounted. But this is just an attempt to message more effectively: it does not address the question of when experts have gone beyond their competence, or the data, in interpretation. Here again, the development of norms lags behind the conflicts, and there are no simple guidelines for “responsible” behavior.

Such tensions and dilemmas were not a part of 1930s-style discovery science. Presently, they are pervasive. Norms have yet to catch up with changes in the organization of science—its funding, hierarchy, division of labor, and politics. Sociologists and criminologists use the concept of social control to characterize the way in which formal and informal mechanisms work together to shape behavior. Economists focus on incentives. If the structures of support for science—in the private sector, in public-private university partnerships, and in the regulatory science realm—provide incentives that overwhelm the traditional social controls of science, the only backstop is the system of social controls beyond science, through the marketplace, investigative journalism, the canon of legal and patent law, or the regulatory apparatus pertaining to securities law and fraud.

Not surprisingly, then, social controls from outside science are beginning to kick in. The Theranos scandal, involving a start-up company pitching a supposedly revolutionary blood-testing technology, was uncovered not by the scientific community, or even the Food and Drug Administration, but by the investment community and a crusading business journalist. The relationship between a leading Harvard chemist and Chinese research institutions was not uncovered by Harvard but by the FBI. The poor quality of preclinical cancer science was not exposed through peer review but by testing done in pharmaceutical corporations. “Compliance” with institutions outside science begins to replace internal constraint.

The costs of success

We can look back regretfully to the world of the last half of the twentieth century, in which we continued to believe that any group of young, largely unsupervised scientists could pursue an idea like the structure of DNA, even as the reality seemed to be that writing another paper and collecting citations fulfilled one’s responsibility to science and society. Or we can recognize that this was a transitional world, in which science didn’t matter as much, in which the technology needed for science was primitive but accessible to many people, and in which science was funded on faith, premised on the success of the Manhattan Project, the excitement of space exploration, the fears of the Cold War. But now we live in a world of impact statements, in which a scientist encounters queries such as this: “List up to five examples that demonstrate the broader impact of the individual’s professional and scholarly activities that focus on the integration and transfer of knowledge as well as its creation.” Science has evolved in ways that seem to answer the question that has always plagued the pursuit of “pure science”: what justifies the use of scarce resources that could benefit others?

Science has changed, and as with any other transition, there are downsides, bumps in the road, and failures. But the big picture is hard to deny, and there is no going back. Is this the inevitable conclusion to the story of science? It is certainly one anticipated 40 years ago by the “finalization of science” debate in Germany. The finalizers, such as the German philosopher Gernot Böhme, held that the era of liberal pure science, what they called the exploratory phase, was past, and that science was at a stage of solid theories—it was no longer exploring competing paradigms and theories—and thus could now be turned to social ends. At the time, this notion was vigorously resisted. Today, the basic sentiment is widely accepted, in practice, if not in theory.

Science has changed, but we can ask how much of the outcome we have described is the product not of an inevitable maturation of science, but of choices. And we can also ask whether the choices are an acceptance of defeat—an acceptance of science as concerned primarily with “impact,” constricted in its vision by the funding mechanisms available to it, wedded to expensive technology and measurable outputs, made conformist by peer review and intense competition for funding. And we can even ask whether a different world can be created, one in which postdocs could spend years in exploration of fundamental questions free from the expectation of impact, from the pressure of landing a tenure-track position, and from the relentless demand for production. That might be better for job satisfaction, but would it be better for science?

The science that society has created through its institutional choices and practices is certainly “better” in some senses. It is better at delivering usable scientific goods to the market, to the state, to the media, to decision-makers of all sorts, and to citizens. But to prioritize this “better” over discovery has the effect of crowding out certain possibilities that are also valuable: the possibilities of exploration that came with earlier, smaller, slower forms of science. And, it should be emphasized, with science that expected little if any public funding, and thus incurred little obligation to justify itself to the rest of the world. We need to ask: Is the disappearance of exploratory science a result of science’s natural growth and evolution? Or is it the result of the structures of institutional science themselves? And, if the latter, should the loss of such a science be tolerated?

In the old regime, scientists benefited from adhering to the Mertonian norms, and could justifiably claim the public’s trust. Conflicts of interest could be controlled by submitting claims to scientific scrutiny, and an incentive to scrutinize was built into the internal competition of science. The new regime of quantitative accountability invites cheating, crowd-following, conformity, lapses in quality, and subservience to sponsors whose funds make it possible to compete. Competition—for funding and results—is accepted as a given, but it rewards too few and takes a personal toll on too many.

One price to be paid, as the demands on science turn into science on demand, is the surrender of individual autonomy. It can be asked whether autonomy is still needed, or whether scientists already have all the autonomy, and all the externally imposed accountability rules, that science needs. Yet there is a larger price for this stance. Trust in science drops as old, internal norms, such as the taboo against speaking as a scientist on politics, or crossing the line between science and policy or between science and profit, are obliterated—or, indeed, as the incentives to cross those lines increase. The farther science reaches into the domains of policy and profit, the more it must rely on mathematical models and statistics, and therefore on assumptions, and therefore on hidden values, to deliver timely and relevant results. And the more contestable those results become. The more the uses of science and the statements of scientists are entangled with controversial politics and profit motives, the more scientists’ special status as experts is compromised. Worse, as issues arise over the quality of research at the base of drug approvals, policy recommendations, and public reporting of science, scientists themselves no longer know where the lines are.

Owning up

Accountability, especially in the form of quantifiable, and therefore manipulable, criteria, is not an answer to the problems that result from this change: accountability is the substitution of external for internal controls. Scientists themselves need to be the source of new thinking on this subject. Given the complexity and scale of their enterprise, can scientists find a way to assert a new and compelling identity, one that at least partly counters the temptations that increasingly influence their work? The last big shift was the result of the atomic bomb: a new sense of social responsibility, and of the role of science, came out of it, with mixed but powerful results. Scientists then did gain a new sense of their place in the world. This place has changed again, but the self-image of science and the standards of conduct and collective behavior appropriate to it have yet to catch up.

Among the many indications of disquiet about the present state of science is the observation that progress on fundamental issues has stalled. As Ross Douthat observes in his new book, The Decadent Society: “Fewer blockbuster drugs are being approved, but last month still brought news of a steady generational fall in cancer deaths, and a possible breakthrough in cystic fibrosis treatment. Scientific research has a replication crisis, but it’s still easy to discern areas of clear advancement—from the frontiers of Crispr to the study of ancient DNA. But the trends reveal a slowdown, a mounting difficulty in achieving breakthroughs—a bottleneck if you’re optimistic, a ceiling if you aren’t. And the relative exception, the internet and all its wonders, highlights the general pattern.” Sabine Hossenfelder, a theoretical physicist at the Frankfurt Institute for Advanced Studies who studies quantum gravity, makes a similar point about her field: “In the foundations of physics, we have not seen progress since the mid 1970s when the standard model of particle physics was completed,” despite a few experimental confirmations. “But all shortcomings of these theories—the lacking quantization of gravity, dark matter, the quantum measurement problem, and more—have been known for more than 80 years,” she continues. “And they are as unsolved today as they were then.”

We can ask the question scientists from the 1930s might have asked about all this: do the hyper-competitive nature of science itself, the terms under which this competition now takes place, and the enormous scale of the endeavor interfere with the optimal development of science? People such as Conant had emphasized that new ideas in science were often accepted and recognized only when their time had come. Urgency was not a part of science: only time would tell. Replication took effort; integration into the body of scientific knowledge and critical thought demanded it. If Conant’s cohort were to return today, they would be impressed with the scale and power of science. But they would not be surprised at its failures under a funding regime they had always doubted.

Vol. XXXVI, No. 3, Spring 2020