
The Vaccine Race

Vaccinations, now a cornerstone of public health programs worldwide, are one of science and public health’s most impressive success stories. Ironically, it is precisely because immunization is so effective at preventing disease that we take for granted our modern invulnerability to infections that plagued the generations before us. But vaccination coverage rates have fallen in recent years in the United States, and as a result, outbreaks of vaccine-preventable diseases increasingly make headlines. The Vaccine Race is a timely and vivid reminder of what came before, and what a remarkable human achievement it is to cheat nature—to outsmart a virus.

In her debut book, science reporter Meredith Wadman describes the pursuit of a rubella vaccine through a series of intertwined narratives about a bright young crop of ambitious scientists in mid-twentieth-century America. The story spans nearly 50 years and several institutions, and it follows a number of vaccine candidates. But it focuses primarily on three characters: Leonard Hayflick, a brilliant but stubborn cell culturist; Stanley Plotkin, a physician bent on discovering a rubella vaccine in time to stave off an impending epidemic; and Hilary Koprowski, their colorful, visionary boss and head of the Wistar Institute in Philadelphia.

Hayflick was instrumental in the birth of WI-38, the first human cell line derived from normal tissue that could be grown in the laboratory. Although scientists had previously been able to maintain human cells in the laboratory, the cells would often become cancerous, either by picking up genetic abnormalities as they replicated or as a result of harboring undetectable cancer-causing viruses. Hayflick’s new human cell lines represented a nearly infinite supply of “clean” cells for growing the viruses necessary to manufacture vaccines. Plotkin and Koprowski would do just that: in 1969, they developed the RA27/3 rubella vaccine, the first of many vaccines using WI-38 cells.

The importance of these discoveries is clear today, but they were hardly met with universal acceptance by the scientific community at the time. Hayflick’s cell culture research was consistently undervalued, and there was intense controversy over whether human cell lines should be used in vaccine development at all. But as the book’s subtitle suggests, Wadman’s central thesis transcends the science of the vaccine, focusing on a theme that continues to resonate today: the inevitable intersection of science, politics, and culture and the messy but remarkable process of scientific discovery.

Although the story doesn’t focus exclusively on Leonard Hayflick, he is perhaps its most memorable character. Extraordinarily bright but chronically underappreciated, Hayflick produced careful research that would lay the groundwork for the development of vaccines protecting hundreds of millions of people against polio, rabies, measles, varicella, hepatitis A, shingles, and adenovirus. But he was, first and foremost, a basic scientist, interested less in the finer points of vaccinology than in the simple question of whether it was feasible to develop normal human cell lines for use in the laboratory. After establishing the WI-38 cell line, he went on to forever change the field of cell biology. In 1965, he published a seminal paper describing what is now known as the Hayflick limit, an upper bound on the number of times a normal human cell population will divide. The paper had wide-ranging implications in a number of fields, including cancer biology and aging.

Hayflick’s career, however, was tarnished by his involvement in a protracted dispute with the Wistar Institute and the National Institutes of Health over claims to the WI-38 cells. Though Hayflick had been party to agreements stating that he did not have exclusive rights to the WI-38 cells, he insisted that their rightful ownership was in question. When he left the Wistar Institute for a professorship at Stanford University in 1968, Hayflick took matters into his own hands: he packed up his two children and drove to California, taking along his entire collection of WI-38 cells. Once at Stanford, Hayflick began selling the cells to pharmaceutical companies at a profit. In the lawsuit that ensued, Hayflick’s stubborn self-righteousness and general lack of cooperation irreparably damaged his career.

The episode showed Hayflick to be a bit too enterprising for the sensibilities of the scientific community at the time, which dictated that scientists be driven by passion for discovery rather than profit. (When Hayflick’s contemporary Jonas Salk was asked who owned the patent to his polio vaccine, Salk famously responded, “There is no patent. Could you patent the sun?”) The notion of developing biological products to be sold for financial gain was anathema to most in Hayflick’s field, and throughout the course of what turned out to be a very public dispute, Hayflick came to be regarded with contempt by many of his peers.

The controversy over ownership and commercialization of Hayflick’s WI-38 cells brought to the fore an issue that the scientific community would soon have to face head-on: that science and business are not, as many purists had long maintained, incompatible. Moreover, in the face of slowing growth of government funding for scientific research in the 1970s, commercialization represented an important source of financial support for the nascent but expanding field of biotechnology. Wadman paints the Hayflick case as a watershed event that raised important legal and ethical questions about who owns and can profit from biological products and, more broadly, about the role of scientists in bringing their discoveries to the market.

Wadman takes an even-handed, journalistic approach to tackling several thorny ethical issues around vaccine development and testing. The quest to find young cells to supply Hayflick’s cell culture experiments saw one of the first uses of human fetal tissue for research. Since fetal tissue was difficult to obtain for research in the United States, Hayflick turned to Europe, where countries operated under slightly more relaxed rules. Wadman chronicles the long journey of the tissue that would eventually give rise to WI-38, from the fetus of an unnamed woman in Sweden to Hayflick’s laboratory in Pennsylvania. The Swedish woman who donated the tissue after undergoing an abortion—she is referred to as “Mrs. X” in the book—did not know until many years later of its eventual whereabouts or its use. Wadman draws the apt comparison to the story of Henrietta Lacks, whose tissue was used without her consent to create the HeLa cervical cancer cell line around the same time.

Although none of Hayflick’s work was illegal, it was done with an air of secrecy to avoid unwanted attention from anyone who might look askance at his experiments. Though research involving fetal tissue is now highly regulated, it remains a lightning rod for controversy. Beyond whether the research itself is acceptable, the issue of exactly how far an individual’s ownership of his or her tissue extends—for example, whether donors should have any financial claim to biological products produced using their samples—remains a contentious one.

Once researchers developed potential vaccine formulations, they had to test them. In the absence of regulations governing consent of research subjects, a range of institutions—women’s prisons, orphanages, and homes for the mentally disabled—became testing grounds for the safety and efficacy of several potential rubella vaccine candidates in the 1960s. Wadman recounts studies conducted with disturbingly little regard for the strict informed consent procedures so critical to the integrity of biomedical research today. (Legislation governing consent in research studies, resulting from revelations of gross misconduct in the Tuskegee Syphilis Study, did not emerge until the mid-1970s.) Although Wadman’s descriptions of these studies are enough to make a modern-day scientist cringe, her tough but balanced treatment of these issues is commendable. She applies a highly critical lens to these practices, noting that they were reprehensible irrespective of their legality. To this day, these episodes remain a shameful stain on the eventual success of the rubella vaccine.

In Wadman’s telling, the human costs of the pursuit of a rubella vaccine take several forms. The first is borne by the many individuals who volunteered their bodies, knowingly or unknowingly, to develop the WI-38 cells and to test the vaccine. The second is the self-sacrifice of the many scientists—some recognized by history, others not—who spent entire careers seeking to keep the rubella virus at bay. The third may be more abstract: the children for whom the rubella vaccine, because of the political roadblocks it encountered, came heartbreakingly too late.

During the 1950s in the United States, rubella caused epidemics every several years that were met with generalized terror among pregnant women. Rubella infection could not be reliably diagnosed at the time, but it was almost certain to cause severe birth defects if an expectant mother contracted it in the first trimester of pregnancy. Wadman recounts vividly the story of Steven Wenzler, born to a mother who contracted rubella during her pregnancy only a short time before the vaccine was available, who suffered serious cognitive and physical deficits that would profoundly influence the course of his life. She reminds us how truly frightening it is to be defenseless against a disease that we cannot detect or treat. With the emergence of several disastrous epidemics over the past few years, perhaps we have a sense of what this fear may have been like. And as the scientific and public health communities scramble to develop and responsibly test vaccines for new viral threats, including Ebola and more recently Zika, Wadman’s commentary on the “human costs” of similar previous efforts is particularly resonant. One thing is certain: we will continue to confront threatening epidemics of diseases, old and new, and it will be incumbent upon us to have learned from our past experiences.

Wadman’s writing is what makes the book a standout, and it’s a testament to her skills as a storyteller that it reads considerably more like a detective novel than a work of nonfiction science history. Wadman is as adept at telling a story set at the lab bench as she is at recounting the trials of Steven Wenzler and his family, and this makes for a dramatic and highly engaging story. The book is meticulously researched, and though the story has an almost overwhelming number of characters, a helpful glossary in the back ensures that readers can always track backward to refresh their memories if needed. Wadman balances her more dramatic flourishes with clear, jargon-free descriptions of cellular biology and immunobiology, managing to make accessible even the most technical of concepts.

The Vaccine Race tells a compelling and highly accessible story about the extraordinarily hopeful but fiercely contentious nature of scientific research. At its core is a story about the profound impact that passionate and brilliant—but, to be sure, flawed and often biased—scientists can have on the human condition. We don’t see this nuanced portrayal of scientists enough, especially in a manner artfully tailored to appeal to a wider audience. It acknowledges that science is done by humans, and therefore it will never be perfect. In fact, the scientific process is in many ways a reflection of both our most industrious, inventive selves and our many shortcomings, but it also serves as an objective measure of our immense capacity for progress.

The End of the Line

Cycles of Invention and Discovery

Innovation is almost universally desired but almost always misunderstood. Confusion abounds over such basic tasks as how to describe how innovation works and even what counts as innovation. If culture conditions innovation, as surely it must, then can some national and subnational cultures possess more innovative capacity than others? How much does geography matter? Is innovation in digital electronics fundamentally different from innovation in, say, energy, transportation, or biopharma? Is it possible to speak of social and technological innovation in the same breath? What does it mean to go from “imitation to innovation,” as South Korea’s national champions have done, and yet still insist that science and the discovery of new knowledge decisively contribute to technological advance and human well-being?

Fortunately for innovators, whether situated in universities, industry, or civil society, advances across a range of fields can occur without ever answering any of these questions. More urgent than a fundamental understanding of innovation in all its marvelous forms is public understanding of and appreciation for the relationship among public funding, government policies, and innovation outcomes. Whereas scholars take for granted that government is often the handmaiden of vital innovations, the case for the centrality of public funds is today battered and beaten by conservative and corporatist critiques that insist government spending on research and development too often amounts to a form of glorified welfare for scientists and engineers isolated from markets.

Concerned by what they consider to be weak outcomes from publicly funded science and engineering, Venkatesh Narayanamurti and Toluwalogo Odumosu have produced a necessary new book on the politics of research, Cycles of Invention and Discovery: Rethinking the Endless Frontier. In their wide-ranging, well-documented, and deeply informed analysis, the two scholars of innovation effectively demolish the so-called “linear model.” In this conceptual framework, technological innovation begins with basic research—often in a scientific laboratory—and moves to applied research and engineering, followed by diffusion of the innovation. In debunking this model, the authors draw on evidence from the history of science and technology as well as from detailed accounts of private-sector innovation. Arguing that government funders and policy makers remain devoted to a false, unidirectional understanding of the flow among science, engineering, and innovation, they deliver the bracing conclusion that federal research policy—and some significant funded research—“has become so divorced from actual practice that in many cases it is now an impediment to the research process.”

The authors maintain that rather than science serving as nourishment for engineering, some forms of research, whether done by scientists, engineers, or even use-inspired amateurs, “proceed interactively.” Citing seminal papers by the historian Edwin Layton in the 1970s on “technology as knowledge,” they persuasively argue that many crucial inventions in the past “reached relatively advanced stages of development before detailed scientific explanations about how the technologies worked emerged.”

Some of their most illuminating examples come from the co-evolution of physics and electrical engineering in the twentieth century. For instance, a multidisciplinary team at Bell Labs, the research arm of AT&T during the company’s decades as a telephone monopoly, created in 1947 the transistor, which became the building block for digital computers and the massive semiconductor industry. In another case, the legendary applied mathematician Claude Shannon, pursuing ways to expand AT&T’s capacity, made a seminal breakthrough in information theory that had wide conceptual and practical applications. In charting the effect of advances in physics, electrical engineering, and applied math on digital innovation, the authors argue that the linear model is clearly inaccurate, because the “boundaries are porous” between physics and electrical engineering and “research trajectories in either field can intersect and bisect each other.”

Cynicism in the United States about the value of government sponsorship for innovation was not always as high as it is today. Beginning in 1940, the federal government aggressively promoted science and engineering. Significant outcomes came in waves during World War II and in the decades following the war. Computers and information processors, vaccines and novel medical therapies, jet engines and space travel, and of course nuclear weapons were some of the results of sustained government spending on research. Distinctions between “basic” science and “applied” engineering became political terms of art, useful for justifying large appropriations of public money, the authors write, but at best “a very partial and incomplete picture of how the science and technology enterprise functions.”

Narayanamurti and Odumosu, who unabashedly declare they wish to “hasten” the “demise” of the pure/applied distinction, want to blame Vannevar Bush, the electrical engineer who served as the first presidential science adviser, for successfully promoting this distinction in his highly influential 1945 report, Science, the Endless Frontier. But as the historian Ronald Kline argues in an article on the public rhetoric of US scientists and engineers from 1880 to 1945, which Narayanamurti and Odumosu draw on heavily, leading engineering societies in the 1920s and 1930s advocated loudly for the linear model and the pure/applied distinction in research, going so far as to insist, as Kline writes, that “applied science itself will dry up unless we maintain sources of pure science.” (Ronald Kline’s latest book, The Cybernetics Moment, is reviewed by David Auerbach in this issue.)

Just as Bush was not the only leader who made a pragmatic decision to present science as the source of new knowledge and technology as the application of this knowledge by engineers, attacks on the linear model are not new. Since the economist Richard Nelson published his influential The Moon and the Ghetto in 1977, scholars and policy makers have fretted that producing more science did not automatically, or even ever, result in practical solutions to urgent social problems, such as improved health, education, and environmental quality. But rejecting the linear model, though amply justified conceptually, is not the same as identifying a replacement. Science and engineering contribute to innovation, in different ways and to different degrees, depending on the situation. The making of the atomic bomb, for instance, relied on theoretical physics to a degree that suggests, as Narayanamurti and Odumosu themselves concede, that the linear model “is at times correct” in “accounting” for a specific innovation.

How science and engineering interrelate remains intensely debated by intellectuals, policy makers, and practitioners. Deceptively simple questions continue to provoke anxieties over how and why technological change occurs and how such change can be accelerated or managed. Who should fund the science and engineering that leads to innovation? Who should own and distribute the fruits of this research? Do some innovations arise only from private investment and market forces, while others arise only from public funding and “use inspired” research? And though human values and social institutions are undeniably influenced by techno-scientific advance, do values and social forces simultaneously help to shape or construct both the demand for specific innovations and the innovations themselves as artifacts?

All of these questions arise from the central tension between the innovations people think they want and the innovations they actually get. Resolving this tension is no easy task, although there are models for generating more socially desired innovations. Narayanamurti and Odumosu invoke the many public goods produced by Bell Labs. They cite in particular the breakthrough work in the 1940s that created the transistor, which laid the basis for digital electronics and reflected deep collaboration among engineers and scientists, technicians and inventors.

Bell Labs was an outlier, of course, the beneficiary of both the largesse of its monopoly parent and explicit mandates from the federal government that the lab’s inventions in computing and information be widely and speedily disseminated to promote competition in emerging industries. Today, few industrial labs exist, and none of the high-fliers in the digital economy—companies such as Google, Facebook, Apple, and Amazon—maintain such centralized labs. Instead, these innovative companies ask their researchers, whatever their pedigree, to stick closely to product development and think deeply about what markets will support, not what science sustains. The authors don’t have an index entry for Google, but I surmise that they would approve of the company’s “moonshot” efforts in driverless cars, sensors, language translation, and virtual-reality glasses, for instance. Although such research would seem to lack immediate market applications, the work ought to benefit Google’s competitive position—and the health of society.

Narayanamurti and Odumosu are most persuasive when they argue that a new language is needed to describe and sustain innovation with public funds, and that even though the linear model is intellectually bankrupt, it is now a zombie framework that sows confusion and retards reform of publicly funded research. “Nomenclature is important,” they write. “We should immediately drop the use of the terms basic research and applied research and instead talk about ‘research’ with the understanding that it encompasses both invention and discovery.” In short, “development,” which the authors describe as “a scheduled activity with a well-defined outcome in a specified time frame,” must be viewed as part of a single research domain.

That the authors fail to build a persuasive case for their preferred conceptual understanding of research should not diminish the achievements of Cycles of Invention and Discovery. They do a great service by cogently arguing that the riddle of the politics of innovation is subject to rational analysis and that researchers, wherever they fall on the spectrum of science, engineering, discovery, and invention, must make good more often on their promise to deliver what humans say they want and need, and do so in an economic and expeditious way. Few can disagree with these high-minded aims.

Philosopher’s Corner: Genome Fidelity and the American Chestnut

A full-sized American chestnut was a sight to behold: a hundred feet in height, with a trunk 10 feet in diameter, covered in white catkins of funky, acrid-smelling flowers. But on a walk through Appalachian forests today, you won’t find tasty chestnuts or the tree that bears them. Instead, all you’ll see are spindly knee-high twigs sporting chestnut leaves—sprouts from the diseased stumps and roots of yesterday’s massive trees. The American chestnut has been functionally extinct for nearly a century. A blight, imported from Asia, killed the trees en masse in the early decades of the twentieth century. Its demise had negative ecological effects—wiping out pollinators and crashing the populations of wildlife that depended on the nuts for food—as well as economic and cultural repercussions, for the tree had been a valuable source of timber as well as a symbol of Appalachian cultural identity and economic self-sufficiency.

Botanists and foresters have long dreamed of restoring the chestnut to its native range. Researchers have experimented with spreading a viral pathogen that infects the fungal blight and reduces its virulence. And in recent decades a program to hybridize American chestnuts by backcrossing them with the blight-resistant Chinese chestnut has finally found a degree of success. But now another research program is under way—to produce a genetically engineered blight-resistant tree. The American chestnut is a fast-growing hardwood, and if it could be reintroduced on a large scale, it would be ideal for reforesting Appalachian mountains ravaged by coal mining, for sequestering carbon as a response to climate change, and for increasing the resilience and diversity of forests where ash, hemlock, beech, and other trees are also under attack from pest invasions.

This new research program for the chestnut illustrates the challenges of understanding the peril and potential of genetically modified organisms (GMOs). Some plant scientists and forest advocates hope to reintroduce American chestnut trees to their native range, and the genetically engineered tree stands a good chance of surpassing the hybrids in achieving a high level of blight resistance—but only if the GMO is approved by the Environmental Protection Agency, the Department of Agriculture, and the Food and Drug Administration.

Should it be? Both sides can appeal to data, but at bottom the question is a philosophical one. Opposition to GMOs runs high. Generally, opponents offer three kinds of arguments: potential harm to health and environment, social and economic injustice, and the loss of natural integrity. The first of these is less relevant in the case of the chestnut than for other GMOs that have been up for regulatory approval. That’s because, in comparison with other agricultural GMOs, the chestnut’s greater potential is to repair habitat rather than to harm the environment. The second point, concerning the economic imperialism that has followed corporate control of GMO intellectual property, is a nonissue, since researchers have promised to make the engineered tree lines publicly available for forest restoration. So it’s the last of these—the philosophical issue, whether labeled as such or not—that’s likely to drive public debate.

What does it mean to claim that a GMO has lost its natural genetic integrity? What makes a genome natural or pure? Is there a measure of fidelity to nature? For that matter, why do we aim to be “faithful” to nature at all? One reason to protect genetic integrity is the claim that species are intrinsically valuable, and the unique genetic characteristics of a species make it what it is. Therefore, to protect the value of a species might require that we protect the purity of its genome. There could be validity to the view that the intrinsic value of a species derives from its genetic purity, or that the stability of a genome defines its essence, or that differences between species should be maintained; but these are metaphysical and normative commitments, not scientific statements. Of course, from a scientific point of view, species do change over time, and one species may split into two, or two distinct species may interbreed sufficiently to become one—as is common for plants that hybridize easily.

Indeed, from a scientific standpoint, there’s no clear measure of genetic purity. Restoring the American chestnut through genetic engineering adds about a dozen foreign genes to the 38,000 or so in its genome. Most of those come from Asian chestnut species, and one comes from wheat. The backcrossed hybrid, by contrast, differs from its American chestnut ancestor by about two thousand genes. Which is more true to nature? If the measure is simply the number of genes altered, then the GMO wins. However, if the relatedness of the gene source matters, then the inserted wheat gene, which is likely the one most responsible for defeating the blight, spoils its natural integrity.

Another reason to protect genetic integrity is the view that human-caused changes to a species’ genome are bad and should be avoided. Here, the chestnut GMO can surprise us, since it forces us to put the artificiality of selective breeding on a scale against the artificiality of genetic engineering. Both are ways that human labor has altered nature to further our interests. As it turns out, the Chinese chestnut used in the traditional breeding program is the product of centuries of artificial selection. It’s an orchard tree, shaped by generations of human will. Unlike the American chestnut, it is squat rather than towering, and the seeds are large and meaty, if somewhat flavorless. So which intervention is more potent and takes the species farther from its wild origins—domestication or genetic engineering? These are questions about how we choose to think about the natural, not about what science can tell us is natural.

Then there is the easy objection: if humans use genetic engineering to do whatever they desire, we could wind up with fantastical creatures, unpredictable environments, and strange diseases. But 25 years into our experiment with genetic engineering, the choice is not between rejecting it entirely and saying anything goes. The question here and now is always whether some specific use will reorder the tree of life in an unacceptable way.

There will be a demand for more science to study the safety of GMO chestnuts as food, and there will be worries about lack of genetic diversity in the engineered lines. These demands will be countered by evidence showing that the human food sources of the genes are safe and by plans to continue breeding transgenic chestnuts with the few surviving American chestnut trees in order to increase diversity. The GMO chestnut very likely will pass the regulatory hurdles. But reforestation won’t happen unless the public is convinced to support planting hundreds of thousands of seedlings, and the public won’t provide that level of support unless it can believe in the value of this tree.

The debate requires that we weigh metaphysical concerns about genetic purity against practical and ethical concerns about forest diversity. This could be the first intentional release of a GMO into the wild, not for a profit motive, but in order to reverse a human-caused extinction. It could lead to a wave of new applications for using genetic technologies to help organisms adapt to human-induced environmental change, thereby rebuilding, not undermining, nature’s resilience. Most of all, it leads us to consider how controversy over emerging technologies can obscure larger risks and problems that aren’t amenable to technological solutions. The danger of species losses in our forests has sneaked up on us, and it’s likely that maintaining healthy forests will require using genetic technologies not only to modify tree species but also to control the pests that are killing them. In essence, we can’t afford to miss the value of our forests by getting lost in debates about the trees.

Evelyn Brister is an associate professor of philosophy at the Rochester Institute of Technology.

Rethinking the Social Cost of Carbon Dioxide

The standard benefit-cost methodology that is used to calculate marginal costs of environmental regulations should not be used for long-lasting greenhouse gases.

There is a very big difference between carbon dioxide and conventional air pollutants. Many of the health and ecological effects of conventional pollutants become apparent in days or a few years. Once emissions cease, conventional pollutants disappear from the atmosphere in just hours or days. Hence it is reasonable to base regulatory policy on an estimate of the damage caused by the emission of an incremental amount of conventional air pollution—that is, on the “marginal damage.”

The same is not true for carbon dioxide. A substantial fraction of the carbon dioxide that enters the atmosphere remains there for centuries. Its effects via climate change become apparent only over decades to millennia, and at that point they cannot be reversed by stopping emissions. For this reason, using conventional assessments of marginal damage in benefit-cost analysis to support climate policy fails to consider how little we know about long-term effects of climate change and how these effects should be valued by today’s decision makers.

Nevertheless, a number of estimates have now been made of the dollar value of the climate change damages associated with the emission of an incremental ton of carbon dioxide; these estimates are labeled the “social cost of carbon dioxide” (SC-CO2).

The first serious consideration of using SC-CO2 by a government agency occurred over a decade ago in the United Kingdom, when the Department for Environment, Food and Rural Affairs commissioned a pair of studies. In 2006, determining SC-CO2 was one of the three strategies used by the Stern Review to evaluate the economics of climate change. Although interest in the concept of SC-CO2 has continued in academic circles, the British researchers Paul Watkiss and Chris Hope have reported that following the Stern Review and the adoption of binding targets for greenhouse gas (GHG) emissions, “the approach to carbon valuation in UK government underwent a major review.” Once emission targets were set, the UK government had no further need of SC-CO2 calculations to justify climate policies.

The United States does not have mandatory GHG emission targets. Since 2009, the federal government, under the direction of the Office of Management and Budget (OMB), has developed and refined official values for SC-CO2 to be used by government agencies in regulatory decision making. This attempt at rationalization of US policies related to climate change emerged from a legal challenge to a 2006 Final Rule that set Corporate Average Fuel Economy standards for light trucks for model years 2008-2011. In the proposed new standard, the National Highway Traffic Safety Administration discussed the rule’s likely effect on carbon dioxide emissions. The rule faced the legal challenge that it failed to monetize the benefits from reducing those climate effects and thus violated President Bill Clinton’s 1993 Executive Order 12866 on rulemaking, which, among other things, mandated the use of benefit-cost analysis. In 2008, the United States Court of Appeals for the Ninth Circuit ruled that the highway safety agency’s reasoning for not monetizing the benefits of mitigating emissions was arbitrary and capricious. As a consequence, various federal agencies started to comply with the court’s ruling by monetizing the costs (or benefits) associated with GHG emissions (or their mitigation) in different ways.

In an effort to impose consistency across agencies, an Interagency Working Group on the Social Cost of Carbon (IWG) was formed in 2009. The IWG was charged with producing an estimate of the marginal benefits of carbon dioxide mitigation. The group produced its first recommendations in 2010 and subsequently published updates in 2013, 2015, and 2016. The resulting SC-CO2 estimates are intended to provide a yardstick to assess whether government policies for mitigation of carbon dioxide emissions yield net benefits and allow for different alternatives to be ranked in terms of efficacy and effect. SC-CO2 values are also now widely used outside of government when analysts address technology and policy alternatives that influence the release of GHGs to the atmosphere.

In 2015, the IWG asked the US National Academies to review the SC-CO2 with the objective of guiding future revisions. In early 2017, the study committee released a detailed report that makes recommendations on the choice of models and damage functions, climate science modeling assumptions, socioeconomic and emissions scenarios, presentation of uncertainty, and temporal discounting. Shortly after the report’s release, President Trump signed on March 28, 2017, an executive order on “Promoting Energy Independence and Economic Growth,” which disbanded the IWG and withdrew all of its reports “as no longer representative of governmental policy.” We will return to these two developments later.

The machinery behind the curtain

Conceptually, the IWG computes the social cost of carbon dioxide by running an integrated assessment model (IAM) to assess the present value of the future monetized consequences of climate change. Present value is obtained by using a technique called exponential discounting. Then an additional ton of carbon dioxide is added, and the model is run again. The difference between the two present values is computed and taken to be the SC-CO2. Depending on its assumptions, the IWG has estimated values for the SC-CO2 that range from a few tens of dollars per ton to more than $100 per ton.
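To make the mechanics concrete, here is a minimal sketch in Python of the run-twice-and-difference calculation described above. The damage function, discount rate, and time horizon are illustrative assumptions, not IWG values; real IAMs derive damages from far more elaborate climate and economic machinery.

def present_value(damages, rate=0.03):
    # Exponential discounting of an annual damage stream (dollars).
    return sum(d / (1 + rate) ** t for t, d in enumerate(damages))

def damage_stream(extra_tons, horizon=300):
    # Stand-in damage function: each extra ton imposes a small, long-lived
    # annual cost. An actual IAM computes this from emissions, concentrations,
    # temperature change, and monetized impacts.
    return [0.01 * extra_tons for _ in range(horizon)]

baseline = present_value(damage_stream(extra_tons=0))
perturbed = present_value(damage_stream(extra_tons=1))
sc_co2 = perturbed - baseline  # dollars per incremental ton
print(f"Toy SC-CO2: ${sc_co2:.2f} per ton")

Even this toy version makes the discounting problem visible: at a 3% rate, damages arriving more than a century out contribute almost nothing to the total, no matter how large they are.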

Four things are needed to compute the SC-CO2: a reasonable projection of how future global emissions of GHGs are likely to evolve; a model that estimates how those future emissions will change the climate; a model of all the consequences of that changed climate; and a way to assign monetary values to all those consequences (at least partly so that qualitatively disparate damages may be combined).

Since the early 1990s many researchers have developed increasingly elaborate models of climate change, its dynamics, and its impacts. Some models have tried to integrate across all key elements, from demographics and economics through climate change and effects, in order to deliver a coherent, albeit less detailed, system for policy analysis. There is of course uncertainty about both how future GHG emissions and land use will evolve and how the climate will change as a result. There is even greater uncertainty about the consequences of these changes and how they should be valued. Thus, it is not surprising that Paul Watkiss and Thomas Downing, also a British researcher, reported in a 2008 review that estimates of SC-CO2 “span at least three orders of magnitude, reflecting uncertainties in choices of key parameters/variables.” In 2014, Robert Pindyck, an economist at the Massachusetts Institute of Technology, wrote: “IAMs are of little or no value in evaluating alternative climate change policies and estimating SCC [social cost of carbon]. On the contrary, an IAM-based analysis suggests a level of knowledge and precision that is non-existent, and allows the modeler to obtain almost any desired result.”

Meaningful quantitative valuation not possible

In the 1990s, two of us (Dowlatabadi and Morgan) led the development of one of the first integrated assessment models, called the Integrated Climate Assessment Model (ICAM). This model was designed with the express purpose of reflecting key uncertainties (in model structure, parametrization, and valuation) within internally coherent projections of the drivers, dynamics, and impacts of climate change and of interventions for mitigation, adaptation, and geoengineering. Our experience mirrored Pindyck’s conclusion that IAMs cannot produce quantitative estimates on which policy should be based. However, we believe well-designed and internally consistent IAMs can produce useful qualitative insights about alternative climate policies. After a decade of work on ICAM, we chose to end further development for two reasons: we could not produce trajectories that were internally consistent within ICAM and also matched those produced by the Intergovernmental Panel on Climate Change (IPCC), and when we included structural uncertainties, it became possible to produce almost any outcome. We were also concerned that quantitative results from integrated assessment models such as ours were being used without adequate discussion of the vast uncertainties. Unfortunately, false precision from IAMs is being used to generate quantitative “answers” that have come to serve as an inappropriate foundation for public policies.

As noted above, GHG concentrations are cumulative. How emissions will evolve in the future is unclear and will obviously depend on myriad social choices. In the IPCC’s baseline scenario, the Earth is projected to run out of economically recoverable oil and gas by the 2050s, with coal returning as the dominant primary source of liquid and gaseous fuels. However, renewable energy sources such as solar and wind power are now more economical than fossil energy in many parts of the world. In other parts, coal is being eschewed because of concern about air pollution. Hence, the range of likely future GHG emissions spans the gamut from the gloomy return to coal of the IPCC baseline to far lower figures.

We know that the response of the climate system to changes in radiative forcing (the heat energy added to the atmosphere as a result of increasing GHG concentrations) is nonlinear. Geologic evidence indicates that the Earth has several quasi-equilibrium climate states. The feedbacks that have blessed the planet with a stable “climate optimum” for the past ten thousand years are uncertain in magnitude and operate over a limited range of perturbations. Beyond that range, climate system dynamics may tip to a very different climate state. Nobody can adequately assess the probability and consequences of such climate transitions. If and when such transitions occur, many resulting changes will not be marginal.

Even if we knew all the consequences of changing climate, the idea that one can find an optimal global policy makes little sense given the uneven distribution of costs and benefits around the world and among different stakeholders. Many of these changes will not be marginal in nature. Although side-payments are sometimes proposed, the practicality of such payments rests on the idea that the costs borne by the losers can be meaningfully monetized, the cost of compensating them adequately estimated, and the compensation actually paid. Even in the simple case of inundation through sea level rise, experience with populations displaced from places such as Bikini Atoll and Diego Garcia suggests that compensation for the “value of lost real estate” does not begin to make up for the loss experienced by the affected peoples. The inhabitants of these communities were moved during the Cold War, and their high suicide rates, short life expectancies, and broken social structures make it clear that they have failed to “adapt” to their new locations, even after half a century. The problem grows only more complex when other damages are considered for valuation.

Climate change and its effects will vary by location, ecosystem, and socioeconomic context. The responses of social, economic, and ecological systems are also likely to be nonlinear, with some entering protracted periods of unstable chaos while others undergo rapid transition to conditions fundamentally, not marginally, different from today. We know neither how to characterize such effects nor how they will be valued across different cultures, societies, and future generations. Indeed, monetizing, combining, and discounting these heterogeneous and contextual effects into a single global monetary metric displays a hubris that has been roundly condemned by ethicists and decision analysts.

As noted above, it is possible that the climate system and a number of social, political, economic, and ecological systems can undergo transitions to other states that are not reversible, at least on time-scales relevant to human affairs. Whereas some of the changes could be global or hemispheric in nature (such as dramatic shifts in the El Niño-Southern Oscillation, the Meridional Overturning Circulation, and the Indian monsoon), some will be quite local (such as a long-term change in circulation patterns that makes local rain-fed agriculture possible or impossible in some regions). Tipping points related to effects likely also display a wide range of scales. The range of changed climate patterns and states, along with the range of changed effects, considerably complicates the issue of what constitutes catastrophic change. A change that is viewed as minor by some may be viewed as catastrophic by others.

As we have argued above, nonmarginal effects cannot be translated into marginal damage “costs.” The nonmarginal effects may be local, outside the market, incalculable, and not amenable to compensation. In such cases, the local damage function can be effectively infinite. For example, an ecosystem may be eliminated or a traditional way of life that depends on an ecosystem may disappear. Impact studies incorporate such damages and evaluate them “at the margin,” then aggregate them to form a damage function used to calculate the SC-CO2. But these figures mask the inadequacy of financial compensation for the subjective damages being incurred.

Global damage functions may combine losses of amenities, such as higher air conditioning costs in the US Southwest (which may be large in economic terms but marginal in nature), with losses of natural or human patrimony (to which a small economic value may be attributed, but which are nonmarginal and irreversible). None of the suggested approaches to equity weighting or discounting schemes address such nonmarginal damages.

The ability to choose appropriate policy given uncertainty in costs and benefits has been one of the greatest theoretical achievements of resource economics. When the marginal damage curve is much shallower than the marginal cost curve of mitigation, it is appropriate to use price mechanisms, such as a carbon tax. When uncertain marginal damages are likely steeper than the marginal cost of mitigation, it is appropriate to cap emissions. This is what the United Kingdom (and the European Union) did following the Stern Review. In the United States, the absence of GHG emission targets reflects both the philosophical divide across the Atlantic and the continued reliance on the SC-CO2.

There are two flaws in carbon pricing based on net marginal cost: the first is the assumption that the damages are marginal and the damage curve shallow (clearly not so, given the discussion above), and the second is the assumption that mitigation is costly and its cost curve steep. The second assumption, too, is unsupported by the evidence of energy supply choices over the past decade. Fossil fuels are being rejected for air pollution and energy security reasons. They are also facing stiff economic competition from renewables. In fact, even under the strictest of emission caps, negative emissions can be achieved through the capture of carbon dioxide from the free atmosphere. As one can infer from the recent National Academies’ report Climate Intervention: Carbon Dioxide Removal and Reliable Sequestration, this option is available at a fixed, not rising, marginal cost within the range of calculated SC-CO2 values.

An alternative quantitative approach

For decades both Republican and Democratic administrations have issued executive orders requiring quantitative benefit-cost assessments of major federal regulations. The Interagency Working Group on the Social Cost of Carbon used a complex process to do this (that the National Academies’ recommendations would make even more complex)—and it did produce numbers. But we do not believe these numbers are meaningful, even as they have given agencies values they can plug into their benefit-cost analyses.

The executive orders requiring benefit-cost assessments contain language that says if it is not feasible or appropriate to quantify costs, other approaches can be adopted. However, neither federal agencies nor the courts have demonstrated much willingness to adopt such alternatives, even for assessing civil rights laws such as the Americans with Disabilities Act. Although it is tempting to say that in addressing climate change, OMB should abandon the search for dollar values and adopt some other strategy, such a proposal is not likely to succeed, and in today’s political climate it could further contribute to retrograde policy developments.

In place of using the SC-CO2, we believe that a more defensible method can be based on identifying and avoiding climate change thresholds: temperatures or GHG concentration levels at which damages are likely to become unacceptable. Such an approach satisfies conditions that damage estimates cannot. Its great advantages are that the costs involved in achieving different emissions reduction levels are observable in the marketplace and expressed in metrics that are universally accepted; the costs of emission reductions can be covered through side payments or technology transfers or both; and, since the goal is to transition to an entirely new energy system, the marginal costs may even start falling as the policy progresses. In such systems, richer countries can act as early adopters and drive down the cost of technology, allowing later adopters to make the transition at a lower cost. Early adopters can even see themselves as part of a social movement and view their expenditures not as a cost but as an expression of their commitment to civic responsibility.

In place of developing policies based on a SC-CO2, the European Union has adopted a strategy of setting a cap on member nations’ emissions of carbon dioxide and other greenhouse gases. As outlined below, once such a cap has been established, it is perfectly feasible to back out a dollar cost for eliminating each incremental addition of carbon dioxide or other GHG to the atmosphere.

In the climate negotiations in Paris, which led to the Paris Agreement, most of the world’s political decision makers reached the conclusion that the consequences of global temperature change above 2 degrees Celsius (2°C) were unacceptable. Hence their pledge to limit warming to 2°C (3.6 degrees Fahrenheit) or less. (It must be noted, however, that on June 1, 2017, President Trump announced that he planned to withdraw the United States from the climate accord.) In a series of studies, the Norwegian climate researcher Glen Peters and his colleagues have estimated how much more carbon dioxide can be added to the atmosphere before the average temperature of the planet rises by 2°C. A similar calculation of remaining “atmospheric capacity” can be done for any temperature increase. Such a calculation involves uncertainty, but its range of uncertainty is much narrower, and it is more defensible than SC-CO2 calculations.
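The arithmetic behind such a capacity estimate is, in stylized form, simple. Here is a hedged back-of-envelope sketch assuming warming scales roughly linearly with cumulative emissions (the so-called transient climate response to cumulative emissions); every number below is an illustrative assumption, not an estimate from Peters and his colleagues.

# Back-of-envelope "remaining atmospheric capacity" calculation.
# All parameter values are illustrative assumptions.
TCRE = 0.45            # assumed warming, degrees C per 1,000 GtCO2 emitted
WARMING_TO_DATE = 1.0  # assumed warming to date, degrees C above preindustrial
TARGET = 2.0           # Paris ceiling, degrees C

remaining_gtco2 = (TARGET - WARMING_TO_DATE) / TCRE * 1000
years_left = remaining_gtco2 / 40  # at roughly 40 GtCO2 emitted per year
print(f"Remaining capacity: ~{remaining_gtco2:.0f} GtCO2, ~{years_left:.0f} years")

Published budgets that also account for non-CO2 gases, and that demand a high probability of staying under the target, come out substantially smaller; that difference is part of the (still comparatively narrow) uncertainty noted above.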

Independent of how the remaining atmospheric capacity is allocated among emitting parties, if the planet is going to hold warming below catastrophic levels, the United States, the European Union, China, and all other major emitters will need to reduce their emissions of long-lived GHGs by 80-90% in the next two or three decades. Although two or three decades is a very long time for many firms making investment decisions in a market economy, it is almost instantaneous in terms of institutional change and the turnover of long-lived physical infrastructure. This means that the prospect of holding warming below 2°C looks increasingly dim.

Writing in Nature in 2009, the British climate scientist Myles Allen and his colleagues observed that “either we specify a temperature or concentration target and accept substantial uncertainty in the emissions required to achieve it or we specify emissions and accept even more uncertainty in the temperature response.” One of us (Dowlatabadi) has argued that a target based on atmospheric GHG concentration involves less uncertainty and is more easily implemented.

But these are details. Either way, the path is the same: set some target; estimate an “emissions reduction supply curve”; and from that estimate either an ultimate cost to achieve all of the needed reduction or a per-ton cost that evolves rapidly over time as the world transitions away from a fossil economy. By an emissions reduction supply curve, we mean a plot of cost as a function of the amount of emission reduction. Such a curve starts out negative (there are ways to reduce some emissions while also saving money—for example, with improved energy efficiency and conservation) and then rises, at least for a while, as deeper reductions are required. Over time, technological innovation and managerial experience might slow the rise in cost or even eliminate it.
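As a concrete illustration, here is a minimal sketch of such a supply curve; the linear shape and all parameter values are assumptions chosen only to reproduce the qualitative behavior just described, negative at first and then rising.

# Stylized emissions reduction supply curve: marginal cost in dollars per ton
# as a function of annual abatement in GtCO2 per year. Illustrative only.

def marginal_cost(reduction_gt):
    # Negative at first (efficiency measures that pay for themselves),
    # then rising as deeper reductions are required.
    return -20 + 15 * reduction_gt

def total_cost(target_gt, steps=1000):
    # Total cost of a given reduction = area under the marginal cost curve
    # (trapezoid rule). $/ton times GtCO2 yields billions of dollars.
    dx = target_gt / steps
    ys = [marginal_cost(i * dx) for i in range(steps + 1)]
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

print(f"Cost to abate 8 GtCO2/yr: ~${total_cost(8.0):.0f} billion per year")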

If the target is specified as staying below some specific increase in global average temperature, we will also need a plot of how temperature change will be related to emissions (call it the “warming curve”). Then, combining the emission reduction supply curve with the warming curve will allow one to compute the needed amount of reduction in emissions and hence an average cost per ton of carbon dioxide to stay below a certain temperature change. Both these curves involve some uncertainty, so the resulting cost would actually be a probability distribution. The Office of Management and Budget could use that distribution in its benefit-cost analyses, or it could specify how risk averse it wants to be and choose a single cost point on that distribution.
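A hedged sketch of that combination, propagating the uncertainty in both curves by simple Monte Carlo sampling, might look as follows; every distribution and parameter here is an illustrative assumption, not an estimate from the literature.

import random

def sample_cost_per_ton(target=2.0, trials=10_000):
    # Sample uncertain warming-curve and supply-curve parameters, solve for
    # the abatement needed to stay under the target, and read off a cost.
    costs = []
    for _ in range(trials):
        tcre = max(random.gauss(0.45, 0.10), 0.2)   # degrees C per 1,000 GtCO2
        no_policy_warming = random.gauss(3.5, 0.5)  # degrees C with no action
        slope = max(random.gauss(5.0, 1.0), 0.5)    # $/ton per GtCO2/yr abated
        excess = max(no_policy_warming - target, 0.0)
        cumulative_gt = excess / tcre * 1000        # GtCO2 that must be avoided
        annual_gt = cumulative_gt / 80              # spread over ~80 years
        costs.append(-20 + slope * annual_gt)       # marginal cost at that depth
    return sorted(costs)

dist = sample_cost_per_ton()
print("median $/ton:", round(dist[len(dist) // 2]),
      "| 90th percentile:", round(dist[int(0.9 * len(dist))]))

The point of the sketch is only that the uncertainty is confined to two empirically grounded curves, rather than spread across the long chain of valuation assumptions behind an SC-CO2 estimate.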

An alternative way to specify a target is as a desired emission trajectory over time—the kinds of curves that the Intergovernmental Panel on Climate Change and many others in the climate community have produced in abundance. Having chosen such a trajectory, one can use the emission-reduction supply curve to compute a cost per ton (roughly the equivalent of an emission tax) that would be required to stay on that curve. Of course, if the carbon dioxide that is already in the atmosphere were allocated to the different nations that have emitted it, and the level of economic development that each has achieved were considered, an argument could be made that over the next several decades different blocs of nations should undertake different amounts of reduction as a function of time.

Such strategies do not involve an estimate of the marginal damage arising from each emitted ton of carbon dioxide. Rather than applying future discounting, which makes even the most catastrophic outcome appear small if it falls far enough in the future, they would simply depend on a scientifically informed normative judgment that there is a point beyond which more climate change runs too high a chance of producing catastrophic damage to the planet’s peoples and ecosystems. There are, of course, uncertainties associated with such approaches, but we stress that such frameworks involve far less uncertainty than the SC-CO2 approach, since they do not require the series of assumptions that must be made to coerce a range of disparate damages into a single global monetary metric.

Clearly it is essential that research, development, and deployment for energy technologies be continued to drive down the cost of low- and zero-carbon energy technologies. For the next few years the US Department of Energy may reduce its support for such work, but many states, as well as other nations, will push forward, as will firms that adopt a longer view of likely future market demand. If costs of low-carbon energy technology continue to fall, the net benefits provided by such technologies will increase.

National Academies v. Trump

In a remarkable display of confidence that further refinement of models and methods will make it possible to meaningfully forecast and quantify the consequences of future climate change, the National Academies’ report on the social cost of carbon dioxide endorses the IWG’s basic approach. It recommends that rather than use existing integrated assessment models, a new and improved IAM should be constructed. It argues for “the creation of an integrated modular SC-CO2 framework that provides transparent articulation of the inputs, outputs, uncertainties, and linkages among the different steps of SC-CO2 estimation.” It calls for an improved treatment of “interactions and feedbacks among the modules of the SC-CO2 framework if they are found to significantly affect SC-CO2 estimates” and argues for extending the assessment “far enough in the future to provide inputs for estimation of the vast majority of discounted climate damages.” Last, it calls for greater use of statistical techniques and of expert elicitation to quantify key uncertainties.

Given President Trump’s recent executive order disbanding the IWG, the US government is not likely to undertake such an effort in the next few years, nor is it likely to actively support efforts to reduce GHG emissions. However, others are talking about using private funding to continue refining the SC-CO2 framework. Given the many needs facing the US climate research communities, we do not believe that it makes sense to invest scarce funds in further refining the SC-CO2. Yet even though approaches other than the SC-CO2 should be pursued to guide policy, the present inaction by the US federal government is a grave mistake. The window to limit warming to anything like 2°C is rapidly closing.

Although President Clinton’s Executive Order 12866 required that “agencies should assess all costs and benefits,” it does recognize that “some costs and benefits are difficult to quantify.” In such cases, it requires that agencies act based “upon a reasoned determination that the benefits of the intended regulation justify its costs.” Congress has not enshrined the current SC-CO2 approach in statute. The requirement that some monetary value be assigned to greenhouse gas emissions stems from the Ninth Circuit Court of Appeals’ decision, which does not specify a method by which this value should be arrived at, asserting only that “the value of carbon emissions reduction is certainly not zero.” The courts are therefore likely to give the executive branch, including agencies, considerable latitude in determining how to implement Executive Order 12866.

As an alternative to further refining the SC-CO2 framework, we believe a group of private foundations, corporations, and others—perhaps in collaboration with several supportive state governments (and the federal government if in the future it could be persuaded to participate)—should undertake a serious cooperative effort in policy-focused research designed to develop cost estimates derived from the creation of a cap on future warming or future US greenhouse gas emissions. Choosing caps is inherently normative. Hence, as a second phase of such an effort, we believe that a high-level national commission should be assembled, made up of thoughtful citizens including but not limited to climate scientists, economists, ecologists, technology experts, and ethical leaders. Building on the policy analytic work just outlined, this commission should be charged with reviewing the scientific evidence on the consequences of climate change and the costs of emission controls, as well as with developing recommendations for the choice of caps for the United States. We believe that the courts could be persuaded that policies that emerge from such an exercise are based on “a reasoned determination that the benefits of the intended regulation justif[ies] its costs.”

There is an urgent need to take serious action now to reduce emissions of carbon dioxide and other greenhouse gases. At the federal level, the United States may not make much progress on reducing its emissions of carbon dioxide in the next few years, but we should not let those years be wasted. Many states and cities are taking action now. Within a few years the federal government (and the Office of Management and Budget) may once again become serious about controlling emissions. When that happens, we should have already laid the foundations for a system, more defensible than the SC-CO2 framework, that OMB and others can use to drive emissions reductions. Emitting a ton of carbon dioxide to the atmosphere causes damage. We may not be able to defensibly monetize the damage done by each ton of emissions, but the evidence is clear that it is high and growing higher with each passing year.

M. Granger Morgan, Parth Vaishnav, and Inês L. Azevedo are on the faculty of the Department of Engineering and Public Policy at Carnegie Mellon University. Hadi Dowlatabadi is on the faculty of the Institute for Resources, Environment and Sustainability at the University of British Columbia in Vancouver, Canada.

Recommended reading

Interagency Working Group on Social Cost of Carbon, Technical Update of the Social Cost of Carbon for Regulatory Impact Analysis Under Executive Order 12866 (2016).

Report of the Committee on Assessing Approaches to Updating the Social Cost of Carbon, Valuing Climate Damages: Updating Estimation of the Social Cost of Carbon Dioxide, Board on Environmental Change and Society (Washington, DC: National Academies Press, 2017).

Presidential Executive Order on Promoting Energy Independence and Economic Growth (March 28, 2017).

National Research Council, Climate Intervention: Carbon Dioxide Removal and Reliable Sequestration (Washington, DC: National Academies Press, 2015).

G. P. Peters, R. M. Andrew, S. Solomon, and P. Friedlingstein, “Measuring a fair and ambitious climate agreement using cumulative emissions,” Environmental Research Letters 10, no. 10 (2015).

H. Dowlatabadi, “Bumping against a gas ceiling,” Climatic Change 46, no. 3 (2000): 391-407.

The Age of Weaponized Narrative, or, Where Have You Gone, Walter Cronkite?

When I was in college many years ago, the concept of “narrative” was simple: it was a story told by a literary character, or, more broadly, the story itself. Starting with the French literary theorist and philosopher Roland Barthes and others in the 1970s, however, narrative was turned into a far more complex idea, as social scientists and humanists began to appreciate that stories structured reality, created and maintained identity, and provided meaning to people, institutions, and cultures. Political organizers, activists, and others learned to use narratives of oppression and marginalization to attack dominant cultural narratives of elites, while companies learned to generate narratives that supported their brands. Eventually, nations began to see narrative as a tool of foreign policy that they could use to undermine their enemies: weaponized narrative.

The easiest way to see how narrative works is to look at popular advertising. Pepsi, for example, once urged young counterculturalists to “Come alive! You’re in the Pepsi Generation!” As the media expert Tim Wu noted: “Pepsi, of course, did not create the desire for liberation in various matters from music to sex to manners and dress. Rather, it had cleverly identified with a fashionable individualism …. For ultimately what the Pepsi Generation were consuming wasn’t so much cola as an image of themselves.” But archrival Coca-Cola was no slouch at narrative, either: as The Economist notes, “it was Coca-Cola that popularized the image of Santa in the 20th century.”

These bubbly examples illustrate in a simple way several of the underlying principles that guide the way narrative is understood and deployed today. First, narrative is a highly adaptable strategy that can be applied in a wide variety of contexts—from soft drinks to soft power. Second, as with any tool applied to achieve a competitive edge, those who seek to wield narrative in contested settings are quick to adopt new knowledge that can improve performance—in this case, quickly and effortlessly incorporating new research or findings even from academic fields such as neuroscience, evolutionary psychology, and behavioral economics. Third, narratives become strategically useful when they are not just stories, but when they draw on or create the frameworks from which societies, cultures, and individuals derive their identity and thus meaning—as consumers, as political actors, as individuals, as citizens. And finally, narrative is power: it is a vehicle for manipulating individuals so that they are more inclined to do what you want, not because you have forced them to, but because you have convinced them that they want to do what you want them to.

Consider: On May 13, 2017, a small group of alt-right protesters led by the white supremacist Richard Spencer gathered in Charlottesville, Virginia, to protest a decision to move a statue of Confederate Gen. Robert E. Lee. Among the crowd’s chants was “Russia is our friend!” This might seem absurdly irrelevant, but it is actually a measure of the success of the campaign that Russia has waged for several years to develop a favorable narrative among the global alt-right.

A second example involves the political consulting group Cambridge Analytica, a big data mining and analytics firm that among other jobs worked on President Trump’s election campaign; similar firms worked on the Brexit campaign in the United Kingdom. Based on the enormous amounts of data that can be accumulated on each voter, some people claim that such firms have the ability to target select voters with customized individual narratives based on their personal data profiles in order to manipulate their political choices and their decision whether to vote or stay home. Experts disagree on whether these techniques were decisive in the Brexit vote or the US election, but that is beside the point. Technological evolution, in this case involving big data and analytics fed by social media and online data aggregation techniques, is rapidly developing the ability to custom-design narratives that can effectively manipulate political behavior on an individual basis. If it isn’t already here, it will be soon.

Story is power

Weaponized narrative is the use of information and communication technologies, services, and tools to create and spread stories intended to subvert and undermine an adversary’s institutions, identity, and civilization, and it operates by sowing and exacerbating complexity, confusion, and political and social schisms. It is an emerging domain of asymmetric warfare that attacks the shared beliefs and values that support an adversary’s culture and resiliency. It builds on previous practices, including disinformation, information warfare, psychological operations (psyops), fake news, social media, software bots, propaganda, and other practices and tools, and it draws on advances in fields such as evolutionary psychology, behavioral economics, cognitive science, and modern marketing and media studies, as well as on technological advances in domains such as social media and artificial intelligence.

Given the nascent state of the art and the rapid evolution of the relevant science, technology, and geopolitical and cultural trends, our definition is necessarily vague, but it does enable clarification of a few important points. First, commercial and nongeopolitical narratives are generally excluded, although of course the insights from such domains can be rapidly integrated into weaponized narratives. Second, narratives intended for internal audiences, either to consolidate or maintain power, are excluded. The Nazi Germany and Soviet examples of the Big Lie, or modern examples such as the narratives of Mother Russia and religious orthodoxy supporting Russian president Vladimir Putin’s regime, are thus excluded. Narratives often serve multiple purposes, however. For example, the Russian narratives deployed in Eastern Ukraine, including the idea of Russia as a Eurasian empire, Ukraine as an integral part of greater Russia (often labeled “Novorossiya”), and the rebuilding of an Eastern Orthodox/Mother Russia power, were intended both to facilitate the invasion of Ukraine and Crimea, a weaponized narrative deployment that fits within our definition, and to support internal Russian narratives of the resurgence of Russia as a respected world power, which falls outside of our definition.

Weaponized narrative operates at both the tactical and strategic levels. At the tactical level, the goal could be to debilitate potential adversaries without resorting to conventional kinetic warfare. At the strategic level, weaponized narrative is a major means by which otherwise powerful adversaries can be weakened over time so that their ability to interfere with the attacking entity’s plans and interests is reduced or eliminated. Russia’s use of weaponized narrative as part of an integrated Ukrainian invasion is an example of the former; Russia’s broad interference in US and European elections in a long-term effort to weaken and divide the West is an example of the latter.

Weaponized narrative is facilitated by a diverse kit of tools and techniques. Some of these, such as character assassination, creation of fake news outlets (“sockpuppet websites”), and planting false stories, are the traditional stuff of propaganda and disinformation campaigns, but can be much more effective given today’s information technologies; others, such as waves of social media spreading false memes at lightning speed through botnets, are new. Each confrontation or campaign is unique and will thus call forth a different mix of techniques and tools.

Nonetheless, it is possible even at this preliminary point to differentiate between the tactics and methods that are a part of weaponized narrative and its strategic deployment. On the tactical side are such tools as “troll farms” that disrupt online communities by sowing racial, social, and ethnic tension in target societies; timed and selective release of stolen internal documents and e-mails to influence an election; designer narrative packages enabled by data mining and big data techniques targeted at individuals; or activities and campaigns intended to weaken reliable media in target countries. In contrast, an example of the strategic deployment of weaponized narrative using varied and shifting social, cultural, ethnic, and disinformation tools might be the long-term suborning of Baltic and Eastern European states by Russia.

The domain of weaponized narrative is not yet stable or predictable; rather, it is in a period of wild experimentation. For example, the self-proclaimed Islamic State (ISIS) has used modern, but not breakthrough, media techniques to develop a message that appeals to alienated Islamic youth, one of its target markets. Russia developed its weaponized narrative capabilities, as it did its internal narratives of Mother Russia and the Eurasian Empire, by rapid prototyping, testing, and revision. Russia probably did not expect its weaponized narrative campaign deployed during the recent US election to actually elect Donald Trump. But it likely regarded as a victory anything that weakened the moral authority and soft power of the United States and correspondingly could be positioned as validating the soft authoritarianism of the Putin government and the global importance of the Russian state.

Weaponized narrative is an ideal asymmetric strategy for adversaries of the United States that find themselves unable to compete in conventional warfare. It enables projection of power without significant risk of triggering conventional military responses; it favors offense over defense, as many cyber-based weapons do; and it is inexpensive. It is particularly useful for a country such as Russia, with a weak petro-state economy, to use against the United States and Europe; moreover, because of Russia’s Marxist and Soviet history, disinformation and information warfare techniques are part of the state’s experiential DNA, giving it a strong base of relevant experience on which to build the new capabilities that enable weaponized narrative. Cyberweapons such as bot armies, troll factories, and deceptive sockpuppet websites are far cheaper than traditional munitions. Moreover, success doesn’t require constructing a coherent counternarrative; it’s sufficient to cast doubt on existing narratives and attack existing institutions such as the media or security agencies. And the increasing political and social fragmentation in many European countries and the United States only makes this easier, as it enables a sophisticated attacker to nudge groups to respond in ways that they take to be patriotic and self-evident, but that are the result of deliberate manipulation. Witness the demonstrators in Charlottesville shouting “Russia is our friend.”

Not the same old disinformation

Although there’s a goodly amount of traditional information warfare deployed in today’s conflicts, current trends suggest that weaponized narrative is arising during a unique historical shift that makes it particularly effective as a weapon of choice against otherwise conventionally well-armed adversaries. In the long run, in fact, the United States may be uniquely vulnerable. To understand this, consider some of the relevant trends and their implications, which taken together make it likely that the changes enabling weaponized narrative are fundamental rather than either episodic or matters of scale.

Begin with the observation that individuals, their institutions, and their societies and cultures may be many things, but one thing they all are is information-processing mechanisms. Change the information environment dramatically, and you change how societies function. Accordingly, perhaps the most important trend pertinent to the rise of weaponized narrative is the dramatic increase in volume, velocity, and variety of information to which virtually every person around the globe is exposed. In 2014, for example, the marketing communications expert Susan Gunelius found that every minute Facebook users shared nearly 2.5 million pieces of content; Twitter users tweeted nearly 300,000 times; Instagram users posted nearly 220,000 new photos; YouTube users uploaded 72 hours of new video content; Apple users downloaded nearly 50,000 apps; e-mail users sent over 200 million messages; and Amazon generated over $80,000 in online sales.

And that was three years ago. This growing stream of information is increasingly augmented by tools such as bot armies, targeted news and designed facts, and social media structures that don’t just network people into like-minded bubbles, but create an environment where everyone has an enhanced opportunity to seek out, select, and align facts and information to support the community narrative that they find most appealing—and a reduced need or incentive to integrate, or even be aware of, other ways of organizing knowledge and information into coherent narratives. As the surrounding information environment continues to grow in complexity, the results can include social fragmentation; substitution of moral condemnation for reasoned argument; increased fundamentalism as individuals retreat from complexity into strong, familiar, identity-supporting narratives; the rise of ring-fenced communities that reject the legitimacy of any who oppose them; and golden opportunities for adversaries who wish to use weaponized narrative not to conquer but to weaken and fragment—and to legitimize their own internal narratives by contrast.

Meanwhile, geopolitical shifts resonate with changes in the information environment to encourage further retreat to fundamentalism and institutional failure. For example, after World War II few questioned the ethical principles of the victors, especially the United States, which were consequently enshrined in the United Nations’ Universal Declaration of Human Rights in 1948. But the “universal values” appearing in that document have turned out to be not so universal after all: Russia, China, and a number of Islamic entities now reject them. China, for example, in a 2013 policy report titled “Document 9: Communique on the Current State of the Ideological Sphere,” called Western constitutional democracy “an attempt to undermine the current leadership and the socialism with Chinese characteristics system of governance” and asserted that promoting Western “universal values” is “an attempt to weaken the theoretical foundations of the Party’s leadership.” ISIS and jihadist Islam reject any secular form of government, including the nation-state, which does not reflect their interpretation of scripture. Institutionally, private military companies, large multinationals, and nongovernmental organizations of all stripes increasingly function as independent power centers.

One result is that large areas of the world, especially in sub-Saharan Africa and the Middle East, increasingly lapse into what the foreign policy expert Sean McFate calls “durable disorder,” a neomedieval devil’s brew of religions, ideologies, clans, governments, armed activists, and various internal and external powers. In short, individual commitment to larger state and social identities is weakening. The state-based Westphalian system of international law and institutions, although still dominant in many ways, is failing, and it is being replaced by a complex pastiche of private, public, non- and quasi-governmental, and ad hoc institutions, power centers, and interests. Geopolitics is growing ever more complex even as the societies and institutions that must manage it are retreating into more simplistic worldviews and narratives. Each outbreak of fundamentalism or nativistic nationalism reflects its own idiosyncratic environment, yet the tides are global and inclusive.

Another geopolitical trend of importance is the development of new strategies by potential adversaries in response to US dominance of conventional military capabilities. Russia and China in particular have emphasized a shift to asymmetric warfare, with strategies that extend the zone of warfare far beyond traditional combat to engage across cultures and civilizations as a whole. Thus, China has adopted “unrestricted warfare,” and Russia “hybrid warfare”; in both cases, weaponized narrative becomes an explicit part of acceptable strategy, and one that can be deployed in the absence of any traditional war.

Again, such formulations are not completely sui generis. The Cold War and various insurgencies have included cultural and ideological as well as military confrontation, and much of the Cold War was fought in what might be called a demilitarized zone of competing cultural narrative (Western imperialism versus the domino theory), technological competition (such as the space race), and client states. Nonetheless, especially given the new tools and weapons that cyber and artificial intelligence/big data/analytics technologies make possible, the implications of large and well-organized states redefining conflict to include entire cultural, financial, and political landscapes as battlespaces are profound. China, for example, has used financial attacks to sap the strength of adversaries, and Russia—a media-savvy, morally relativist state par excellence—is rapidly developing significant expertise in weaponized narrative that enables it to use modern media, disinformation techniques, and information and communication technologies in ways that would not trigger a conventional military response.

These trends strongly suggest that the global erosion of the power of a few ruling narratives, protected by the power of a small number of dominant states or cultures, will continue. Qualitative changes in information environments and technologies are accelerating, not slowing. Rather than a return to simple, strong national and cultural narratives, current patterns and information structures suggest that it is more likely that the future will see multiple competing narratives at all scales—what might be termed “narrative neomedievalism”—as the norm.

A new exceptionalism?

Meeting the challenges of weaponized narrative involves two separate tracks. The first, the operational, short-term track, requires an assessment of the challenge to US society and institutions posed by the current situation. As NATO analyst Keir Giles warns, “recent Russian activities in the information domain would indicate that Russia already considers itself to be in a state of war.” The sense of urgency such an observation implies is still somewhat lacking in Europe and the United States. Moreover, it is not just the offensive but the defensive responses that need attention, a particular problem since defending against weaponized narrative is more difficult and complex than mounting an offense. But even in the near term, the challenges are significant. Weaponized narrative combined with hybrid or unrestricted warfare strategies is not just a military threat; its targets and theaters of operation cut across all aspects of society, from finance to infrastructure to personal information. WikiLeaks, internal media, Cambridge Analytica, theft of personal data, integration of criminal and state cyberespionage assets, bot armies supporting alt-right Twitter feeds and websites, media spoofs, and sockpuppet sites are all nonmilitary, and most engage private firms and infrastructure. That’s part of why the West doesn’t understand weaponized narrative and is having a hard time responding: it jumps legal and operational domains, especially the Constitutional divide between civilian and military functions, and the equally strong differentiation between the private and public spheres.

The longer-term track is existential. Since the Cold War, neither Russia nor China nor any other entity has had the traditional military capability to overpower the United States. Rather, the danger is that the cultural, intellectual, and institutional assumptions and frameworks on which the United States and Europe are based are becoming obsolete; in this sense, weaponized narrative is simply one indicator, albeit an important one, of this process. The United States especially faces a unique challenge because it is the world’s leading Enlightenment power, founded on the principles of applied rationality, balance of power, and individual rights voiced by philosophers such as Voltaire, Locke, and Montesquieu. The founding fathers of the US experiment were deeply influenced by and committed to Enlightenment thought. Rule of law, separation of military and civil spheres, and an emphasis on the primacy of an informed, educated citizen are hallmarks of Enlightenment governance. If it is the case, then, that trends such as qualitatively different information environments are resulting in citizens and voters who increasingly locate themselves outside the dominant cultural narratives, changes that are in turn enabling manipulation of governance systems outside of the legal and institutional structures, then the challenge may indeed be to the very survival of the post-World War II Western world order.

Thus, even as incremental and immediate responses to cyberattacks and disinformation campaigns are required, the real challenge is to establish cultural practices and government institutions that are consistent with Enlightenment principles and at the same time adapted to a rapidly evolving information environment. Such adaptation is doable, but it requires a clear vision of how individual psychology, institutional competency, and cultural structure are being affected by information technology. Developing and supporting mainstream media can be an important counter to the alternate facts that support confusion, and thus vulnerability, in target societies. We will never return to the media environment of the twentieth century, where an individual such as the newscaster Walter Cronkite could be an almost universally trusted source of information, but it is nevertheless important in the near term to restore faith in quality journalism.

Addressing the deeper, longer-term threat requires, first, that we understand weaponized narrative. In the face of a set of new weapons and new strategies, it would be foolish in the extreme to simply continue business as usual, either conceptually or institutionally. Remembering that it took years before analysts developed a stable strategic framework for managing nuclear weapons (or steel-hulled ships, or gunpowder, or metal stirrups), we should not expect this understanding to be achieved easily or without cost.

Second, the source of US power has historically not been just economic or military. Rather, it has been the soft power of the American Dream, the attractiveness of a culture that within its clear and explicit laws lets you be whatever you wish and accomplish what you can. The energy, the optimism, and the simplicity of such soft power, underlain by a trust in US institutions and their essential goodness, have been fading since the Vietnam War. No great power stays great without its exceptionalist narrative, and the US narrative needs rebooting. Persistent problems such as lack of economic mobility, smoldering racial tensions, and intolerance of immigrants cannot be ignored. A new US exceptionalism, one that fits a far more complex world and prepares citizens for living and working in periods of unprecedented technological and concomitant social and economic change, is required. In short, if the Shining City on the Hill is to remain a beacon, its unifying narrative must be revived.

But it cannot be simply an exercise in historical restoration. It must be updated for a new cultural and technological age. Old assumptions have been overthrown, and as Marx famously noted in the Communist Manifesto, all that is solid melts into air. The immediate assaults of weaponized narrative must be countered now, but the fundamental challenge is for the United States to create the institutions and the culture that can perform ethically, responsibly, and rationally in a transformed world, just as the nation’s founders did centuries ago.

From this perspective, the nation’s comparative advantage is unlikely to lie at the national level, where politics, in part reflecting the effects of weaponized narrative, is degraded and ineffective. Instead, the nation should look toward bolstering its historical commitments to decentralized governance and power, in particular the agility and adaptability of state and city governments and of private firms, which are better equipped to react to rapid and unpredictable change in ways that enhance US soft power and its attractiveness to audiences around the world. Such civic experimentation turns the strength of US pluralism toward the recognition and regeneration of common interests and a common future, and thus demonstrates once again for all citizens the power of shared narrative.

Should Artificial Intelligence Be Regulated?

New technologies often spur public anxiety, but the intensity of concern about the implications of advances in artificial intelligence (AI) is particularly noteworthy. Several respected scholars and technology leaders warn that AI is on the path to turning robots into a master class that will subjugate humanity, if not destroy it. Others fear that AI is enabling governments to mass produce autonomous weapons—“killing machines”—that will choose their own targets, including innocent civilians. Renowned economists point out that AI, unlike previous technologies, is destroying many more jobs than it creates, leading to major economic disruptions.

There seems to be widespread agreement that AI growth is accelerating. After waves of hype followed by disappointment, computers have now defeated chess, Jeopardy, Go, and poker champions. Policymakers and the public are impressed by driverless cars that have already traveled several million miles. Calls from scholars and public intellectuals for imposing government regulations on AI research and development (R&D) are gaining traction. Although AI developments undoubtedly deserve attention, we must be careful to avoid applying too broad a brush. We agree with the findings of a study panel organized as part of Stanford University’s One Hundred Year Study of Artificial Intelligence: “The Study Panel’s consensus is that attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”

One well-known definition is: “Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” A popular understanding of AI is that it will enable a computer to think like a person. The famous Turing test holds that AI is achieved when a person is unable to determine whether a response to a question he or she asked was made by a person or a computer. Others use the term to refer to computers that use algorithms to process large amounts of information, draw conclusions, and learn from their experiences.

AI is believed by some to be on its way to producing intelligent machines that will be far more capable than human beings. After reaching this point of “technological singularity,” computers will continue to advance and give birth to rapid technological progress that will result in dramatic and unpredictable changes for humanity. Some observers predict that the singularity could occur as soon as 2030.

One might dismiss these ideas as the province of science fiction, were it not for the fact that these concerns are shared by several highly respected scholars and tech leaders. An Oxford University team warned: “Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime)…the intelligence will be driven to construct a world without humans or without meaningful features of human existence. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.” Elon Musk, the founder of Tesla, tweeted: “We need to be super careful with AI. Potentially more dangerous than nukes.” He added: “I’m increasingly inclined to think there should be some regulatory oversight [of AI], maybe at the national and international level.” Oxford philosopher Nick Bostrom believes that just as humans out-competed and almost completely eliminated gorillas, AI will outpace human development and ultimately dominate.

Attorney and legal scholar Matthew Scherer calls for an Artificial Intelligence Development Act and the creation of a government agency to certify AI programs’ safety. The White House organized four workshops on AI in 2016. One of the main topics: does AI need to be regulated?

The AI community has not been indifferent to these concerns. In 2009, the president of the Association for the Advancement of Artificial Intelligence appointed a panel of leading members to examine “the value of formulating guidelines for guiding research and of creating policies that might constrain or bias the behaviors of autonomous and semi-autonomous systems so as to address concerns.” Some called for a pause, but in the end the AI researchers decided that there was not yet any reason for concern or for halting research.

As we see it, the fact that AI makes machines much smarter and more capable does not make them fully autonomous. We are accustomed to thinking that if a person is granted more autonomy—inmates released from jails, teenagers left unsupervised—they may do wrong because they will follow their previously restrained desires. In contrast, machines equipped with AI, however smart they may become, have no goals or motivations of their own. It is hard to see, for instance, why driverless cars would unite to march on Washington. And even if an AI program came up with the most persuasive political slogan ever created, why would it put an AI-equipped computer forward as a candidate for the presidency? Science fiction writers might come up with ways intelligence can be turned into motivation, but for now, such notions probably should stay where they belong: in the movies.

One must further note that regulating AI on an international level is a highly challenging task, as the AI R&D genie has already left the bottle. AI work is carried out in many countries, by large numbers of government employees, business people, and academics. It is used in a great variety and number of machines, from passenger planes to search engines, from industrial robots to virtual nursing aids.

Most important, one must take into account that restrictions on the development of AI as a field are likely to impose very high human and economic costs. AI programs already help detect cancer and reduce the risk of airplane collisions, and they are embedded in the software of old-fashioned (that is, nonautonomous) cars, making those cars much safer.

In a study in which a robot and human surgeons were given the same task (to sew up part of an intestine that had been cut), the robot outperformed the humans. Although the surgeons did step in to assist the Smart Tissue Autonomous Robot in 40% of the trials, the robot completed the task without any human intervention 60% of the time, and the quality of its stitches was superior.

AI is used in search and rescue missions. Here algorithms survey aerial footage of disaster zones to quickly identify where people are likely to be stranded, and the increased speed means that there is a better chance that the victims will be found alive.

AI-equipped robots are used in child, elder, and patient care. For example, there are robotic “pets” used to reduce stress for elderly patients with dementia. The pets are programmed to learn how to behave differently with each patient through positive and negative feedback from the patients. AI is also used in the development of virtual psychotherapists. People appear more willing to share information in a computer interview because they do not feel judged the same way they might in the presence of a person.

Computerized personal assistants such as Apple’s Siri, Microsoft’s Cortana, and Amazon’s Alexa use AI to learn from their users’ behavior how to better serve them. AI is used by all major credit card companies in fraud detection programs. Security systems use AI programs to surveil multiple screens from security cameras and detect items that a human guard often misses.

One must weigh the losses that would occur in all these areas, and in many others, if AI research were hindered as part of hedging against singularity. It follows that although there may be some reasons to vigilantly watch for signs that AI is running amok, for now, the threat of singularity is best left to deliberations during conferences and workshops. Singularity is still too speculative to be a reason at this time to impose governmental or even self-imposed controls to limit or slow down development of AI across the board.

Autonomous killing machines?

In contrast, suggestions to limit some very specific applications of AI seem to merit much closer examination and action. A major case in point is the development of autonomous weapons that employ AI to decide when to fire, with how much force to apply, and on what targets.

A group of robotics and AI researchers, joined by public intellectuals and activists, signed an open letter that was presented at the 2015 International Conference on Artificial Intelligence, calling for the United Nations to ban the further development of weaponized AI that could operate “beyond meaningful human control.” The letter has over 20,000 signatories, including Stephen Hawking, Elon Musk, and Noam Chomsky, as well as many of the leading researchers in the fields of AI and robotics. The petition followed a statement in 2013 by Christof Heyns, the UN special rapporteur on extrajudicial, summary, or arbitrary executions, calling for a moratorium on testing and deploying armed robots. Heyns argued that “A decision to allow machines to be deployed to kill human beings worldwide, whatever weapons they use, deserves a collective pause.”

A pause in developing killing machines until the nations of the world come to agree on limitations on the deployment of autonomous weapons seems sensible. Most nations of the world have signed the Treaty on the Non-Proliferation of Nuclear Weapons, which was one major reason that several nations, including South Africa, Brazil, and Argentina, dropped their programs to develop nuclear weapons and that those who already had them reduced their nuclear arsenals. Other relevant treaties include the ban on biological and chemical weapons and the ban on landmines.

We note, though, that these treaties deal with items where the line between what is prohibited and what is not covered is relatively clear. When one turns to autonomous weapons, such a line is exceedingly difficult to draw. Some measure of autonomy is built into all software that uses algorithms, and such software is included in numerous weapon systems. At this point, it would be beneficial to discuss three levels of autonomy for weapons systems. Weapons with the first level of autonomy, or “human-in-the-loop systems,” are in use today and require human command over the robot’s choice of target and deployment of force. Israel’s Iron Dome system is an example of this level of autonomy. The next level of weapons, “human-on-the-loop systems,” may select targets and deploy force without human assistance. However, a human can override the robot’s decisions. South Korea has placed a sentry robot along the demilitarized zone abutting North Korea whose capabilities align with this level of autonomy. Finally, there is the level of fully autonomous weapons that operate entirely independent of human input. It seems worthwhile to explore whether the nations of the world, including Russia, China, and North Korea, can agree to a ban on at least fully autonomous weapons.
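To make the distinction concrete, the three levels can be sketched in a few lines of code (a minimal illustration in Python; the names and the gating logic are ours, invented for exposition, and drawn from no actual weapons system):

    from enum import Enum

    class AutonomyLevel(Enum):
        HUMAN_IN_THE_LOOP = 1  # human commands target choice and force (e.g., Iron Dome)
        HUMAN_ON_THE_LOOP = 2  # system acts alone, but a human may override (e.g., DMZ sentry)
        FULLY_AUTONOMOUS = 3   # operates entirely independent of human input

    def may_engage(level: AutonomyLevel, human_approved: bool, human_vetoed: bool) -> bool:
        """Show where the human gate sits at each level -- and where it disappears."""
        if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
            return human_approved      # nothing happens without explicit approval
        if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
            return not human_vetoed    # proceeds on its own unless a human overrides
        return True                    # no human gate at all: the level a ban might target

    print(may_engage(AutonomyLevel.HUMAN_ON_THE_LOOP, human_approved=False, human_vetoed=False))  # True

In the sketch, a treaty line at full autonomy is a single branch; in deployed software, where some measure of autonomy pervades every algorithm, the line is far harder to locate.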

We suggest that what is needed, in addition, is a whole new AI development that is applicable to many if not all so-called smart technologies. What is required is the introduction into the world of AI of the same basic structure that exists in practically all non-digital systems: a tiered decision-making system. On one level are the operational systems, the worker bees that carry out the various missions. Above that are a great variety of oversight systems that ensure that the work is carried out within specified parameters. Thus, factory workers and office staff have supervisors, businesses have auditors, and teachers have principals. Oversight AI systems—we call them AI Guardians—can ensure that the decisions made by autonomous weapons stay within a predetermined set of parameters. For instance, the weapons would not be permitted to strike the scores of targets banned by the US military, including mosques, schools, and dams. Nor should they be permitted to rely on intelligence from only one source.
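As a rough illustration of what such an oversight tier might look like, consider this sketch (the parameter values are our own invented assumptions, and the banned categories shown are only a subset of those just mentioned):

    # A minimal sketch of an AI Guardian vetting an operational system's proposal.
    # All parameter values here are illustrative assumptions, not actual policy.
    BANNED_TARGET_CATEGORIES = {"mosque", "school", "dam"}
    MIN_INDEPENDENT_SOURCES = 2  # no targeting on single-source intelligence

    def guardian_approves(proposal: dict) -> bool:
        """Oversight tier: veto any decision that falls outside the preset
        parameters, regardless of what the operational tier has decided."""
        if proposal["target_category"] in BANNED_TARGET_CATEGORIES:
            return False
        if proposal["independent_sources"] < MIN_INDEPENDENT_SOURCES:
            return False
        return True

    # The operational tier proposes; the guardian disposes.
    print(guardian_approves({"target_category": "school", "independent_sources": 3}))   # False
    print(guardian_approves({"target_category": "vehicle", "independent_sources": 1}))  # False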

To illustrate that AI Guardians are needed for all smart technologies, we cite one example: driverless cars. These are designed as learning machines that change their behavior on the basis of their experience and new information. They may note, for instance, that old-fashioned cars do not observe the speed limits. Hence, the driverless cars may decide to speed as well. The Tesla that killed its passenger in a crash in Florida in 2016—the first known death attributed to a driverless car—was traveling nine miles per hour over the speed limit, according to investigators from the National Transportation Safety Board. An oversight system would ensure that the speed-limit parameter is not violated.
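The same guardian pattern, sketched for the driverless-car case (again purely illustrative; the averaging rule below is a deliberately crude stand-in for whatever the car’s learning system actually does, and the limit value is an assumption):

    SPEED_LIMIT_MPH = 65.0  # the parameter the guardian enforces; value is illustrative

    def learned_speed(surrounding_speeds_mph: list) -> float:
        # An operational system that imitates surrounding traffic can "learn" to speed.
        return sum(surrounding_speeds_mph) / len(surrounding_speeds_mph)

    def guarded_speed(surrounding_speeds_mph: list) -> float:
        # The guardian clamps the learned choice to the legal parameter.
        return min(learned_speed(surrounding_speeds_mph), SPEED_LIMIT_MPH)

    print(guarded_speed([72.0, 70.0, 74.0]))  # 65.0, not 72.0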

One may argue that rather than another layer of AI, human supervisors could do the job. The problem is that AI systems are an increasingly opaque black box. As Viktor Mayer-Schönberger and Kenneth Cukier note in their book Big Data: A Revolution That Will Transform How We Live, Work, and Think, “Today’s computer code can be opened and inspected … With big-data analysis, however, this traceability will become much harder. The basis of an algorithm’s predictions may often be far too intricate for most people to understand.” They add that “the algorithms and datasets behind them will become black boxes that offer us no accountability, traceability, or confidence.” Jenna Burrell from the School of Information at the University of California, Berkeley, distinguishes three ways that algorithms become opaque: intentional opacity, where, for example, a government or corporation wants to keep certain proprietary algorithms secret; technical illiteracy, where the complexity and function of algorithms are beyond the public’s comprehension (and, we add, even beyond that of experts unaided by AI); and scale of application, where “machine learning” or the number of different programmers involved, or both, renders an algorithm opaque even to the programmers. Hence, humans will need new, yet-to-be-developed AI oversight programs to understand and keep operational AI systems in line. A fine place to start is keeping autonomous weapons under control. Also, only an AI oversight system can move fast enough to make a split-second decision to stop a mission in real time—for example, if a child runs into the target area.

One may wonder whether oversight AI systems are subject to the same challenges faced by the first-line systems. First of all, it helps to consider the purpose and design of the different categories of AI. First-line AI programs are created to increase the efficiency of the machines they guide, and users employ them with this goal in mind. In contrast, AI oversight systems are designed and employed, well, to oversee. Moreover, just like human auditors, various programs build a reputation as being either more trustworthy or less so, and those that are less reliable are less used by those who seek oversight. And just as in the auditing business, there is room in the field of AI for a third layer of overseers, who could oversee the lower-level oversight systems. However, at the end of the day, AI cannot solve the issue raised by philosophers in antiquity—namely, “who will guard the guardians?” Ultimately, we are unaware of any way to construct a perfect system.

None of this is meant to leave humans out of the loop. Not only are humans the ones to design and improve both operational and oversight AI systems, but they are to remain the ultimate authority, the guardian of the AI Guardians. Humans should be able to shut down both operational and oversight AI systems—for example, shutting down all killing machines when the enemy surrenders, or enabling a driverless car to speed if the passenger is seriously ill.

Finally, we hold that the study of killing machines should be expanded to include the opposite question: whether it is ethical to use a person in high-risk situations when a robot can carry out the same mission as well, if not better. This question applies to clearing mines and IEDs, dragging wounded soldiers out of the line of fire and civilians from burning buildings, and ultimately, fighting wars. If philosophers can indulge in end-of-the-world scenarios engineered by AI, then we can speculate about a day when nations will send only nonhuman arms to combat zones, and the nation whose machines win will be considered to have won the war.

Job collapse?

Oddly, the area in which AI is already having a significant impact, and is expected to have major, worldwide, transformative effects, is more often discussed by economists than by AI mavens. There is strong evidence that the cyber revolution, beginning with the large-scale use of computers and now accelerated by the introduction of stronger AI, is destroying many jobs: first blue-collar jobs (robots on the assembly line), then white-collar ones (banks reducing their back office staff), and now professional ones (legal research). The Bureau of Labor Statistics found that jobs in the service sector, which currently employs two-thirds of all workers, were being “obliterated by technology.” From 2000 to 2010, 1.1 million secretarial jobs disappeared, as did 500,000 jobs for accounting and auditing clerks. Other job types, such as travel agents and data entry workers, have also seen steep declines due to technological advances.

The legal field has been the latest victim, as e-discovery technologies have reduced the need for large teams of lawyers and paralegals to examine millions of documents. Michael Lynch, the founder of an e-discovery company called Autonomy, estimates that the shift from human document discovery to e-discovery will eventually enable one lawyer to do the work that was previously done by 500.

These developments by themselves are not the main concern; job destruction has occurred throughout human history, from the weaving loom replacing hand-weaving, to steamboats displacing sailboats, to the Model T destroying the horse-and-buggy industry. The concern, however, is that this time the new technological developments will create few new jobs. A piece of software, written by a few programmers, does the work that was previously carried out by several hundred thousand people. Hence, we hear cries that the United States and indeed the world are facing a job collapse and even an economic Armageddon.

Moreover, joblessness and growing income disparities can result in serious societal disruptions. One can already see that persistently high levels of unemployment in Europe are a major factor in fomenting unrest, including an increase in violence, political fragmentation and polarization, a rise in anti-immigrant feelings, xenophobia, and anti-Semitism.

Some economists are less troubled. They hold that new jobs will arise as people develop new tastes for products and especially services that even smart computers will be unable to provide or produce. Examples include greater demand for trained chefs, organic farmers, and personal trainers. These economists also point out that the unemployment rate is quite low in the United States, to which the alarmed group responds that the new jobs pay much less, carry fewer benefits, and are much less secure.

Given the significance and scope of the economic and social challenges posed by AI in the very immediate future, several measures seem justified. The research community should be called on to provide a meta-review of all the information available on whether or not the nation faces a high and growing job deficit. This is a task for a respected nonpartisan source, such as the Congressional Research Service or the National Academy of Sciences. If the conclusion of the meta-review is that major actions must be undertaken to cope with the side effects of the accelerating cyber revolution, the US president should appoint a high-level commission to examine what could be done other than try to slow down the revolution. The Cyber Age Commission that we envision would be akin to the highly influential 9/11 Commission and include respected former officials from both political parties, select business chief executive officers and labor leaders, and AI experts. They would examine alternative responses to the looming job crisis and its corollaries.

Some possible responses have been tried in the past, including helping workers find new jobs rather than trying to preserve the jobs of declining industries. In the United States, for example, Trade Adjustment Assistance for workers provides training and unemployment insurance for displaced workers. Another option would be government efforts to create jobs through major investments in shoring up the national infrastructure, or by stimulating economic growth by printing more money, as Japan is currently attempting.

More untested options include guaranteeing everyone a basic income (in effect, a major extension of the existing Earned Income Tax Credit); shorter work weeks (as France did but is now regretting); a six-hour workday (which many workplaces in Sweden have introduced to much acclaim); and taxes on overtime—to spread around whatever work is left. In suggesting to Congress and the White House what might be done, the commission will have to take into account that each of these responses faces major challenges from deeply held beliefs and powerful vested interests.

The response to the cyber revolution may need to be much more transformative than the various policies mentioned so far, or even than all of them combined. In the near future, societies may well need to adapt to a world in which robots will become the main working class and people will spend more of their time with their children and families, friends and neighbors, in community activities, and in spiritual and cultural pursuits. This transformation would require some combination of two major changes. The first would be that people will derive a large part of their satisfaction from activities that cost less and hence require only a relatively modest income. Such a change, by the way, is much more environmentally friendly than the current drive to attain ever higher levels of consumption of material goods. The second change would be that the income generated by AI-driven technologies will be more evenly distributed through the introduction of a progressive value-added tax or a carbon tax, or both, and a very small levy on all short-term financial transactions.

The most important service that the Cyber Age Commission could provide, through public hearings, would be to help launch and nurture a nationwide public dialogue about what course the nation’s people favor, or can come to favor. If those who hold that the greatest challenges from AI are in the economic and social realm are correct, many hearts and minds will have to be changed before the nation can adopt the policy measures and cultural changes that will be needed to negotiate the coming transformation into an AI-rich world.

It’s the Partnership, Stupid

In 1990, the economist Nathan Rosenberg declared that “the linear model of innovation is dead.” Unfortunately, the report of this death was, to paraphrase Mark Twain, an exaggeration. More than 25 years later, much research in universities, government, and industry is justified by invoking the linear view of innovation advocated by Vannevar Bush in his 1945 manifesto Science: The Endless Frontier. Bush argued for unfettered curiosity-driven basic research on problems chosen by individual researchers whose main goal was the pursuit of new knowledge. He believed that newly discovered knowledge would inevitably launch applied research projects, leading to commercial products that would be developed for appropriate markets.

Bush’s linear model was simple and clear, but unfortunately rarely worked. Even Nobel Prizes in physics often sprang from projects with a practical orientation: the effort to replace vacuum tubes, for example, led to the invention of the transistor and the prize-winning discovery of the transistor effect. Similarly, Arno Penzias and Robert Woodrow Wilson’s practical work on improving microwave communications led to their Nobel Prize for finding the cosmic background radiation from the big bang.

Scholars of innovation and researchers alike have long realized that the linear model was flawed and that research successes often emerged from academic scientists working with practitioners on real problems. In his 1977 book, Managing the Flow of Technology, Thomas Allen, an organizational psychologist at the Massachusetts Institute of Technology, presented an evidence-based attack on the linear model that made it clear that research excellence often came from close collaborations with practitioners who faced real problems. Donald Stokes’s influential 1997 book, Pasteur’s Quadrant, celebrated Louis Pasteur’s work on solving the problems of vintners whose fermentation processes failed or farmers whose milk went bad. Pasteur came up with the germ theory of disease as well as early attempts at vaccinations. A powerful lesson from Pasteur is that working on real-world problems jointly with practitioners often leads to the “twin-win”: a validated theory that can be published and a tested solution that can be widely disseminated. Stokes has had some influence, but belief in the linear model remains strong, as do the academic incentives and rewards that reflect the model. Researchers who have benefitted from long-term funding for discovery-based research are well-established and have committed supporters in government and policy circles. As recently as March 2017, a hearing of the House Committee on Science, Space and Technology’s Subcommittee on Research and Technology featured three leaders of the national research establishment who encouraged support for Vannevar Bush’s model.

But this widely held belief about how to conduct research is being challenged by a growing community of scholars who are promoting a different set of research principles and are beginning to change attitudes at campuses, funding agencies, and businesses. Increasingly, collaborations between academics and practitioners focus on building teams that take a theory-driven approach to working on real-world problems. The best outcome from these teams is the twin-win of validated theories and practical solutions that quickly diffuse in society. Twin-win collaborations bring academics closer to real problems, so that when solutions are proposed they can be tested in real-world situations.

In the 2016 book The New ABCs of Research, Shneiderman (the first author of this article) outlines how scientific methods can be productively combined with engineering methods and design thinking to make discoveries and develop innovations. The book advocates “applied and basic combined” to “achieve breakthrough collaborations.” In Cycles of Invention and Discovery, also published in 2016 (and reviewed by G. Pascal Zachary in this issue), former Harvard engineering dean Venkatesh Narayanamurti and University of Virginia’s Tolu Odumosu also rebel against Vannevar Bush, arguing that the artificial separation between applied and basic research is counterproductive. They dig deeply into the history of how the linear model became entrenched in policy circles and propose to reform academic policies and shift government funding. Taking this line even further, a group of information visualization researchers argues in a provocatively titled 2017 paper, “Apply or Die,” that researchers must apply their work to real problems or risk becoming irrelevant.

These and other writings are productively challenging university leaders to change their research communities and reward structures. A common thread is the importance of incentives for academic scientists to work with business, government, and nongovernmental organizations to produce high-impact research that leads to influential publications while also helping to address the challenges of the day. National Medal of Science recipient Shirley Ann Jackson, president of Rensselaer Polytechnic Institute (RPI), calls for “The New Polytechnic.” She encourages interdisciplinary work to attack the hard challenges of the world, while creating a new partnership model for interactions between the university and the world outside academia.

One productive form of campus interdisciplinary research brings together those with a problem to work with those who have an appropriate method for solving that problem. At the University of Maryland, for example, our work with off-campus partners such as the US Holocaust Memorial Museum, supported by a Department of the Interior grant, led to the development of the highlighted link that is fundamental to World Wide Web usage. Another satisfying success was our work with a banking-machine manufacturer that led to the small touchscreen keyboards that are a key technology in smartphones. These collaborations led not only to the solution of real problems but to publications in top computer science and other disciplinary journals and conferences.

Guidelines for working with practitioners

Our experiences at the University of Maryland and RPI show that the key to the success of partnerships between academic researchers and practitioners with problems to be solved is to have well-considered plans that respect the goals of all participants. Of course, there are many principles of team formation, such as including an effective experienced leader and ensuring diversity in seniority, gender, disciplines, research methods, and personality. But making teamwork successful depends above all on partnerships built on four essential pillars of collaboration.

Agree on project goals from the start. The key to successful projects is mutual understanding of what the goals are. When practitioner partners come to faculty members asking for help in solving well-understood problems that have little academic interest, university researchers have little motivation to collaborate. Conversely, when faculty members assert that their research will help solve some problem or other without working with practitioners to define the problem, there is little hope for success. Project goals must serve both practitioners’ needs, such as developing or improving a product or service, and academics’ aspirations to achieve theoretical breakthroughs that can be published in refereed journals and presented at conferences.

Of course, goals can change, but starting out with a written set of goals to be achieved within specific time frames helps keep everyone moving in the right direction. As the team forms, discussions to achieve consensus on the goals help build team spirit, enable senior and junior members to exchange ideas, and allow everyone involved to learn about differing work styles within the team.

Discuss budgets, schedules, and data sharing. Long-term objectives such as “grand challenges,” road maps, or the UN Sustainable Development Goals are admirable guides for broad programmatic priorities, but successful individual projects need short-term goals so that tasks can be assigned to individuals and coordinated schedules can be established. Discussion of goals and tasks, with resolution of differences, also builds trust among team members. Resource allocation decisions provide the opportunity to clarify who needs equipment, staff, and funds. These discussions can be tense, but skillful leaders know that resolving such issues early promotes success. Another difficult issue can be data sharing, since corporations may want to protect data for competitive advantage and government data can have privacy restrictions. The University Industry Demonstration Partnership has developed a detailed set of principles and recommendations for data use agreements that cover issues such as who supplies the data, who is responsible for curating it, how long it will be kept, who will be able to access it, and how it will be archived or disposed of at the completion of the partnership.

Resolve intellectual property ownership and credit for outcomes. Since disagreements about intellectual property ownership, credit for outcomes, patenting, and publication can be contentious, early discussions and careful documentation are helpful processes. As collaborations are being formed, identifying each partner’s background intellectual property helps set the stage. Then agreements about who will pursue and own patents or copyrights clarify responsibilities. Since academics are eager to publish and present results, a clear timetable for review and submission of papers ensures that all parties have a common understanding.

Develop partnerships at the technical and managerial levels. For large projects, success depends on having technical and managerial team members who work together to bridge their cultural differences. As an example, the recently announced Center on Health Empowerment by Analytics, Learning, and Semantics—we call it HEALS—is a multiyear partnership between IBM and RPI that includes coordination across many levels. The center has technical members who cooperate on specific projects, technical leads from IBM and RPI who oversee operations, a steering committee at the level of vice-presidents at each organization that reviews projects on a regular basis, and an executive committee that will perform a yearly review of the center’s progress. The advantage of these layers of interaction is that they help ensure that, as corporate priorities change in response to new business needs or as academic personnel change over this long-term partnership, the overall center can maintain continuity in pursuing the joint research interests.

Developing successful partnerships is hard work, but it can produce historic breakthroughs. A wonderful example is the effort by Rita Colwell, a former National Science Foundation director and National Medal of Science winner, to reduce cholera following monsoon floods in Bangladesh. In the late 1990s, she assembled a team of scientists and public-health workers in Bangladesh that developed a simple filtration strategy using women’s cotton saris that could trap the plankton carrying thousands of cholera bacteria. Local public-health workers trained the women in 65 villages with 133,000 people on how to do water filtration. They collected mortality data from hospitals, showing a dramatic 48% reduction in cholera deaths. In the next decade, this astonishing twin-win result led to strong papers in leading journals presenting valuable knowledge about how epidemics spread, how they can be limited, and how the simple filtration methods can be sustained.

The culture is changing

Even when academic researchers make warm partnerships with practitioners, they must still deal with academic review committees for hiring, promotion, and tenure that too often focus on individual performance and theoretical contributions within a single discipline. In addition, funding agency review panels and journal or conference peer reviewers typically contain members who favor narrowly defined theoretical projects over larger applied efforts.

The good news is that a growing number of campuses are changing their culture. There are growing pressures for academics to justify their funding in terms of their impact on industry, education, and public policy. The twin-win here is that there is good reason to believe that the pressure of producing impact leads to significant theoretical results. To promote these types of synergies, the University of Southern California revised its tenure policies to recognize collaborations, and Duke University offers faculty contracts that stipulate the kind of interdisciplinary work tenure-seeking faculty plan to do. Another example of change is that more than 45 campuses in North America now treat patents as having equal value to published papers. A related movement at many campuses is to seek engagement with local, state, or regional organizations to promote economic development. The University of California’s Center for Information Technology Research in the Interest of Society funds researchers at four campuses to conduct advanced projects that benefit the state. Working under the inspiration and discipline provided by real-world problems can inspire more creative thinking—and more realistic solutions. In fact, even as many federal agencies are opening “innovation centers” in Silicon Valley to understand how to make government projects more agile, scholars who have analyzed the success of the research process at Google conclude that research must go hand-in-hand with development to create real innovation.

Finally, especially with increasing pressure from Congress for research that can serve the national interest, funding agencies are figuring out how to break out of their traditional domain-oriented silos to encourage work that is highly collaborative and to reward projects that have the potential to transition to practice. Although there has been a long history of large center grants that incentivize or require interaction between researchers and industry—for example, the National Science Foundation’s Engineering Research Centers, the Department of Homeland Security’s Centers of Excellence, and the Department of Energy’s Innovation Hubs—this ethos has not generally trickled down to the smaller grants that support most researchers in the United States and many other countries.

This attitude is starting to change. National organizations such as the Government-University-Industry Research Roundtable of the National Academies of Sciences, Engineering, and Medicine, and the Association of Public and Land-grant Universities, are supporting ongoing efforts to spread the word about twin-win strategies, and funding agencies are beginning to embrace such strategies at the project level. For example, the National Science Foundation’s Algorithms in the Field program “encourages closer collaboration between two groups of researchers: (i) theoretical computer science researchers, who focus on the design and analysis of provably efficient and provably accurate algorithms for various computational models; and (ii) applied researchers including a combination of systems and domain experts.” Other programs in such fields as cybersecurity, data science, and resilient infrastructure also encourage collaborations and problem-centric research. We applaud these experimental programs and encourage more of this kind of thinking to further collapse the artificial and inhibiting boundaries between theoretical and applied research. They represent a gradual shift in research funding priorities that can have the effect of accelerating the advance of fundamental knowledge and real-world problem solving.

When academics partner with practitioners from government, industry, and nongovernmental organizations, new opportunities are created to define problems that have interest to academics and value to practitioners. This mutually beneficial situation can lead to the twin-win: theoretical advances and published papers in peer-reviewed journals, as well as widely disseminated solutions that bring value to society. The linear model is dead! Long live the twin-win!

The Energy Rebound Battle

In the early 1990s, the resource economist Harry Saunders started asking hard questions about energy efficiency programs. Climate change at that time had only recently come to wide public attention. But already, dramatic improvements in energy efficiency figured centrally in most estimations of what to do about the problem.

Two factors conjoined to push this view. One was that energy efficiency represented a seemingly costless path to lower emissions, a way for politicians to reduce emissions without imposing high energy costs on their constituents. The other was that energy efficiency already figured prominently in the environmental agenda; in the late 1970s, green energy guru Amory Lovins had bundled radical efficiency improvements together with wind and solar energy technologies in what he dubbed the “soft energy path,” the alternative to both fossil and nuclear energy.

The problem, as Saunders saw it, was that costless energy savings—say, an energy-efficient light bulb that could light a room using half as much energy as a less efficient one—functionally reduced the cost of lighting a room. Having written his dissertation on how economies had responded to energy price changes after the Arab oil embargo, and subsequently building a consulting practice advising manufacturers how to deploy capital investments to maximize their productivity, Saunders knew that when the cost of a service or commodity declines, consumption tends to go up.

This phenomenon is known today as the rebound effect. Energy efficiency promotes a rebound in energy use, thus eroding the reductions in consumption that more efficient technologies would otherwise be expected to yield.

Saunders’s insight wasn’t a new one. The great nineteenth-century economist William Stanley Jevons had made the same observation about coal and more efficient steam engines. Jevons argued, correctly, that improving the efficiency of steam engines wouldn’t result in less coal use, but rather would reduce the cost of using coal to operate steam engines, resulting, ultimately, in higher use. But in the midst of the energy crises of the 1970s, Jevons’s paradox had largely been forgotten.

Still, Saunders assumed that more efficient technologies would, to some degree at least, result in lower energy use and hence lower carbon emissions. The question was how much lower, and Saunders set out to figure that out.

Saunders created a simple model of the global economy and started fiddling with the economic productivity of energy. This, in economic terms, is what more efficient energy technologies represent: an energy productivity improvement. What Saunders found surprised him. When he increased energy productivity in his model, global energy consumption went up, not down. As the effective cost of energy declined thanks to more efficient technologies, firms found more ways to use it. Saunders’s finding was due, in part, to the fact that the model he had built assumed that all inputs to economic production could be substituted for one another with no additional effort or cost.

But the economy doesn’t actually work that way. Saunders had used a production function, an equation that economists use to describe and constrain how firms swap out one input for another in response to prices, that assumed no limit to how much energy could substitute for other inputs. The more energy productivity improved, the cheaper energy inputs became and the more energy firms substituted for other inputs. This sort of substitution could in theory go on until energy was the only input into production.

In the real world, there is nothing that can be made with pure energy and no machines, no raw materials, and no labor. So Saunders started using different production functions that assumed that inputs could more or less easily be substituted for each other. And though the results varied, the underlying driver of the results did not. The effectiveness of energy efficiency improvements as a means of reducing energy use hinged entirely on the question of how easily energy could be substituted for labor, capital, and materials.

In 1992, Saunders published a paper in Energy Journal introducing what he called the Khazzoom-Brookes Postulate. Saunders generously named the postulate after two fellow economists who had raised similar concerns during the 1980s. But Saunders, in his postulate, was the first person to state the proposition in the formal mathematical language of neoclassical economic theory. If energy could be easily substituted for a range of other inputs to economic production, improvements in energy efficiency could over the long term result in higher global energy consumption. It all depended on what economists call the elasticity of substitution—that is, how easily one input, in this case energy, can substitute for others in response to changes in their cost. What at first might have seemed an obscure academic question about the proper production function to use to estimate energy savings from energy efficiency improvements turns out to have rather momentous implications for how difficult it will be to mitigate climate change. Saunders would spend the next two decades trying to quantify those implications.
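To see concretely why the choice of production function matters so much, consider a minimal numerical sketch, not Saunders’s actual model and with purely illustrative parameter values: a cost-minimizing firm must produce a fixed amount of output from purchased energy and capital through a CES (constant elasticity of substitution) production function, and an efficiency improvement doubles the productivity of each unit of purchased energy.

```python
# A toy illustration of the Khazzoom-Brookes logic, not Saunders's model.
# A cost-minimizing firm produces a fixed output from purchased energy E and
# capital K via a CES production function; "tau" is energy productivity, so
# an efficiency gain raises tau. All parameter values are assumptions.
from scipy.optimize import minimize

def ces_output(E, K, tau, sigma, a=0.3):
    """CES production over effective energy tau*E and capital K."""
    rho = (sigma - 1.0) / sigma        # CES exponent; sigma = elasticity of substitution
    if abs(rho) < 1e-9:                # sigma -> 1 is the Cobb-Douglas limit
        return (tau * E) ** a * K ** (1.0 - a)
    return (a * (tau * E) ** rho + (1.0 - a) * K ** rho) ** (1.0 / rho)

def energy_purchases(tau, sigma, p_E=1.0, p_K=1.0, target=1.0):
    """Cost-minimizing energy purchases needed to hit the output target."""
    res = minimize(
        lambda x: p_E * x[0] + p_K * x[1],    # total input cost to minimize
        x0=[1.0, 1.0],
        bounds=[(1e-6, None), (1e-6, None)],
        constraints=[{"type": "eq",
                      "fun": lambda x: ces_output(x[0], x[1], tau, sigma) - target}],
    )
    return res.x[0]

for sigma in (0.5, 1.0, 1.5):          # hard, neutral, and easy substitution
    before = energy_purchases(tau=1.0, sigma=sigma)
    after = energy_purchases(tau=2.0, sigma=sigma)   # energy productivity doubles
    print(f"sigma={sigma}: purchased energy changes by {after / before - 1.0:+.0%}")
```

Under these assumptions, doubling energy productivity cuts purchased energy sharply when substitution is hard (sigma well below 1), cuts it more modestly in the Cobb-Douglas case, and actually increases it, outright backfire, when substitution is easy (sigma above 1), even though output is held fixed; letting output grow in response to lower costs would push rebound higher still.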

He’s lost that Lovins feeling

Saunders’s postulate was not well received among those who had touted energy efficiency as a costless remedy to the nation’s energy challenges. Energy efficiency was, in the words of Lovins, “a lunch you get paid to eat,” and nobody, from policy makers looking for a quick and easy way to address climate change to companies selling energy-efficient technologies to environmental groups opposed to new energy development, was much interested in learning that the energy and emissions reduction benefits might be less than advertised.

In the years after the publication of Khazzoom-Brookes, a series of studies seemed to suggest that the rebound effect wasn’t worth worrying about. When consumers insulated their homes and installed more efficient appliances and lighting, they appeared to use those amenities a little bit more, but not a lot more. Most people, it seemed, weren’t going to leave the lights on all night or turn their thermostats up to 90 in the winter just because it was cheap to do so.

Those early studies weren’t terribly definitive. They had looked at a very small number of energy end uses, mostly in the home and almost exclusively in affluent developed economies. But they did offer sufficient evidence for what most people paying attention to the issue already believed—that energy efficiency was a key pathway, maybe the key pathway, to reducing energy use and fighting climate change. For the next two decades, the debate about rebound effects quietly raged on among energy analysts, occasionally drawing broader attention from journalists and politicians, but mostly playing out in obscure peer-reviewed journals and in assessments by intergovernmental bodies such as the International Energy Agency (IEA).

For efficiency advocates, the issue was mostly viewed as a nuisance, something that needed to be swatted away so that the world could get on with the business of radically reducing demand for energy. After articles appeared in the late 1990s in the New York Times and New Scientist discussing the rebound debate, the academic journal Energy Policy commissioned Lee Schipper, an energy economist at the Lawrence Berkeley National Laboratory, to edit a special issue on the matter. It was published in 2000. In its introduction, Schipper, a rebound skeptic, compared rebound to the Loch Ness monster, a mythical beast that reappeared from time to time but whose existence could not be confirmed.

That world-weary posture has been the default position of efficiency advocates ever since. “Every few years,” David Goldstein and his colleagues at the Natural Resources Defense Council wrote, in response to a review of the peer-reviewed literature on rebound by my organization in 2011, “a new report emerges that tries to resurrect an old hypothesis: that energy efficiency policy paradoxically increases the amount of energy we consume.”

But as concern about climate change has grown, attention to rebound effects has increased. Forty percent or more of the greenhouse gas emissions reductions in most climate mitigation scenarios are predicated on lower energy use due to more efficient technologies. If those energy savings are substantially eroded by rebound effects, the scale of the emissions reduction challenge becomes much larger.

For environmental groups, which almost universally subscribe to the soft energy path, the stakes are higher still. Without dramatic reductions in global energy use through radically more efficient technology, a return to a world entirely powered by renewable energy sources, the holy grail of the green environmental and energy agenda and a debatable prospect to begin with, becomes completely implausible.

And though the growing literature on rebound effects remains deeply contested—there is often little agreement on what even counts as rebound and even less on how to calculate it—estimates of how large the rebound effect may be have risen over time, as more studies have been conducted across a broader range of economic conditions. Summarizing the evidence in his introduction to the special Energy Policy issue, Schipper suggested rebound of 10-40% depending on the sector and economy in question. Since that time, assessments by the United Kingdom, the Organization for Economic Cooperation and Development, the Intergovernmental Panel on Climate Change, and the IEA have all concluded that rebound effects are likely significantly larger.

By 2015, Gernot Wagner, at the time the top economist at the Environmental Defense Fund, a long-time efficiency champion, had acknowledged that rebound effects probably ranged from 20% to 60%, a level that he judged to be encouraging insofar as it seemed likely, to him at least, that 50% or more of the engineering-based estimates of energy savings due to efficiency improvements might ultimately be realized.

The rebound debate has also become more acrimonious as the stakes have risen. Efficiency advocates have characterized rebound scholarship as an “attack” on energy efficiency and suggested that those who believe rebound effects to be significant are in effect arguing that reducing energy efficiency must therefore be part of the solution to climate change. One well-known efficiency advocate has gone so far as to label rebound proponents “efficiency deniers,” a characterization seemingly designed to echo the polarizing language of “climate denial.”

Here be monsters

While “Nessy,” as Schipper had it, continued to make occasional appearances, Saunders kept plugging away. Having established theoretically the factors that would determine the extent to which energy productivity enhancements would save energy rather than rebound in the form of new production and consumption, Saunders set about devising a method to answer the question empirically.

Saunders’s idea was to build an econometric model of every production sector of the US economy and then crank in real data on prices, inputs, and outputs, looking back over 45 years (1960-2005). With that data in hand, he would model two counterfactual scenarios, one in which there was no rebound effect and one in which rebound was 100%, meaning that all of the energy savings from more efficient technology was taken back in the form of new production. He could then compare these calculations with actual energy use in each sector in order to estimate how much of the savings due to energy-efficient technology had been taken in the form of lower energy consumption versus increasing production.
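The accounting behind that comparison is simple enough to sketch. In the snippet below, the function name and the sample figures are hypothetical, invented purely for illustration: actual energy use is located between the two counterfactual bookends, and its relative position is read as the realized rebound fraction.

```python
# Hypothetical sketch of the comparison implied by Saunders's study design.
def rebound_fraction(actual, no_rebound, full_rebound):
    """0.0 = all engineering savings kept; 1.0 = all savings taken back."""
    return (actual - no_rebound) / (full_rebound - no_rebound)

# Invented example: a sector that would have used 80 PJ with zero rebound and
# 120 PJ with 100% rebound, and that actually used 104 PJ, shows 60% rebound.
print(rebound_fraction(actual=104.0, no_rebound=80.0, full_rebound=120.0))  # 0.6
```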

It was a clever approach to the problem. But before he could undertake that analysis, Saunders would need to identify a production function that was flexible enough to accommodate a range of potential behaviors by firms and that wouldn’t predetermine the result of the analysis. As Saunders had discovered in his early work on Khazzoom-Brookes, an analyst wanting to show high rebound could simply choose a production function in which substitution was easy, while an analyst wanting to show low rebound could choose a function in which substitution was very hard. Saunders wanted to find a function that didn’t overly constrain or unduly allow substitution of energy for other inputs, so that he could empirically derive substitution elasticities from the data.

Over a number of years, Saunders tested different production functions to see how they affected the outcome of rebound simulations. In 2008, he published a summary of that work, suggesting two functions for analyses of rebound that appeared to allow for a wide range of substitution elasticities across a broad range of heterogeneous sectors of the economy. Saunders could finally set about the work of building an econometric model that would be capable of testing Khazzoom-Brookes empirically.

It took Saunders four more years to publish his results. Using data painstakingly assembled by Harvard economist Dale Jorgenson from a variety of government and industry sources going back many decades, Saunders estimated that about 60% of energy savings from more efficient technologies had been plowed back into the production process. Six sectors had seen outright backfire in short order, meaning that all of the energy savings associated with technical efficiency improvements were lost to higher energy use. These included energy-intensive sectors such as electric utilities, primary metal, and mining.

Saunders published the disquieting results in 2013. His article, in the journal Technological Forecasting & Social Change, provided the first definitive and carefully quantified estimates of long-term rebound in production sectors of the US economy. Most early empirical studies had looked at end-use energy consumption in the United States and Europe, focusing on energy-intensive consumer uses, such as home heating, residential lighting and appliances, and driving. But two-thirds of energy consumption occurs in the production sectors of the economy: to construct the homes we live in and the buildings we work in, to grow and transport the food we purchase at the supermarket, to manufacture the goods we purchase at the shopping mall, and to build and operate the infrastructure that allows us to move people and goods among all those places.

It was here that Saunders had consistently found levels of rebound that were much higher than was typically found among end-use consumers in rich countries, where demand for energy services such as lighting, transport, and heating quickly saturates, meaning that consumers have little desire to consume more of them. By contrast, energy productivity improvements in production sectors create all sorts of new production possibilities—to substitute energy for labor or other resource inputs; to produce goods at lower cost, thereby allowing higher consumption; and to invent new products and services that are made possible only by greater efficiency.

Energy-saving technological change enabled not only more efficient provision of existing energy services but also new and expanded uses of energy. LED lighting allowed us to put lights in all manner of places we couldn’t put them before. Liquid crystal display screens allow us to put video screens onto skyscrapers, inside taxicabs, and into our pockets. A family that once had a single inefficient refrigerator might now have an efficient one in the kitchen, a freezer in the basement, a wine cooler in the bar area, and a portable electric cooler for the car.

All those LEDs and LCDs and mini refrigerators might still not raise the user’s energy consumption due to their much higher efficiency. But the production side of the equation is a different story. Manufacturers produced more of all those things, here and abroad, and further efficiencies at the production level meant that costs to consumers could be further reduced, making new production more profitable and new consumption more affordable.

It wasn’t necessarily that energy productivity improvements alone drove these developments. Often, new and more energy efficient technologies in the production sector brought other productivity factors along for the ride, raising labor and resource productivity in a variety of ways, making everything else more productive, too. This is called total factor productivity, and when you raise it, costs go down while output and consumption typically go up. That brings with it a general benefit to economic welfare, but also higher energy consumption, all else being equal.

The implications of Saunders’s findings are all the more significant globally, where demand for energy is much less saturated than it is in wealthy economies such as the United States. More efficient lighting, heating, cooking, and refrigeration allow poor populations living in energy poverty to consume much more energy. Beyond the household, energy-intensive production sectors such as steel, cement, chemical manufacturing, and refining are expected to grow enormously across the globe over much of this century, as emerging economies worldwide build the basic infrastructure of modernity. These are precisely the sectors that both historical analysis and Saunders’s studies suggest are most prone to backfire.

The soft path plays hardball

Even before Saunders published his 2013 analysis, efficiency advocates went to work attempting to discredit it. Saunders had shared a prepublication copy widely with other analysts and presented a version of it at a 2011 workshop on rebound effects hosted by Carnegie Mellon University. In 2012, two efficiency consultants from an outfit called CO2 Scorecard demanded that Saunders’s paper be retracted, claiming, incorrectly, that he had based his analysis on the monetary value of energy inputs, not the quantities of energy being consumed. The claim was based on criticisms originally made by two prominent efficiency researchers, Jon Koomey of Stanford University and Danny Cullenward, then a Stanford PhD student and now a research fellow at the University of California, Berkeley.

Koomey and Cullenward were subsequently forced to concede that Saunders did in fact utilize primary data on physical quantities of energy inputs. But their concession did not come before the supposed “debunking” had been widely disseminated on blogs at the liberal Center for American Progress and at UC-Berkeley.

Undeterred, Koomey and Cullenward created a new pretext on which to dismiss Saunders’s findings, claiming in a 2016 response published in the same journal as the original analysis that because Saunders had not accounted for regional differences in energy prices, his results were invalid. The pair had conducted no independent analysis. They simply asserted that having failed to include price differences, Saunders’s conclusions were false. Writing at Koomey’s website, the pair went further, characterizing Saunders’s findings as “aggressive and unsubstantiated” and “wholly without support.”

So Saunders reran his analysis with a wide range of energy price sensitivities. In early 2017, he published his new findings, again in Technological Forecasting & Social Change, demonstrating that marginal price variations were in fact immaterial to the earlier result.

This sort of give and take might, generously, be chalked up to the normal processes of scientific progress. Researchers find problems in existing scholarship, and subsequent scholarship then addresses those shortcomings. But it is hard to read the public attacks on Saunders’s work and conclude that Koomey, Cullenward, and other efficiency advocates were acting in good faith.

Rather, they seized on one purported shortcoming after another in an effort to discredit Saunders’s findings. The criticisms weren’t constructive. They made no suggestion as to how the alleged shortcomings in the Jorgenson data set might be rectified, nor did they attempt to ascertain, as could easily have been done, that the data set did contain primary data on energy input quantities. Nor did they contemplate undertaking their own independent modeling exercises to determine whether regional variances in marginal energy prices might suggest different levels of rebound. The intent, it would appear, was not to advance better understanding of rebound effects but to suppress that understanding.

Modeling backfires

The long-running debate about rebound might not otherwise be settled, but it would almost certainly be less contentious were the issue not so tied up with persistent debates about climate change and the energy future. Saunders’s modeling of rebound in the production sectors of the US economy is an impressive analytical accomplishment, and it adds to a growing literature suggesting that rebound in the aggregate is likely to take back a very substantial portion of the emissions savings that many energy analysts and climate advocates have long counted toward climate mitigation. But even putting aside the rear-guard sniping about Saunders’s data and methods, basic questions of causation remain, questions that are more a reflection of the limits of knowledge about the future than the methods of econometric modeling.

Because energy productivity is so tied up with other factors of production and consumption, no clever econometric model can tell us whether, had incandescent light bulbs not come along, the Earth at night, when viewed from outer space, would instead be illuminated with hog fat lanterns and wax candles, to take a particularly absurd example. If the alternative to electric lights had been a world lit by hog fat, then incandescent light bulbs and subsequently LEDs would have resulted in enormous energy savings. Or to take another example, if you think that without the development of LCD screens we would all have 50-inch cathode ray television sets on our walls and cathode ray smartphones in our pockets, then the development of vastly more efficient LCD technology has also been a huge energy saver.

The problem is that whereas the LCD screen was one of a series of enabling technologies that made smartphones possible, it didn’t cause us to invent them, exactly. And that, for the most part, has been the story of energy productivity improving technology for over two centuries. More efficient technologies often initially provide benefits to existing economic activities and forms of production. Better steam engines initially reduced the amount of coal that was needed to pump water out of mines. But the more important benefits were ultimately all sorts of new uses for the technology not envisioned initially. James Watt had no idea that the technological revolution that he unleashed with his newfangled steam engine would ultimately power trains and electrical generators. Neither had been invented at the time.

That history continues today. LCDs might not be the cause of smartphones, but the existence of smartphones erodes the savings that an engineering-based estimate of the energy savings associated with replacing cathode ray televisions with LCDs in, say, the year 2000 would have arrived at. The same is true at the macroeconomic level. Implicit in long-term projections of economic growth, and the energy use that comes with it, are broadly recognized but impossible-to-predict interactions among energy productivity, multifactor productivity, and economic growth.

Climate mitigation models attempt to account for these dynamics by using historical trends to project economic growth and energy intensity decline in baseline scenarios. These projections ostensibly account for rebound because it is a component of both economic growth trends and energy intensity trends. But the mitigation scenarios also include additional and specific efficiency improvements that fail to account for either the growth effects or the substitution effects associated with those improvements.

The consequences of this oversight are considerable. The International Energy Agency continues to tout energy efficiency as the “first fuel” available to member countries to constrain energy use, while the Intergovernmental Panel on Climate Change follows suit, presenting forecasts showing energy efficiency to be the best lever for reducing emissions of greenhouse gases in the coming decades. However, both organizations rely on energy models that assume highly rigid productive economies with minimal flexibility to accommodate energy efficiency gains. The IEA, for example, assumes rebound will be no more than 10% in the coming decades.

To illustrate the implications of this IEA assumption, if one were to instead assume rebound will be 50%, a figure that can be easily supported by the growing literature on rebound effects, meeting the carbon emissions targets contemplated in the IEA “New Policies Scenario” would require global clean energy deployment about one-third higher than the agency’s already ambitious targets, about 4.7 petawatt-hours of additional clean energy by 2035, or slightly more than total US electric power production in 2016.
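For readers who want to check that arithmetic, here is a back-of-envelope sketch. The helper function and the savings total are assumptions for illustration; the savings figure is simply backed out from the numbers in the preceding paragraph rather than taken from the IEA scenario itself.

```python
# Hypothetical back-of-envelope: clean energy needed to cover efficiency
# savings that are eroded when rebound exceeds the modeled assumption.
def extra_clean_energy(savings_pwh, assumed_rebound, actual_rebound):
    """Additional clean energy (PWh) required to replace the eroded savings."""
    return (actual_rebound - assumed_rebound) * savings_pwh

# If a scenario books ~11.75 PWh of efficiency savings by 2035 (an assumed
# figure), raising the rebound assumption from 10% to 50% leaves a ~4.7 PWh gap:
print(extra_clean_energy(11.75, assumed_rebound=0.10, actual_rebound=0.50))  # ~4.7
```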

How efficiency matters

While the rebound debate rages on, Saunders continues to pull on the thread that he first began to unravel with the Khazzoom-Brookes postulate 25 years ago. In 2014, Saunders published a paper in Ecological Economics that extended his analysis to the long-term evolution of market economies. Using the same sort of theoretical framework that he used to establish Khazzoom-Brookes and many of the same well-established economic principles, Saunders demonstrated that under highly plausible conditions—namely, well-functioning markets, population stabilization, and saturating demand for further consumption—market economies would, in theory, evolve toward zero growth and declining demand for natural resources, an unexpected conclusion from a researcher widely criticized for believing that there is no alternative to endless unchecked economic growth, and hence endlessly growing energy demand.

Those conclusions should remind us that intuitions and assumptions, not to mention political orthodoxies, about cause-and-effect relations for incredibly complex problems such as climate change should continually be subject to critical analysis. And ideas that challenge present orthodoxies, such as those around the costless emissions reductions that might be achieved through energy-efficiency programs, often open up new possibilities and frameworks for making progress.

The rebound debate most obviously shows how our beliefs about how the world ought to work influence our willingness to accept some scientific findings and our inclination to reject or ignore others. But more importantly, resistance to evidence of the limits of pursuing energy efficiency as a strategy for addressing climate change has blinded many scientists and advocates to more fundamental understandings of the relationship between energy use and human development that, after all, is the reason we care about climate change in the first place.

Recognizing that quite significant levels of energy-efficiency rebound are a likely result of efficiency gains in many cases and in the global aggregate is not an argument against energy efficiency, as some on both sides of the debate have suggested. Nor will improving energy efficiency inevitably result in higher energy use. Rather, rebound is a crucial indicator of long-term progress toward a more equitable and sustainable world.

Rising energy productivity and rising energy use are inexorably entwined with broader ecological modernization processes. As populations become wealthier around the world—thanks in no small part to increasing energy productivity—fertility rates decline, population growth slows, and population stabilizes. As those populations achieve modern living standards, material consumption begins to saturate, as it has in the industrialized world. The low levels of rebound measured in end-use sectors of the economy in wealthy economies are evidence of this dynamic. As material demands saturate, the structure of economies shifts, from output that is skewed toward agriculture, manufacturing, and other energy-intensive forms of production toward knowledge and service sectors that have much lower energy intensities.

Counter to commonly held intuitions, it could even turn out that the faster energy consumption grows in the short- and medium-term, the sooner energy use and emissions will peak and the lower that peak will be in the long-term. But this also means that efficiency won’t turn the tide of rising energy use anytime soon, and it won’t likely make the difference in allowing us to meet mid-century emissions targets. Ultimately, progress toward mitigating climate change will primarily depend not on how quickly we boost energy efficiency, but on how quickly we are able to replace fossil-based sources of energy with carbon-free energy.

What is most important about improving energy efficiency is that it will help create the conditions necessary to both better mitigate climate change and manage the impacts that can’t be avoided. That’s because improving energy efficiency is welfare-enhancing irrespective of its climate benefits. A wealthier global population is a healthier population and one that will be more resilient to climate impacts. It will also be better able to bear the costs necessary to reduce emissions by building a low-carbon energy system. That should be cause for optimism, not pessimism.

Back from the Brink: Truth and Trust in the Public Sphere

It is 2017. Do you know where the truth is? Hardly a day passes without some major accusation in the media that the nation’s highest office has become a source of unfounded stories, claims without evidence, even outright lies. As the charges against the executive branch pile up, the White House counters that institutions long seen as standing above partisan wrangling can no longer be trusted: the Federal Bureau of Investigation, the Central Intelligence Agency, the Congressional Budget Office, the federal judiciary have all felt the heat of presidential pushback. In this topsy-turvy world it hardly seems surprising that the newly appointed Environmental Protection Agency administrator rejects two decades of findings by the Intergovernmental Panel on Climate Change on the warming effects of atmospheric carbon. Even scientific consensus can be dismissed as politics by other means. But how can a modern, technologically advanced nation fulfill its mandate to protect its citizens if it disavows its own capacity to produce public facts and public reason? Is the commitment to truth and trust in the public sphere irreparably damaged, or can steps be taken to restore it?

It is tempting to turn the clock back to January 2009, when the answer seemed both easy and overdue: restore science to its rightful place as humanity’s most rigorous and reliable pathway to truth. But today’s questions are not easy, nor are they new.

The current assault on public facts looks unprecedented, but moral panics about the reliability of public knowledge did not originate in the twenty-first century. What has shifted is the politics of concern, reflected in the focus of the panic, the actors who are disconcerted, and the discourse surrounding the breakdown. Setting the present chaos of “alternative facts” and “post-truth politics” within a longer history may help point the way from empty hand-wringing toward more constructive reflection and response.

Democratic states earned their legitimacy in part by demonstrating that they knew how to ensure public welfare—securing frontiers, improving public health, guarding against economic misery, and creating opportunities for social mobility and betterment. For this they needed science and expertise. As industries multiplied, corporations grew, and governments extended their regulatory oversight, it became less and less thinkable that power could be exercised without recourse to expert knowledge. But just as power is continually contested and forced to justify itself in democratic politics, so has power’s knowledge come under constant questioning. In the United States, in particular, political actors of all stripes pay lip service to the importance of science for policy; yet, specific scientific claims seldom pass unchallenged in any significant policy domain. Arguably, that long record of attack and counterattack has weakened the nation’s moral authority to produce what I call “serviceable truths”—that is, robust statements about the condition of the world, with enough buy-in from both science and society to serve as a basis for collective decisions.

The roots of discontent reach back at least to the New Deal, an era marked by the rise of regulation and centralized public knowledge. In that period, federal involvement to protect the economy against another Great Depression, together with progressive ideals of informed and reasoned government, led to an enormous expansion of the regulatory state and its policy-relevant expertise. The United States, of course, was not alone in experiencing the move to government by experts. In Europe, Max Weber, the first and possibly greatest theoretician of bureaucracy, observed a wide-ranging displacement of autocratic, monarchical power by the authority of the detached and objective expert. But the US evolution of expert-state relations took specific turns consistent with this nation’s pluralistic politics, adversarial administrative process, and suspicion of centralized authority.

The growth of the US administrative state drew calls for greater openness and accountability in its ways of knowing. Business and industry worried that the government’s claims of superior expertise together with its monopoly on information would hurt their interests, and they sought to ensure by law that they would have access to the expert practices of executive bodies. Their activism led to the Administrative Procedure Act of 1946, passed to remedy what the Senate Judiciary Committee identified in 1945 as “an important and far-reaching defect in the field of administrative law,” namely, “a simple lack of adequate public information concerning its substance and procedure.” Designed to make the administrative process more transparent, the act also created—through its provision for judicial review—a potent instrument for contesting public facts, an instrument that political interests of all stripes enthusiastically exploited in the decades after the law’s enactment. A pattern developed that many analysts have noted: US politics played out not only in the realm of law, as a fascinated Alexis de Tocqueville had observed in 1831, but also in recurrent, rancorous disputes over scientific claims.

The expansion of social regulation in the 1970s gave new impetus to the private sector’s disenchantment with public fact making, eliciting repeated charges of “bad” and even “junk” science. Again, public authorities bore the brunt of these attacks. This was the period in which an electorate newly sensitized to health, safety, and environmental hazards demanded, and received, protection from previously unseen and understudied threats: radiation, airborne toxic emissions, chemicals in food and water, untested drugs, workplace hazards, and leaking landfills. A barrage of progressive legislation sought to protect the subjects of a postindustrial, postmaterial society still exposed to the all-too-material hazards of older, dirtier industrial processes. These laws changed the US social contract for science, demanding expensive information as a precondition for doing many kinds of business, and also enabling regulatory agencies to fill gaps in public knowledge. Above all, agencies gained authority to interpret existing information for policy purposes with the aid of a growing “fifth branch” of scientific advisers. Convened for the express purpose of helping agencies to carry out their statutory mandates, these bodies often found themselves on the front line of political combat, whether for having over-read the evidence in favor of regulation or, less frequently, for granting too much latitude to industry’s antiregulatory claims.

From the late 1970s onward, US industries continually accused federal agencies and their expert advisers of allowing politics to contaminate science, and with the election of Ronald Reagan in 1980 they found a willing ally in the White House. In the early years of the Reagan administration, charges of “bad science” crystallized into a specific bid for a single, central agency to carry out risk assessments for all federal regulatory agencies, as well as a more general call for peer review of the government’s scientific findings by scientists not too closely associated with the state. A seminal report from the National Research Council in 1983, Risk Assessment in the Federal Government: Managing the Process, beat back the demand for centralization but did its own influential boundary work by labeling risk assessment a “science.” Decades of research since then have demonstrated that risk assessment not only is, but must be, a complex exercise blending accepted and plausibly surmised facts with judgments conditioned by public values and purposes. Nonetheless, the label “scientific risk assessment” endures, separated in regulators’ minds from “risk management,” the process that explicitly translates scientific findings into social policy.

The science label, however, proved to be a lightning rod for an increasingly partisan politics. It left agency decision makers vulnerable to claims that their risk assessments had deviated from a baseline of imagined scientific purity. Peer review, the tried and true method by which science maintains its hold on objectivity, drew special scrutiny as more political actors recognized it as a space for flexible judgment. In the administration of George W. Bush, the Office of Management and Budget attempted to take control of the process of appointing regulatory peer reviewers but was deterred by an outcry from leading scientific bodies. Meanwhile, the Democratic opposition excoriated the Bush administration for waging what the science journalist Chris Mooney colorfully named The Republican War on Science.

By the 1990s, the uproar surrounding public knowledge-making reached another crescendo around the use of science in courts. Prominent scientists and legal analysts teamed up with industry in decrying the courts’ alleged receptivity to what they considered junk science. They lobbied to introduce more “independent” expertise (that is, experts nominated by the courts rather than selected by the parties) into a process traditionally dominated by adversarial interests. The Supreme Court took note and in 1993 issued a ruling, Daubert v. Merrell Dow Pharmaceuticals, Inc., asking judges to play a more assertive part in prescreening expert testimony. Daubert stopped short of demanding peer review and publication as necessary conditions for introducing scientific testimony. But flying in the face of findings from the sociology of knowledge, the decision reaffirmed the notion that criteria for determining the reliability of proffered testimony exist outside and independent of case-specific proceedings involving particular domains of science and technology. Although increasing judges’ power to screen scientific evidence, Daubert in this sense undercut judicial sensitivity to the contexts in which evidence is generated—or not generated, often to the detriment of economically and socially disadvantaged plaintiffs.

Through these decades of contestation over public knowledge, a rhetorical constant has been the invocation of science, along with its penumbra of facts and truth, to both legitimatize and delegitimatize public action. Notably absent from US policy discourse, however, is an espousal of the “precautionary principle,” a cornerstone of European regulatory policy designed to deal with situations in which policies must be adopted without achieving complete certainty on the facts. As described in a European Union communication of 2000 explaining how the term should be interpreted and implemented, “the precautionary principle is neither a politicisation of science or the acceptance of zero-risk but … it provides a basis for action when science is unable to give a clear answer.”

The important issue here is not whether the principle always translates into unambiguous policy, nor whether European policy makers have been sincere or consistent in applying it, nor even whether Europe’s precautionary approach produces more or less stringent regulation than the US’s risk-based choices. Rather, the relevant point for reliable public knowledge is the very recognition of an intermediate analytic position between “politicization” and “zero risk”—a position usefully occupied by the notion of precaution. Worth noting, too, is the convergence between the European Union’s articulation of the precautionary principle and the idea of “serviceable truth,” defined in my 1990 book The Fifth Branch as “a state of knowledge that satisfies tests of scientific acceptability and supports reasoned decision-making, but also assures those exposed to risk that their interests have not been sacrificed on the altar of an impossible scientific certainty.” That book, a detailed study of peer review in the Environmental Protection Agency and the Food and Drug Administration, concluded that regulators should aim to ground their decisions in serviceable truths when science pure and simple does not offer precise guidance.

Let us fast-forward, then, to the “post-truth” present. The shoe in important respects is on the other foot, with liberals, left-leaning intellectuals, and Democrats, rather than conservatives, corporations, and Republicans, complaining of politics distorting science and propagating, in presidential spokeswoman Kellyanne Conway’s unforgettable phrase, “alternative facts.” How did “truth” become the property of the political left when once it seemed the rhetorical staple of the political right, and how are today’s cries of outrage at governmental deviation from science, expertise, and facts different from the charges from the right in earlier decades?

It is not far-fetched to suggest that it is liberals who now have lost sight of the social context of truth claims. The great gains made by science and technology in recent decades have led to complacency about science providing the right answers to big social problems. Climate change with its urgent messages for humankind is the most prominent example, but scientists insist equally on the primacy of facts in any number of situations where science has provided support for increased intervention into natural processes, such as the safety of nuclear power, vaccination against childhood disease, and genetic modification of plants. In time, we are told, even gene editing of future humans will become risk-free, just as autonomous vehicles will carry passive human riders safely along city streets. Lost from view is the fact that people bring other values and concerns to each and every one of these debates, such as whose definition of risk or benefit frames the public debate, whose knowledge counts, and who gains or loses in implementing the solutions that science advocates.

To address the current retreat from reason—and indeed to restore confidence that “facts” and “truth” can be reclaimed in the public sphere—we need a discourse less crude than the stark binaries of good/bad, true/false, or science/antiscience. That oversimplification, we have seen, only augments political polarization and possibly yields unfair advantage to those in possession of the political megaphones of the moment. We need a discourse more attuned to findings from the history, sociology, and politics of knowledge that truth in the public domain is not simply out there, ready to be pulled into service like the magician’s rabbit from a hat. On the contrary, in democratic societies, public truths are precious collective achievements, arrived at just as good laws are, through slow sifting of alternative interpretations based on careful observation and argument and painstaking deliberation among trustworthy experts.

In good processes of public fact-making, judgment cannot be set aside, nor facts wholly disentangled from values. The durability of public facts, accepted by citizens as “self-evident” truths, depends not on nature alone but on the procedural values of fairness, transparency, criticism, and appeal in the fact-finding process. These virtues, as the sociologist Robert K. Merton noted as long ago as 1942, are built into the ethos of science. How else, after all, did modern Western societies repudiate earlier structures of class, race, gender, religious, or ethnic inequality than by letting in the skeptical voices of the underrepresented? It is when ruling institutions bypass the virtues of openness and critique that public truthfulness suffers, yielding to what the comedian Stephen Colbert called “truthiness,” the shallow pretense of truth, or what the Israeli political scientist Yaron Ezrahi calls “out-formations,” baseless claims replacing reliable, institutionally certified information. That short-circuiting of democratic process is what happened when the governments of Tony Blair and George W. Bush disastrously claimed to have evidence of weapons of mass destruction in Iraq. A cavalier disregard for process, over and above the blatancy of lying, may similarly deal the harshest blows to the credibility of the Trump administration.

Public truths cannot be dictated—neither by a pure, all-knowing science nor unilaterally from the throne of power. Science and democracy, at their best, are modest enterprises because both are mistrustful of their own authority. Each gains by making its doubts explicit. This does not mean that the search for closure in either science or politics must be dismissed as unattainable. It does mean that we must ask and insist on good answers to questions about the procedures and practices that undergird both kinds of authority claims. For assertions of public knowledge, the following questions then seem indispensable:

If those questions can be raised and discussed, even if not resolved to everyone’s satisfaction, then factual disagreements retreat into the background and confidence builds that ours is indeed a government of reason. For those who are not satisfied, the possibility remains open that one can return some other day, with more persuasive data, and hope the wheel of knowledge will turn in synchrony with the arc of justice. In the end, what assures a polity that knowledge is justly coupled to power is not the assertion that science knows best, but the conviction that science itself has been subjected to norms of good government.

Clean Energy Mind Games

The world needs clean energy. Clean, as in not emitting the greenhouse gases, particularly carbon dioxide, that drive climate change. And we need plenty of it within the next couple of decades: nearly 50% more energy by 2040 than is currently produced, as billions of people rise out of poverty and expect the same resources the developed world already enjoys.

So it is encouraging that governments around the world are adopting policies that encourage clean energy production and large corporations are converting to low-carbon-emission energy supplies. But these steps, although meaningful, are not nearly enough, because government policies overwhelmingly favor some clean energy sources—renewables such as solar, wind, hydro, and biofuels—over other clean energy sources, particularly nuclear power. Yet most energy experts agree that renewables alone can’t supply as much power as we need, as quickly as we need it. It’s therefore worth trying to understand why government policies favor some forms of low-carbon energy over others, because the battle over what sort of energy counts as clean leaves us fighting climate change with one hand tied behind our back.

In the United States, 29 states have adopted renewable portfolio standards requiring that a percentage of the electricity a utility sells come from wind, solar, hydro, and in some cases biofuels, all of which need economic support from government policy because they can’t compete against cheaper fossil fuels, especially natural gas. But whereas renewables receive significant direct economic support, nuclear energy receives far less. Only two states—New York and Illinois—provide financial assistance that helps nuclear compete economically, and in both cases the support was adopted less as a clean-air measure than as a way to preserve high-paying jobs that would be lost if nuclear plants in those states closed. Massachusetts, Pennsylvania, Connecticut, and Ohio are also considering economic support for nuclear, but the overall picture remains clear: state government subsidies for clean energy overwhelmingly favor renewables.

At the federal level, in 2016 renewable sources of energy received 114 times more support in preferential taxes than nuclear power per terawatt-hour of electricity generated, according to the Congressional Budget Office.

Between state and federal programs, the case is overwhelmingly clear: some forms of clean energy get vastly more support to help them compete in the energy marketplace than others. And nuclear, which could supply huge amounts of zero-emission energy to help the United States reduce its greenhouse gas emissions, is being significantly disadvantaged by government policy. Selective policies that support some forms of clean energy more than others are dramatically limiting the nation’s ability to address the very problem that clean energy is supposed to help solve.

Clean machines?

Why this inequality? It can’t be economics. Nuclear power is so expensive that it can’t compete with fossil fuels, but neither can renewables once you factor in the necessary cost of backup power for the periods when the sun isn’t shining or the wind isn’t blowing. Nor can it be that nuclear isn’t needed because renewables can supply all the energy we need as urgently as we need it. The world’s leading effort to replace nuclear with renewables, Germany’s massive Energiewende program, has made great progress, but not enough. Renewables haven’t been able to replace all the energy lost to the shutdown of nuclear power in Germany, so new coal plants are being built, energy-intensive sectors of the economy have been exempted from the program, and the nation is not even close to being on track to meet its greenhouse-gas-emissions goals.

Preferential government support for renewables over nuclear has less to do with economics than with a selectively applied interpretation of what “clean” means. Nuclear is held to a different standard than renewables. All sources of electricity carry environmental and human health costs when considered across their full life cycle. But as a senior legislative researcher in Massachusetts told me as he was helping write the state’s new energy law, which favored renewables but not nuclear, “Nuclear may not emit greenhouse gases, but it’s got other problems …. Radiation is dangerous. Nuclear is a political non-starter. People are afraid of it.”

What is behind such fear? In a historical sense, the fear can be traced back to the shadow of the mushroom clouds of Hiroshima and Nagasaki and the ensuing Cold War testing of nuclear weapons. Indeed, the first global protest movement, Ban the Bomb, focused on the threat of cancer from radioactive fallout produced by atmospheric testing of nuclear weapons. This fear of radiation, in turn, gave rise to the modern environmental movement itself and remains a cornerstone of what it means to be an environmentalist. This has remained the case even as many of the early “facts” about the hazards of nuclear radiation have proven unfounded.

So why have the fears persisted? Several areas of psychological research offer insights. Research on risk perception conducted by Paul Slovic, Baruch Fischhoff, Sarah Lichtenstein, and others has found that our fears sometimes don’t match the facts because, as Slovic has written, “risk is a feeling,” not a dispassionate analysis of the facts alone. Their work has identified more than a dozen specific psychological qualities that magnify fear of some threats and minimize fear of others. Several of these characteristics intensify our fear of nuclear power:

In addition, how the human brain works plays a role. Research on cognition by Daniel Kahneman and others has found that the brain often relies on heuristics—mental shortcuts—and the biases they produce to make quick judgments, rather than taking the time to gather more information and think things through carefully. We jump to less-than-fully-informed conclusions. And once we’ve made up our minds about something—a threat or anything else—our brains tend to stick with what we believe rather than making the additional mental effort of keeping an open mind and constantly analyzing things from new perspectives. We tend to stick with the conclusions to which we’ve jumped.

Culture clubs

Then there is the matter of how humans respond to group influences, as reflected in the psychological phenomenon called cultural cognition. Research by Dan Kahan of Yale University and others, based on anthropological theories developed by the late Mary Douglas, has found that we shape our views so they agree with the views of the group or groups with which we most closely identify. Agreeing with and promoting our group’s views demonstrates loyalty, which earns us status as a member in good standing, worthy of our group’s support. This is vital for nothing less than our sense of safety, since as social animals we instinctively depend on our group—our tribe—for protection.

No wonder, then, that the debate about what kind of energy should receive government support produces such heated and visceral arguments. They aren’t disagreements about the facts alone. They are tests of—challenges to—tribal identities that we instinctively protect because they are critical to how safe we feel. As Steven Hamburg, chief scientist at the Environmental Defense Fund, has noted, “If you protested Seabrook or Shoreham or Indian Point, being asked to take a second look at nuclear is hard.” Views that conflict with your group’s basic beliefs, Hamburg says, pose “an existential threat to the world and yourself.”

Cultural cognition—interpreting the facts so we can support our group’s beliefs—also helps explain why opponents of nuclear power deny the robust evidence that nuclear radiation is nowhere near as dangerous as most people believe. A major study of atomic bomb survivors, called the Life Span Study, has found that of the nearly 90,000 people who were within 10 kilometers of the hypocenter of those blasts—thousands of whom received frighteningly high initial doses of radiation and continued exposure for months because of contaminated food, water, and air—the lifetime cancer death toll rose by only 0.3% compared with the cancer mortality among 20,000 non-exposed Japanese who have been followed as a control group. (The study was conducted initially by the US Atomic Bomb Casualty Commission in cooperation with the Japanese National Institute of Health, and it continues under the Radiation Effects Research Foundation.) The study has also found that contrary to public belief (and a swarm of sci-fi mutant movies), exposure to even high doses of ionizing radiation has caused no multigenerational genetic damage passed down from the atomic bomb survivors to their children.

Based on this robust evidence—the Life Span Study has been going on for nearly 70 years—experts say that the radiation released by the nuclear accidents in Chernobyl, Ukraine, and Fukushima, Japan, will do relatively minimal damage to human or environmental health. In the case of Fukushima, in fact, the World Health Organization predicts that because the doses were so low, there will be no increase in the rate of any radiation-related diseases beyond the normal rates in the general population.

But nuclear opponents steadfastly deny these findings. They consistently portray nuclear accidents as doing much more harm than neutral experts have found. They consistently overstate the health risk from even the tiniest problems at any nuclear power facility. This is not unlike the science denial of people who reject the evidence of anthropogenic climate change. The phenomenon is the same. It is cultural cognition working to produce a view of the evidence that, though honestly held, simply conflicts with the current state of established scientific knowledge.

Emotional rescue

These psychological realities paint a dark picture for clean energy policy making. Opposition to nuclear energy is instinctive, important to many opponents’ sense of safety and identity, and therefore difficult to overcome. As persistent opposition to nuclear power by many environmental groups demonstrates, not even an appeal to concern about the global environmental threat of climate change is enough to reverse deeply held beliefs. The fear of being disloyal to the tribe, being cast out, and losing the sense of safety that belonging to a group provides is a visceral, personal, and powerful barrier to revisiting the sources of one’s opposition to nuclear energy.

Nonetheless, there are at least some signs of possible change. A few environmental advocates and organizations are starting to accept that nuclear should play a part in mitigating climate change, and some government bodies are increasing support for nuclear as a source of clean energy. New technologies that offer smaller and safer reactors are on the horizon, and a proposed Massachusetts policy to meet the requirements of the state’s Global Warming Solutions Act, for example, includes economic incentives for them. Yet this policy would not extend to existing plants. One such facility in Massachusetts is about to close, with the likely result that the state’s reliance on energy generated from fossil fuels will rise, as has occurred in Vermont following closure of a nuclear plant there that was producing a third of all the electricity used in the state.

But given current conditions, such change is likely to be gradual at best. A shift to new nuclear technologies will take time, decades for most of them. Yet we may not have time. Demand for energy is rising rapidly. The International Energy Agency estimates that the world will need a staggering 78,000 terawatt-hours more energy in 20 years, three times the amount consumed in the United States in a year. Global greenhouse gas emissions are rising, too. The impact that humans are having on the climate system increases daily, and the radiative forcing effects of emissions released today will persist for hundreds of years. We need massive amounts of clean energy, and we need it soon. Delay causes real harm.
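A quick arithmetic check, using a round figure of my own rather than one from the article, clarifies what that comparison measures. If total US primary energy consumption is taken to be roughly 26,000 terawatt-hours per year (US electricity generation alone is only about 4,000 terawatt-hours per year), then

\[
\frac{78{,}000\ \mathrm{TWh}}{26{,}000\ \mathrm{TWh/year}} \approx 3\ \mathrm{years},
\]

which confirms that the benchmark is total US energy use, not electricity alone.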

The values-based conflict over what kind of clean energy should count as clean contributes to those delays. And that fight is the result of the inherently emotional nature of human cognition. It is pointless to label this as right or wrong, smart or dumb, rational or irrational. It’s just who we are, how our brains work.

But acknowledging that the inherently affective nature of how we think sometimes puts us at risk might challenge us to think more carefully. We need to recognize honestly that our instinctive cognitive system limits our ability to consider, as thoroughly and openly as possible, which choices will afford the greatest protection. Sometimes the choices we make, though they may feel right, don’t do us the most good. If energy policy makers can accept that, and understand just how our cognitive systems produce views that don’t always align with our most urgent priorities, they might be inspired to look for ways to overcome these limitations and apply more critical thinking to the search for climate policies that will have the greatest impact.

David Ropeik, an instructor in the Environmental Management Program of the Harvard Extension School, is the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts (McGraw-Hill, 2010).

Histories of an Innovation Icon

Sharon Weinberger’s new book, The Imagineers of War, provides a history of the Defense Advanced Research Projects Agency (DARPA), a technology research and development (R&D) agency at the Department of Defense. Her book is remarkably similar in structure and coverage to one published 18 months earlier by Annie Jacobsen, titled The Pentagon’s Brain. Both books dwell on DARPA’s early history, particularly the agency’s founding after the Soviet Union’s launch of Sputnik and the agency’s decadelong involvement in Vietnam. And both omit or skim some critical periods in DARPA’s past that are necessary for understanding the organization’s role in innovation today.

The Imagineers of War

DARPA was founded in 1958. (At that time it was called ARPA—“Defense” wasn’t added until 1972; for consistency, I will refer to the agency as DARPA.) The late 1950s were a time of enormous concern about the possibility of thermonuclear Armageddon between the United States and the Soviet Union, as well as about the threat the Soviets’ conventional military posed to Europe. President Dwight Eisenhower feared that the United States was falling behind the Soviets. These concerns were exacerbated when the Soviets launched a small satellite—Sputnik—that, though itself harmless, signaled “We can hit you.” The US military technology development apparatus looked unresponsive. Eisenhower was exasperated by inter-service rivalries that thwarted effective R&D, particularly in new realms such as outer space, which no military service could legitimately call its own. He thus was open to an alternative approach and agreed with his advisers to create DARPA as a new agency reporting directly to the secretary of defense.

The origins of DARPA are better depicted by Weinberger. It was founded with three presidential initiatives: to get the United States into space, to detect Soviet nuclear tests, and to develop missile defenses. Eisenhower had made it clear that space was to be the realm of a civilian agency, what became the National Aeronautics and Space Administration (NASA). Herbert York, who was charged with overseeing the Defense Department’s technology strategy as the first Director of Defense Research and Engineering, saw DARPA’s job as initiating space programs until NASA was established, whereas DARPA’s first director, Roy Johnson, sought to maintain a space role for DARPA.

When the space program transitioned to NASA, DARPA’s main remaining projects were the Vela programs, focused on nuclear test detection, and the Defender missile defense program. Weinberger describes the Vela Uniform program on seismic detection of Soviet nuclear tests, noting that it revolutionized the field of seismology. Importantly, Vela provided the technical means to support the limited nuclear test ban and facilitated the US and later Soviet moratoriums on underground nuclear testing. The Vela Hotel satellite program, first launched in 1963 and almost immediately successful in detecting above-ground nuclear explosions, receives only passing mention from Weinberger. And despite Vela and Defender representing the preponderance of DARPA’s programs in the 1960s, Jacobsen gives them scant attention.

The Pentagon's Brain

Both books describe the earliest missile defense program, Operation Argus, in some detail. Based on the theory of Nicholas Christofilos, who worked for York at the Lawrence Radiation Laboratory, the notion was to create a radiation belt in the extreme upper regions of the Earth’s atmosphere that would disable incoming missiles. The idea of exploding numerous nuclear weapons on the fringes of space to test the “Christofilos effect” seems outlandish today, but the fact that this project was pursued illuminates the national security environment of those times and DARPA’s willingness to undertake risky projects.

Weinberger notes that the Defender missile defense program, though consuming half of DARPA’s budget in the 1960s, was seen as a “god-awful mess” that involved “loony” ideas. The agency’s challenge was coming up with approaches that were significantly better than current ones yet not completely unrealistic. Indeed, out of this came DARPA programs such as Arpat, a complicated missile interception scheme that the head of the program himself labeled “kind of nutty.” These programs, after expending tens of millions of dollars, provided little technological payoff—despite generally being considered good science.

Both authors spend the bulk of their books on Project Agile, a highly classified counterinsurgency program deployed in Vietnam and, eventually, other countries in Southeast Asia. The program began in 1961 under William Godel, deputy director of DARPA, whose role in the early years is little understood and not well-documented in the agency’s archives. (However, much of the history of Godel and Agile was presented in detail in The Advanced Research Projects Agency, 1958-1974, by Richard J. Barber Associates, published in 1975.)

Each of these books presents compelling information about Godel’s activities, indicating a program run amok, with little oversight. Out of Project Agile came such debacles as the “strategic hamlets” program of rural pacification; fallacious assessments of the Vietnamese people under the guise of social science; and the egregious use of chemical defoliants, particularly the infamous Agent Orange. In retrospect, much of Agile was naïve, poorly managed, and rife with amateurism and even corruption. As Weinberger puts it, Godel “was running the Agile office as his own covert operations shop.”

As a fascinating and excruciating example of errant public policy, these deep dives into the Godel episode are astounding. They portray an overzealous, misguided operator who hijacked a technological agency to perpetrate an outlandish and failed program of social engineering on a massive scale. The episode illuminates the fact that there existed at least two DARPAs with little in common: the “strategic” DARPA, pursuing missile defense and nuclear test detection; and the “operational” DARPA, in which programs such as Agile attempted to bring technology into a combat zone. The latter was hardly scientific and set off, as Weinberger recounts, a battle between competing visions of the agency’s future. Neither book draws out the relevant lessons, however.

Despite DARPA’s fundamental impact on what was to become computer science, both authors give information technologies short shrift—perhaps mindful that this history has been well told in other books, most notably Mitchell Waldrop’s The Dream Machine (2001). Moreover, both Weinberger and Jacobsen focus narrowly on the ARPANET, the precursor to today’s Internet, giving scant attention to the broader, increasingly coherent program begun as the Information Processing Techniques Office under J. C. R. Licklider. His concept of “man-computer symbiosis” is not captured in either book. Waldrop and others have elucidated how DARPA fostered a multipronged development of the technologies underlying the transformation of information processing from clunky, inaccessible machines to the ubiquitous network of interactive and personal computing capabilities. These two histories offer little on this transformation—perhaps the most significant of DARPA’s impacts—which continues today in DARPA’s pursuit of artificial intelligence, robotics, and cognitive computing.

Weinberger documents how DARPA was a troubled agency in the late 1960s and early 1970s, a victim of the Vietnam malaise and resource cutbacks that affected all of the Defense Department, as well as the fact that the Defender and Vela programs had essentially run their course. In 1965, Deputy Secretary of Defense Cyrus Vance advocated abolishing the agency. Crucial to understanding how DARPA evolved organizationally and programmatically from this are the actions of John S. Foster, who became the Defense Department’s Director of Defense Research and Engineering in 1965 and remained for eight years. Weinberger gives this crucial period some mention, but fast-forwards to the post-1970 resolution in which DARPA jettisoned the Agile program and moved Defender to the Army. There is no mention of this history by Jacobsen.

Foster was unhappy with how Agile was being conducted and dismayed with DARPA’s “toleration of an ‘academic’ atmosphere, undue pretensions to independence … [and] management deficiencies,” as the 1975 Barber report noted. Foster forced major changes and became deeply involved in DARPA programs and budgets. A new DARPA director appointed in 1967, Eberhart Rechtin, saw himself as Foster’s man, assigned to clean up and provide institutional discipline while reinvigorating DARPA by emphasizing the transition of successful projects to the military. He saw DARPA as a research agency—not development—that took risks that the military services would not, and he sought new directions with new programs. Reshaping DARPA continued under Rechtin’s successor, Steven Lukasik. Whereas Rechtin was relatively supportive of Agile but disenchanted with its focus and results, Lukasik saw it as “an embarrassment” and closed it down.

Weinberger capably describes DARPA’s post-Vietnam transition, prompted by the refocusing of the White House and Office of the Secretary of Defense on the Soviet threat. This entailed searching for new alternatives to the use of tactical nuclear weapons to defend Europe against Soviet attack. DARPA played a lead role in supporting studies on how to respond to such an attack and developed the underlying capabilities required to achieve these new alternatives. Weinberger succinctly presents the “systems of systems” concept for countering the Soviet Union’s numerical superiority with advanced technology, what Under Secretary of Defense William J. Perry and Secretary of Defense Harold Brown subsequently would call the “Offset Strategy.”

To implement this concept, from the mid-1970s into the 1980s DARPA launched transformative programs on stealth technologies, standoff precision strike capabilities (the ability to accurately destroy targets from a distance), and tactical surveillance via unmanned aerial vehicles. Weinberger provides a crisp chapter on the agency’s Have Blue stealth program, including the role that Malcolm Currie, Foster’s successor as Director of Defense Research and Engineering, played in persuading the Air Force to commit to a prototype program. Missing from her account, though, is the transition from this “proof of principle” prototype to the F-117A, which was accomplished in just four years by Lockheed and the Air Force, with extraordinary top-level management oversight by Under Secretary Perry.

Weinberger gives much briefer mention to DARPA’s efforts to develop unmanned aerial vehicles—a mere two pages—and there is no mention of the Assault Breaker program to develop standoff precision strike capabilities. These are major omissions, as both were large programs with significant consequences for future defense technology capabilities. Jacobsen covers them even less, briefly describing stealth and then jumping to the F-117A stealth aircraft and the JSTARS surveillance plane used in the first Gulf War more than a decade later. There is a brief mention of Assault Breaker, but no discussion of the program itself, and a few words on tactical unmanned aerial vehicles, but nothing on how they were developed by and transitioned from DARPA. Given that these programs are often touted as evidence of DARPA’s impact in transforming tactical warfare, this lack of treatment is baffling.

In both books, it is as if much of the 1990s did not exist. In the decade following the end of the Cold War, DARPA struggled to redefine itself and its programs. Moreover, the United States was in a budget crisis due, in part, to the vast defense spending of the 1980s. At the time, the White House, the Department of Defense, and DARPA were creating highly innovative and controversial programs aimed at bringing defense technology to bear on national economic competitiveness. This era of “dual-use” programs was a major redirection of DARPA under then-Secretary of Defense Perry—and was highly contentious due in part to questions about government investment in R&D that some contended could more effectively be performed by private industry. The midterm elections in 1994 produced a Republican majority in Congress that set out to end dual-use programs, creating a crisis of direction for DARPA during the remaining six years of the Clinton administration. None of this is covered in either book.

Another major gap in both books is the lack of discussion of the Future Combat Systems program, which DARPA conducted in partnership with the Army starting in the late 1990s. Under this program, tanks were to be replaced with a networked system of distributed robots and sensors—but the program was a massively expensive debacle. It was cancelled in 2009 by Secretary of Defense Robert Gates, after more than $18 billion had been expended with nothing to show for it. Weinberger acknowledges this in a single sentence. There is no mention of the program by Jacobsen.

For most of the 2000s, the DARPA director was Anthony Tether, formerly director of the agency’s Strategic Technology Office from 1982 to 1986. The terror attacks of September 11 occurred within months of Tether becoming director, enmeshing DARPA in the “War on Terror.” The Total Information Awareness program, run by former Admiral John Poindexter (a controversial choice given his role in the Iran-Contra affair), is covered in both books. Both tell the story of Tether and Poindexter making serious misjudgments that resulted in public outcry over the program’s potential for invading privacy. The controversy led to Poindexter’s resignation and the program’s termination. But as both books document, the technologies that DARPA developed for deep data-mining were transferred to the intelligence agencies—particularly the National Security Agency.

Weinberger then segues to DARPA’s Grand Challenge, an incentive prize competition for demonstrating self-driving vehicle technology. In the first Challenge, no vehicle came even close to completing the off-road course. Building on new technology and the experience of the first race, five vehicles finished the second Challenge, pushing autonomous vehicles closer to reality. These and subsequent Challenges succeeded in creating interest and in incentivizing teams of researchers, often university-based, to demonstrate integrated capabilities. Importantly, the Challenges built on DARPA’s technology-focused programs in robotics, sensing, autonomy, communications, and energy storage as underlying enabling technologies.

Neither book provides much coverage of DARPA’s support for the wars in Iraq and Afghanistan under Tether. Weinberger does discuss the Nexus 7 data-analysis effort started by Tether’s successor, Regina Dugan. This program embedded social science “insurgency experts” and information technology scientists working on crowd-sourcing and social networking into Afghanistan—the first time since Vietnam that DARPA ventured into an operational area. Nexus 7 sought to use “reality mining” for “computational counterinsurgency.” After a period of months with more than 100 personnel in and out of country, the results of Nexus 7 in Afghanistan are debatable: although arguably a technical success, its operational impacts were limited. DARPA comes out of this recent experience facing questions similar to those raised by its earlier engagement in Vietnam: Should the agency develop social science and information technology for “population-centric” counterinsurgency? Should it focus on such immediate wartime efforts at all, or should its role remain long-term research of the kind that in the past has produced game-changing technologies? These types of questions are barely raised in either book.

Jacobsen ends The Pentagon’s Brain with roughly a hundred pages that largely veer away from DARPA’s history. She extensively discusses the improvised explosive devices that killed and maimed US troops in Iraq and Afghanistan, but DARPA’s role in dealing with these was marginal. From there, she forays into technology and security issues only loosely connected to DARPA, including an aside on terror attacks and several pages on the Human Terrain System (a failed Army social science-based counterinsurgency program that had no DARPA involvement). Her final 50 pages are essentially speculation as to what the agency may currently be working on and what technologies it might advance in the future in areas such as robotics, autonomy, artificial intelligence, human-machine systems, and biological regeneration. She posits that these technologies may be leading to a new adversary—the autonomous “killer robot”—and leaves it at that.

Weinberger finishes The Imagineers of War by asking whether DARPA has devolved into a narrowly focused technology development agency, as opposed to one that takes on “truly high risk” projects. She ends with a vignette about the agency’s neuroscience programs, which aim to modulate the human brain with neural implants to treat illness or injury. She concludes that its “neuroscience work could transform the world by revolutionizing medicine, and it could lead to weapons that change the way we fight future wars. Whether that world will be a better place is unclear.”

In sum, neither book is a comprehensive history of DARPA. Rather, each illuminates selective aspects of the agency’s past while leaving out crucial parts of its history. Both lack a clear organizing principle, though Weinberger’s treatment is the more thorough and carefully documented, based on numerous interviews, including with most of the DARPA directors and a large number of office directors and program managers.

DARPA has dramatically affected many areas of defense capabilities. It has also produced much broader, revolutionary economic and societal advances in information technologies, microelectronics, materials, and other areas. Although Agile and its offshoots—which take up disproportionately large portions of both books—were ill-founded failures, they represented but one element of DARPA, and one that became decreasingly relevant over time to how the agency was defined and what it did. Largely missing from both books is what DARPA did over its 60-year existence to garner its reputation as an innovation icon. Missing is any discussion of its seminal technology work in microelectronics, cognitive computing, and synthetic biology. A more coherent and complete treatment that elucidates how the agency changed over time in various historical and organizational contexts is crucial to understanding today’s DARPA and its ongoing mission to demonstrate what the future could be.

Forum – Summer 2017

Climate engineering

The articles in the Spring 2017 Issues by David W. Keith, “Toward a Responsible Solar Geoengineering Research Program,” and Jane C. S. Long, “Coordinated Action against Climate Change: A New World Symphony,” provide informative views into several points of current debate on climate engineering research, its management, and governance. The authors agree on several key points. They agree that it is urgent to expand research on climate engineering interventions; that research is needed on both carbon-cycle and solar options; that research must address both scientific and engineering questions; that the agenda should be driven by societal need rather than investigator curiosity; and that research should target interventions that are plausible candidates for actual use, not idealized scenarios. They also agree that research must vigorously pursue two competing aims: to identify and develop interventions that are as effective and safe as possible, and to aggressively scrutinize these to identify potential weaknesses or risks.

Their main disagreement concerns how to organize research on the two types of climate engineering: carbon-cycle and solar methods. Long argues that they should be combined, because the two approaches must be evaluated, compared, and decided jointly, together with mitigation and adaptation, to craft an effective strategic climate response. Keith argues that they should be separated, because of large differences in the bodies of scientific knowledge and technology on which they draw; the nature and distribution (over space and time) of their potential benefits, costs, and risks; and the challenges they pose for policy and governance.

A first step toward clarifying this disagreement is to note that the authors emphasize different elements of policy-making processes. Keith is mainly concerned with designing research programs. His programs are not purely scientific in their motivation and focus, in that they aim to develop and test technologies that can contribute to solving a societal problem. But they are well enough separated from policy decisions, and from the comparative assessment of capabilities, risks, and tradeoffs needed to inform decisions, that their management and funding are best optimized for each type of climate engineering separately. Long is mainly concerned with assessment and decision making. She argues that effective climate policy making must strategically consider and compare all response types, and that assessments, scenarios, and research programs must therefore also be strategically integrated if they are to usefully inform policy decisions. The authors thus agree on the need for integration of carbon-cycle and solar methods in assessment, scenarios, and policy making, but diverge on what this implies for the design, funding, and management of research programs: separate programs for carbon-cycle and solar methods, or combined?

This question turns on whether achieving successful integration in assessment, scenarios, and policy making requires integration in research program management and funding. In my view, such dependency could arise in three ways. First, integrated research would be favored if a coherent and defensible research program mission cannot be defined at the level of one response type, but only at some higher level of aggregation: as Long points out, “make solar geoengineering work” is not a suitable mission statement for a research program. Second, integration would be favored if effective assessment requires strong control over research management decisions, including allocation of resources between carbon-cycle and solar interventions. Finally, integration would be favored if research governance needs are driven less by differences in the opportunity and risk profile of different responses, and more by aggregate public or political views of climate engineering that do not clearly distinguish the two types: in this case, integration might be required as a matter of political risk management.

Edward A. Parson

Dan and Rae Emmett Professor of Environmental Law

Faculty Co-Director, Emmett Center on Climate Change and the Environment

UCLA School of Law

Jane Long makes several important points. Among them is that geoengineering research should not have as its mission the deployment of geoengineering concepts. She cogently argues that “The goal for climate intervention research must be to understand the potential efficacy, advisability, and practicality of various concepts in the context of mitigation and adaptation.” David Keith makes a similar point and provides two guiding principles: that research on solar radiation management should be part of a broader climate research portfolio on mitigation and adaptation action, and that research should be linked to governance and policy work.

We generally think of solar radiation management research in terms of small tests that can define particular parameters, such as the atmospheric residence time, transport, and fate of aerosol scattering particles. As both Long and Keith observe, these tests require thoughtful governance arrangements that may be difficult to establish at present.

Twenty-six years ago there was a large-scale natural experiment in solar radiation management: the eruption of Mount Pinatubo in the Philippines, which injected roughly 17 million tons of sulfur dioxide into the middle and lower stratosphere. Sulfate aerosols spread across the Pacific Ocean in a few weeks and around the globe within a year. Spectacular sunsets over the next two years were one indication of the stratospheric residence time of the aerosols. The event produced observed cooling in the Northern Hemisphere of 0.5 to 0.6 degrees Celsius, equivalent to a reduction in radiative forcing of perhaps 3 watts per square meter. Globally averaged cooling of approximately 0.3 degrees was observed.
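Those figures imply a simple back-of-the-envelope relationship, offered here as an illustrative sketch rather than anything from the letter: dividing the observed cooling by the change in radiative forcing gives a short-term sensitivity parameter, which ocean thermal inertia and the brief duration of the forcing keep well below equilibrium estimates:

\[
\lambda_{\mathrm{transient}} \approx \frac{\Delta T}{\Delta F} \approx \frac{0.5\ \mathrm{to}\ 0.6\ \mathrm{K}}{3\ \mathrm{W/m^2}} \approx 0.2\ \mathrm{K\ per\ W/m^2}
\]

(using the globally averaged cooling of 0.3 degrees instead yields roughly 0.1 K per watt per square meter).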

Such natural experiments in stratospheric aerosol injection are infrequent. The eruption of Krakatau in 1883 produced a forcing of a little over 3 watts per square meter. There were three eruptions between Krakatau and Pinatubo that produced forcings of 1.5 to 2 watts per square meter and five additional ones of 0.5 to 1 watt per square meter. The average frequency was once every dozen years, although there was a long quiet period from about 1920 to 1963.

It seems both worthwhile and feasible to develop a program to learn from the next such eruption. Much was learned from Pinatubo about scientific models, but as the 2015 National Research Council report Climate Intervention: Reflecting Sunlight to Cool Earth stated, “More work is needed in characterizing these processes in nature (through measurements), and in modeling (through better model treatments and a careful comparison with observed features of aerosols and their precursor gases) before scientists can produce truly accurate models of stratospheric aerosols.” Understanding the chemical reactions, mixing, and particle formation after such an event can help characterize not only solar radiation management but also aerosol-forcing effects on climate. Global observations can help us understand the consequences of solar radiation management for precipitation, plant productivity, and carbon uptake, among other effects.

The Climate Intervention report had a short section describing the observational requirements for making better use of volcanoes as natural experiments. It points out that “our ability to monitor stratosphere aerosols has deteriorated since [Pinatubo], with the loss of the SAGE II and III satellite-borne instruments.” The report suggests both satellite systems and a deployable rapid-response observational task force (one that would have other atmospheric science uses to occupy it between eruptions).

The creation of an international program to learn from the next Pinatubo could jump-start needed instrumentation and perhaps governance arrangements in a low-key way, building trust and indicating whether governance of deliberate solar radiation management experimentation is feasible along the lines that Long and Keith describe.

Jay Apt

Co-Director, Carnegie Mellon Electricity Industry Center

Professor, Tepper School of Business and Department of Engineering & Public Policy

Carnegie Mellon University

David Keith issues a strong call for geoengineering research, echoing calls that I and others, including Keith, have made previously. I completely agree with him that mitigation (reducing emissions of the greenhouse gases that cause global warming) should be society’s first response to the threat of human-caused climate change. I also agree that even if mitigation efforts are ramped up soon, they may be insufficient to prevent some dangerous impacts, and society may be tempted to try to control the climate directly by producing a stratospheric aerosol cloud or brightening low marine clouds.

It will be a risk-risk decision that society may face in the future: is the risk of forgoing deliberate climate control greater than the risk of attempting to cool the planet? To make an informed decision, we need much more information about both risks, and thus we need a research program.

My only disagreement with Keith centers on his general sympathy toward eventual geoengineering implementation. I think the governance issues will be much more difficult than the examples he gives suggest. Remember, we are talking about governing the climate of the only planet known to support life. Air traffic control and international banking do not have to be perfect; small mistakes, even if very unfortunate for those affected, will not result in a global catastrophe. And how can we agree on how to set the planetary thermostat, with imperfect compensation for those who end up with a worse climate? How will we ever be able to attribute regional climate changes, either bad or good, to geoengineering, when natural variability is so large?

I support geoengineering research because we need to reduce the unknowns. We may discover large risks that we are unwilling to take, and the research may instead produce enhanced cooperation toward rapid mitigation, with the realization that there is no safe “Plan B.”

But what about the “unknown unknowns,” as Donald Rumsfeld put it? Will the world ever be willing to take a chance on a complicated technical endeavor to control Earth’s climate, in the hope that there will be no bad surprises? Will we accept whiter skies and not being able to see the Milky Way as easily as now? Will we trust the militaries of the world to not use this new technology as a weapon? Can we live with more ultraviolet radiation reaching the surface due to ozone depletion caused by stratospheric particles?

Doing our best with the limited resources available, we are now trying to see whether we can find some combination of materials, locations, and timing for injecting particles into the atmosphere that would yield a better climate for most. So far, we have not been successful. But it is early days, and we owe it to the world to do much more such research, while at the same time advocating for rapid reductions in the greenhouse gas emissions that are causing global warming. It will not cost much, and it will be a wise investment for the governments of the world. We can’t wait.

Alan Robock

Distinguished Professor of Climate Science

Department of Environmental Sciences

Rutgers University

David Keith justifies his call for a large-scale international solar geoengineering field research enterprise in environmental justice terms. He argues that in light of mounting evidence that emissions reductions alone may be insufficient to limit severe climate risks, the beneficiaries of a research program to understand the risks and benefits of potentially deploying solar geoengineering technologies to rapidly cool Earth would include “the world’s most vulnerable people, who lack the resources to move or adapt” to rising sea levels and increasing extreme weather. Thus, the multiple “reasons for reluctance” that, Keith acknowledges, constrain support for solar geoengineering research must be weighed “against the evidence that solar geoengineering could avert harm to some of the world’s most vulnerable people.”

The problem is that such evidence is not established. The benefits and risks of any solar geoengineering program will be unevenly distributed across the world and nations might have widely divergent preferences for whether, when, how, and toward what ends solar geoengineering technologies should be deployed. Who would decide whether solar geoengineering is deployed to support the climate resilience goals of farmers in the Sahel or Bangladeshis if they conflict with, say, maintaining and expanding ice-free ports in the Russian Arctic?

According to Keith, a “responsible” solar geoengineering research program should “have an engineering core,” using atmospheric experiments to investigate detailed, plausible operational scenarios for deployment. It would focus on assessing various researcher-determined measures of risk and effectiveness in achieving desired climate outcomes, with results informing governance and policy development.

This is not sufficient. Recent research suggests that in the absence of broader societal input and consent, even small-scale, low-risk field experiments will trigger concerns over a slippery slope toward larger-scale, riskier experiments and deployment. Without meaningful input and support from the climate-vulnerable constituencies it is intended to benefit, a solar geoengineering field research program would lack much-needed legitimacy and would risk significant opposition. A responsible research program needs to account for how climate-vulnerable nations and communities themselves might view the value of such a program and to ensure that they are fully engaged in co-creating its research and governance goals and objectives.

Thus, a responsible solar geoengineering research program should include several core elements. As a prerequisite, clear support for solar geoengineering research should be secured from an international coalition of nations, including nations particularly vulnerable to climate change as well as high-carbon-emitting nations that are fully committed to ambitious emissions reductions. Research priorities should be explicitly codeveloped in collaboration with technical experts, social scientists, and civil society organizations from climate-vulnerable nations. Finally, an international research governance system must be designed with meaningful input from civil society to address concerns about transparency, liability, and justice.

Peter C. Frumhoff

Director of Science and Policy

Union of Concerned Scientists

Jennie C. Stephens

Dean’s Professor of Sustainability Science and Policy

Northeastern University

David Keith’s article gives rise to an interesting question about the utility of the label “responsible research” in the context of solar geoengineering. One of the central tenets of responsible research is that society, broadly defined, should have a meaningful stake in debating and modulating the direction of scientific research. In the case of research on solar geoengineering, with its inherently global impacts, the development of effective mechanisms for facilitating broad societal discussions about the desirability of this direction for research seems to be hugely important. However, this is not Keith’s focus. The notion of a genuine two-way dialogue around the desirability of research on this topic is absent: society features either as people meekly awaiting the benefits of techno-scientific intervention or as subjects to be enrolled in research projects to improve the effectiveness of the intervention.

Keith’s treatment of the so-called “slippery slope” concern (that research may generate momentum toward deployment through various mechanisms of lock-in) is particularly revealing of his understanding of the proper relationship between science and society, suggesting an expectation that research can ultimately bypass the need for societal debate and discussion. For example, he claims that a slippery slope is not a problem in itself if “research reveals that solar geoengineering works better and with less risk than we think.” But this assumes that research will be able to establish “once and for all” whether benefits outweigh risks. However, this is simply impossible: not only is there much that is likely to be unknowable about such interventions, but there are also many different disciplinary and social perspectives about what would constitute acceptable levels of risk, about the kinds of knowledge that would be necessary to answer such a question, and even about the meaning of risks and benefits themselves. Presuming that scientific research will be able to come up with a single answer and make these disagreements go away is quite simply unrealistic. There will always be multiple, contested answers to the question of whether geoengineering is on balance a good or bad idea—hence, the need for a genuinely responsible approach to research that incorporates a wide range of societal stakeholders in deciding if (not just how) this kind of research should go ahead.

By belittling concerns around the slippery slope as unfounded as long as the science shows us everything is all right, Keith reveals an overblown faith in science and a fairly dismissive attitude to the concerns that other people might bring to this debate. Despite nodding toward a number of other arguments against research, he quickly concludes that these “do not amount to a strong argument,” before promoting his own particular (and questionable) view of the benefits of research. Closing down the space for debate in this way would appear to limit the possibility for a really “responsible” attitude toward any potential research in this area.

Rose Cairns

Research Fellow, Science Policy Research Unit

University of Sussex

Brighton, United Kingdom

By using the adjective “responsible” in the title of his article, David Keith points to a dilemma: responsibility goes forward and backward. In the case of solar geoengineering, there’s the forward-looking “move by humanity to take deliberate responsibility for managing the climate,” as Keith puts it, which can be viewed most generously as the caretaking or stewardship responsibility for creating conditions in which life can flourish. But there’s also the backward-looking taking of responsibility for past actions that created the situation, the “cleaning up our mess” part, which mingles with accountability and liability. Forward-looking responsibility is entangled with agency; backward-looking responsibility is entangled with causality and blame.

Keith points to five reasons why people are reluctant to support a solar geoengineering research program: uncertainty, slippery slope, messing with nature, governability, and moral hazard. But there is a sixth: the notion that solar geoengineering represents an avoidance of responsibility. As one of the people interviewed in my studies of perceptions of solar geoengineering put it, “It’s like transferring the responsibility from myself to somebody else in tackling climate change.” There’s a transference of agency here, as well.

Who can take that backward-looking responsibility? Scientists and researchers can’t do much about this on their own, and the intense debate about “loss and damage” in the climate regime attests to the difficulty. There’s no real social process for responsibility-taking on the scale of global climate change. The best that we have is the Common but Differentiated Responsibilities and Respective Capabilities principle included within the United Nations Framework Convention on Climate Change to acknowledge the different capabilities and differing responsibilities of individual countries in addressing climate change. Fossil-fuel companies, the states that subsidized them, and the citizens of rich nations who burned the carbon and benefited from it all deserve some share of responsibility. But instead of putting a price on carbon, the US government subsidizes it—irresponsibly.

The dilemma is that a research program itself can’t be fully responsible as an independent, self-organized entity. The context is what makes it so. Right now, the context is one of extreme irresponsibility. Research based in the United States will be “responsible” only if the state and corporations are making attempts to curb the harm, recognize past harms, change everything. So what’s a researcher to do? Best guess: listen, be responsive, align with researchers around the world, and support them in taking their research in the directions they want it to go. Recognize and name whenever possible the irresponsibilities and asymmetries, rather than speaking of a common humanity that’s created the mess and now has the responsibility of repair. Prospects of actually governing this technology, like the prospects for governance of climate change, may depend upon such recognition. It’s beyond the common purview of science to take responsibility for more than forward-looking science or its outcomes, but these are extraordinary times.

Holly Jean Buck

Department of Development Sociology

Cornell University

David Keith provides a useful provocation for thinking about the intersections of science and society in the context of solar geoengineering research. What does responsibility mean, and for whom? Keith’s notion of responsibility seems to entail more “transparent” research on solar geoengineering to enable responsible decision making. To this end, he lays out some key issues (though certainly not all) raised by the prospect of solar geoengineering research, and he suggests that they are amenable to resolution through the provision of more science. However, a different account of the relationship between science and politics opens up a set of questions that he doesn’t address. The question of the “responsibility” of a decision—or a research program—is not just a matter of scientific facts, but of values, interests, and context. This raises important questions about the relationship between science and policy, the potential distributional implications of innovation, the role of ignorance and uncertainty, and the importance of public engagement.

Keith argues that an international research program on solar geoengineering—one that is linked to, but distinct from, research on carbon dioxide removal approaches (see Jane Long’s counterpoint to this claim for separation)—is urgently needed for societies to effectively manage climate risks, especially for “the world’s most vulnerable people.” But this argument demands further scrutiny. Keith seems to argue that by virtue of his expertise he knows what matters to vulnerable people, and that solar geoengineering research will benefit them. Scientists frequently make these kinds of claims, but as the British researcher Jack Stilgoe has pointed out, the history of technology suggests that many sociotechnical systems tend to exacerbate the gap between rich and poor, rather than close it. If we want to treat this as an empirical question, we might, at the very least, develop mechanisms to ask people who are indeed vulnerable if they want solar geoengineering research to move forward on their behalf.

Keith also argues that uncertainty alone is not a sufficient reason to oppose research, because “the central purpose of research is to reduce uncertainty.” However, this view of uncertainty may miss the mark in at least two ways: it misunderstands opposition to research, and it seriously overestimates the ability of science to resolve controversies about technology and risk.

With regard to the first point, for some opponents of research, ignorance is not only an option, but the right option. There are certainly some areas of innovation that, for better or worse, societies have chosen not to pursue (for example, human cloning). An “ignorance is not an option” rationale for research could have the effect of limiting social choice in problematic ways, and it implies a level of inevitability about innovation that is not obvious. Debates over whether or not to move forward with solar geoengineering research will tend to depend on how people perceive the purposes, values, and risks of research, which is not at all a straightforward proposition answerable by more science.

On the second point, as Arizona State University professor and writer Daniel Sarewitz has argued, persistent debates about genetically modified organisms, nuclear power, and chemical toxicity show that science often does little to settle controversies—and can sometimes make them worse. Uncertainties in these domains often resist scientific reduction, more science does not always tell us how to act wisely, and partial knowledge can create excess confidence that action is worth taking. Promises that more research in complex areas will reduce uncertainties, and that this will compel political or policy action, should be met with healthy skepticism.

Certainly, many of these concerns extend well beyond the emerging domain of solar geoengineering research, including into climate change science and politics more generally. However, this isn’t a reason to sidestep thorny questions at the heart of science policy. Experience suggests that neither Keith nor any other expert has the political privilege of determining what “responsible” approaches to solar geoengineering might be. Democratic deliberation, not expert monopoly, should lead the way in discussions of the future (or not) of research in solar geoengineering.

Jane A. Flegal

Doctoral Candidate

Environmental Science, Policy, and Management

University of California, Berkeley

Why carbon capture is not enough

The world hasn’t been very successful at dramatically reducing carbon dioxide emissions with existing technologies, so what could be wrong with a proposal to reframe climate change in order to make carbon capture a more feasible solution? In fact, investing in a broad suite of technologies to mitigate climate change is critical. But the reframing of the problem proposed by Klaus S. Lackner and Christophe Jospe in “Climate Change is a Waste Management Problem” (Issues, Spring 2017) highlights a serious misunderstanding of the reasons why stopping climate change has been so difficult.

Their main argument is that framing carbon emissions as a waste management problem akin to trash or sewage disposal, rather than as a typical pollution problem, will cut some Gordian knot. But it’s precisely because carbon dioxide is not like a typical waste problem that people have not been more motivated to find solutions.

With a waste problem such as garbage or sewage, the impact on your personal well-being and health is immediate and very tangible. If your home has trash and raw sewage piling up, you will be affected by the sight and smell very quickly, as well as face an increased risk of getting sick. But regarding carbon dioxide, we exhale it 24 hours a day, it cannot be seen or felt, and in reality it doesn’t have any immediate effect on public health or personal well-being. Even for the longer-term effects of climate change, most people won’t viscerally feel them. For example, recent polling from the Yale Climate Opinion Maps project found that roughly 60% of people in the United States were concerned about global warming, but only 40% thought it would harm them personally. Moreover, we don’t know how many of those 40% would be willing to pay to prevent harm, with economic surveys suggesting that most US residents aren’t willing to pay the full social cost of carbon.

More important, paying to reduce your own personal carbon emissions doesn’t actually prevent you from bearing the effects of climate change, since it’s a global problem. If you want to pay to protect yourself from direct effects, you might buy homeowners insurance or move out of areas prone to natural disasters. But unlike with sewage treatment, you cannot pay for local climate mitigation that will clearly benefit you.

While Lackner and Jospe give some rough estimates of the cost for carbon air-capture technology and make optimistic promises that the cost will come down, they give no estimate of the cost or feasibility of storing the carbon. They do note that all storage technologies besides geologic storage are too expensive or impractical. Yet large-scale geologic storage has begun to be used at only a few sites and only in the past two to three years. Will the carbon stay underground? Is the technology safe? Is it affordable? Will the public trust it? We have no idea.

The authors repeatedly insist that a major benefit of the waste framework is that it “does not require top-down coordination and management.” But in most developed countries, all other disposal systems, such as for trash and sewage, are run entirely by the state and are affordable only because they are mandatory.

They also state that “Nobody can buy a house today without a sanctioned method for sewage handling, and household garbage must be properly disposed of.” This ignores the fact that 60% of the global population lacks access to flush toilets or proper sewage disposal. Even though the immediate benefits of sewage systems are clear, they are still unaffordable to a majority of the world’s population.

All solutions to climate change have their shortcomings. Most important, air capture of carbon dioxide doesn’t solve all of the other detrimental effects of energy production on public health and the environment, such as land use change and air and water pollution. Air capture may eventually be an affordable way to remove carbon dioxide (and maybe other greenhouse gases) from the atmosphere, but it does nothing to keep heavy metals, nitrogen and sulfur oxides, or coal ash produced during the energy-production cycle from entering air and water supplies. Developing and expanding clean energy sources such as nuclear and renewables, improving energy efficiency, and driving electric vehicles do reduce these other environmental impacts, which are in many ways much more tangible and immediate concerns to the public.

For carbon capture to work, it will need a better business model than relying on wealthy elites to voluntarily pay for their waste streams. The authors hint that there may be ways to make money from using the carbon, and that seems like a more feasible commercialization pathway for carbon air-capture technology.

We will almost certainly need carbon capture and storage as part of the solution to deep decarbonization. But as long as we’re reframing climate change, we should do so in a way that actually makes the solutions more feasible, not less.

Jessica Lovering

Director of Energy

Alex Trembath

Communications Director

The Breakthrough Institute

Oakland, California

Making big science decisions

In “Notes from a Revolution: Lessons from the Human Genome Project” (Issues, Spring 2017), David J. Galas, Aristides Patrinos, and Charles DeLisi highlight a chronic flaw in US science policy making that results in missed opportunities, inefficiencies, and in some cases wasted federal resources. The flaw is that the government has no reliable mechanism to plan and execute large scientific projects when they involve several federal agencies.

Individual executive departments and agencies have been extraordinarily successful over many decades in planning and executing large projects. I’ll mention only three. The Department of Energy (and its predecessor agencies) has built world-class instruments to study atomic nuclei and elementary particles as well as light sources for use by many fields of biological and physical sciences. The National Aeronautics and Space Administration has built and launched hundreds of instruments to study the solar system and the broader cosmos. The National Science Foundation has deployed a variety of ground-based and orbiting instruments to probe far into distant space (including the Laser Interferometer Gravitational-Wave Observatory that in 2015 made the first direct observation of gravitational waves created by colliding black holes) and launched innovative research ships to study the oceans from pole to pole and at the greatest depths.

That said, the authors are correct in calling attention to the “need for a rigorous but flexible process to evaluate large-scale transformative proposals” that significantly affect several fields and federal agencies for all the reasons the authors give. Inside the federal government, this is a job for the White House Office of Science and Technology Policy (OSTP) and its director, who also serves as the president’s science advisor. However, OSTP is a small office with no authority over budget matters. Its role is strictly advisory. The National Science and Technology Council (NSTC), chaired by the president, and its coordinating committees provide an important mechanism for interagency planning. But OSTP officials and NSTC members—cabinet secretaries and heads of research agencies—move on at the end of an administration, or even sooner. What is needed is a mechanism outside the federal government that has continuity and credibility and can engage the research communities—universities, national laboratories (federal and private), and industrial labs—in assessments of needs, evaluation of options, and strategic planning for federal agencies and other partners, domestic and international. One possible model for better planning and coordination of research activities is described in an earlier Issues article by Gary E. Marchant and Wendell Wallach, “Coordinating Technology Governance” (Summer 2015).

The authors of the present article suggest that the National Academies of Sciences, Engineering, and Medicine could take this on. Their decadal reports—for example, in astronomy and astrophysics—are influential in setting priorities for whole research fields. Even though the charter of the National Academy of Sciences, under which all the academies operate, and the Executive Order creating the National Research Council restrict the activities of the Academies, they could play a coordinating role, collaborating with several science, engineering, and medical research nongovernment organizations to establish an entity of some kind to take on this difficult job. Many of the challenges to the US research enterprise, including support of high-risk transformational research and innovative university-industry-government partnerships, have been described in several reports of the American Academy of Arts and Sciences. Perhaps a study by the National Academies that focuses on new mechanisms for long-range strategic planning of large interagency activities (including facilities and programs) in cooperation with nonfederal partners could flesh out the possibilities.

Neal Lane

Senior Fellow in Science and Technology Policy

Baker Institute for Public Policy

Rice University

Former presidential science advisor and director of the National Science Foundation

Measuring research benefits

With “Are Moonshots Giant Leaps of Faith?” (Issues, Spring 2017), Walter D. Valdivia has joined the distinguished ranks of science and technology policy analysts who have written eloquent explanations of why ex post evaluation of research and development (R&D) investments is so difficult, if not impossible, at any but the highest levels of aggregation. He poses an interesting question: whether abnormally large increases in government-funded R&D program budgets, which he calls, somewhat infelicitously, “moonshots,” yield proportionately large benefits. He then details many of the reasons we are not generally able to analyze the benefits of more routine R&D budgets, never mind those that receive large injections of new money in a short time.

Though one might quibble with one or two of his claims, the overall thrust of his article is right on point. Quite naturally, citizens, politicians, and all manner of experts would like to be able to quantify the benefits that result from our huge public (and private) investments in R&D. There are good reasons for asking this question about the aggregate R&D budget as well as about various parts of it, right down to the level of the individual research project and the individual researcher.

Unfortunately, as Valdivia nicely demonstrates, we can’t provide a straightforward and fully satisfying answer to the benefits question at any level. At best, we can examine various surrogates, indicators, partial measures, and indirect hints to try to get some empirical purchase on the answer. In keeping with Valdivia’s final claim, at the end of the day there is still no substitute for informed expert judgment, with all its biases, aided by the available but inadequate measures, to tell us both what we got from past R&D investments and what we might get from future ones.

Christopher T. Hill

Professor of Public Policy and Technology, Emeritus

Schar School of Policy and Government

George Mason University

Walter Valdivia provides a good summary of the literature on the effects of science on society at three levels: (technological) innovation, knowledge, and research organization. The issues he raises have been well known for decades. He cites the difficulty of measuring the links between research and economic growth, the limitations of publication and citation counts, and the limited administrative capacity for making enlightened choices in promising fields.

Valdivia’s recommendations, however, do not cover the full scope of his criticisms. His discussion is essentially concerned with technological impacts, but it does not address the full array of impacts, particularly those less quantifiable, such as cultural impacts. Neither does he discuss the negative impacts of the application of science. He does suggest that the “full array of means by which knowledge production meets people’s needs” should be considered, but that is all. Valdivia calls for investments in administrative capacity and in general-purpose technologies aimed at specific goals, and he calls for agencies to pool their political capital for greater effect.

I think it is time to articulate the issue of science and society in totally new terms. A new paradigm and, above all, a new discourse are needed. First, we must admit that social scientists have never managed to produce the evidence necessary to demonstrate a link between science and society (although we all believe intuitively that there is such a link). Second, we (scientists and their representatives) still defend science publicly based on a decades-old discourse. Yet we have never convinced policy makers with a discourse on social and economic impacts, because “science and technology funding is more likely to be increased in response to threats of being overtaken by others (Sputnik, Japan, Germany, now China) than it is to respond to the promise of general welfare or eventual social goods,” as Caroline Wagner said on the National Science Foundation’s Science of Science Policy Listserv.

I have no ready answers as to what this new discourse should be, although training of students certainly should be a central part of it, and knowledge as a concept should be less abstract than it is now. One thing I am sure of is that the scholarly analyses and the public discourses of scientists have to make a tabula rasa of everything we have long assumed. Everyone proclaims that the linear model, in which all innovation begins with basic scientific research, is dead, but in fact it is still alive and kicking. The issue is not whether the model (and its many variants under different names, such as the chain-linked model) is right or wrong, but that it is not the appropriate “marketing” tool to sell science to the public. Today, innovation has taken the place of research as a cultural value responsible for growth and welfare, and research gets little hearing in the discourse of progress. For better or worse, scientists have to take this into account.

Benoit Godin

Professor

National Institute of Scientific Research

Montreal, Quebec, Canada

Bats and human health

In “Give Bats a Break” (Issues, Spring 2017), Merlin D. Tuttle argues that limited scientific evidence supports the degree to which the media sensationalize the role of bats as hosts of significant human viral pathogens other than rabies-causing lyssaviruses. He is correct in assessing the total annual number of human deaths due to bat-borne viruses as low. And like him, I am appalled by the bad reputation that bats have received over the past decade based on limited or misinterpreted scientific data, leading to measures to destroy entire bat populations for no reason. Tuttle emphasizes his frustration with the unanswered question: why are there so few outbreaks of highly lethal diseases caused by coronaviruses or filoviruses every year given the abundance and geographic distribution of their presumed bat hosts? Indeed, my favorite phrase in his article is: “small samples have been mined for spurious correlations in support of powerful pre-existing biases [in regard to bats], while researchers ignored evidence that pointed in the opposite direction.”

However, Tuttle swings the pendulum too far in the opposite direction. He correctly cites my speculation that arthropods or fungi could be the hosts of Ebola virus. This statement, however, does not mean that I am certain that bats have to be excluded from the Ebola virus host search. Although no evidence unambiguously supports bats as harboring Ebola virus, scientific data suggest that bats may be exposed to this virus on a relatively regular basis. Thus, an arthropod or fungus on a bat may be the Ebola virus host—and to examine such a hypothesis, bats would have to be sampled.

Tuttle also minimizes the fact that Marburg and Ravn viruses, very close relatives of Ebola virus and equally if not more lethal to humans, have been isolated repeatedly from Egyptian rousettes, or Egyptian fruit bats, sampled in caves associated with human deaths. In experimental settings, these bats can be subclinically infected with Marburg viruses, and the infected bats shed the viruses orally and in their excreta for sustained periods. Further, under experimental conditions, these bats have been shown to transmit the viruses to other bats. Thus, though it’s possible that Egyptian rousettes may not be the major host of Marburg viruses, the bats certainly are a host of all known Marburg viruses and therefore their role in disease transmission ought to be studied.

Tuttle is right about MERS coronavirus being harbored in dromedary camels rather than in bats, as was hypothesized when the virus was discovered. However, he omits the accumulated scientific evidence that this virus is nested deep within a branch of bat-borne coronaviruses on the coronaviral phylogenetic tree. The question is not only where a human contracts a virus, but also how that virus emerged. The current scientific evidence strongly points to a bat-to-dromedary-camel transmission event in the past—and this hypothesis raises the question: under which circumstances do bat viruses evolve to become human health threats? Consequently, the phrase used to introduce the article, “Searches for new viruses in bats are unlikely to contribute substantially to human health,” should not have been used.

Ultimately, the correct path lies somewhere in the middle: scientific exploration of the bat virome and the role of bats in human disease ought to be performed in the least disruptive and destructive manner possible. The incredibly important role of bats in mosquito control and plant pollination ought to be taught more effectively than in the past, and scientific sensationalism of any kind ought to be stamped out. Still, a single introduction of Ebola virus into the human population in 2013 ultimately led to more than 11,000 human deaths. Thus, if bats were involved in this unlikely, typically rare, and yet very impactful event, shouldn’t we keep an eye on them?

Jens H. Kuhn

Virology Lead (Contractor)

National Institute of Allergy and Infectious Diseases Integrated Research Facility at Fort Detrick

Frederick, Maryland

A new war on bats has been waged for more than 20 years. In its current form, the war is being waged primarily by scientists, but it has been picked up by decision makers and sometimes even the public, leading to a series of misunderstandings, myths, unsupported statements, and partial truths that have been interwoven to present a picture of bats as the most dangerous, filthy, pathogen-harboring organisms on earth. Few voices are rising in defense of bats, and Merlin Tuttle, speaking through his article, provides one of the most prominent, presenting real evidence to counter the case against them.

I concur with his arguments one by one. The alarmist tone employed every time a “new” emerging disease is reported makes it sound as if the end of civilization is at hand—but that is very far from the truth. On the basis of conjecture and misinterpretation of nonexistent evidence, bats are blamed time and again, from Ebola to SARS to MERS. Knowingly and intentionally attaching the adjective “deadly” to a virus raises the alarm even more. And once the alarm is raised, health officials and other government leaders start paying attention, and obviously more money is thrown at the “deadly problem.”

Furthermore, the emerging infectious diseases community is knowingly and intentionally promoting this false, unfair, destructive reputation of bats. Viruses and bacteria themselves are unfairly treated. The overwhelming majority of viruses and bacteria are beneficial, and the very balance of life on earth depends on their presence and interactions with other living things. I can draw on a number of lines of research to support this case. For example, it has been learned that one milliliter of seawater contains as many as 10 million viral particles, yet no one is saying we should dry up the ocean. Similarly, one kilogram of marine sediment contains one million different viral sequences, and no one is fighting to keep humans away from the sea. Finally, the human navel has been found to contain at least 2,368 bacterial phylotypes. If we employed the same rhetoric and flawed reasoning that Tuttle points out, the consequences would be devastating for the ocean, for our lifestyles, and for our belly buttons.

So it is time to set the record straight and let bats be what they are: some of the most beneficial organisms on the planet for human and natural interests equally.

Rodrigo A. Medellin

Institute of Ecology

National Autonomous University of Mexico

Mexico City

During the past decade, concern about the role of bats in spreading diseases has increased dramatically due to the recent SARS and Ebola outbreaks. I will not repeat the many facts that Merlin Tuttle has already provided to counter claims that are unsupported by robust empirical evidence and that raise concern for the future of bats. Unfounded fear can result in excessive demands for wildlife disease management, with detrimental results such as weakened legal protection for animals and unnecessary animal deaths.

Human societies have been transforming the landscape of the planet so intensely that we are now living through what we call “Global Change,” which includes massive destruction and fragmentation of natural habitats, the elimination of numerous species, and a decline in many ecosystem services on which we rely. This new situation poses numerous challenges, including some threats to human health. And this is the point where bats become part of the story. Unfortunately, as Tuttle mentioned, they are continuously identified as the main virus reservoirs and described as an extraordinary threat to human health, even though the evidence of their role is often open to question.

Research on this topic should be sensitive to the fact that human-bat relationships are extremely complex, involving factors ranging from the importance of ecosystem services to the myths, legends, and fears surrounding bats. This affects not only what research is performed but how it is communicated to the public.

Although further research to assess the real disease risk is advisable, greater attention must also be paid to science communication to avoid misinformed risk perception that could undermine long-term conservation efforts. Whereas fear is easy to create and difficult to eliminate, it requires time and persistence to inculcate love and respect for nature. Thus, in any publication, scientific or not, it is not enough to superficially mention some of the ecosystem services bats provide. Benefits need to be given enough attention to provide a comprehensive picture of the human-bat relationship.

We should never forget the lasting consequences of our messages and how journalists and the public will interpret our words. In a world experiencing the rise of social media as the most powerful tool for science communication, it is time for scientists to make an extra effort to consider the social implications of our discoveries. We can no longer ignore the public response.

Adrià López Baucells

PhD student in bat ecology and conservation

University of Lisbon

Portugal

Boundaries for biosecurity

In “Biosecurity Governance for the Real World” (Issues, Fall 2016), Sam Weiss Evans offers three plausible ways to correct poor assumptions that frame so-called “dual-use research of concern.” I want to focus on one of these ways: that security itself should not be considered in isolation from the broad range of values that motivate the quest for knowledge.

Much of dual-use research of concern touches on biodefense research: research to prevent a naturally occurring or intentionally caused disease pandemic. Indeed, much of the appeal of the 2011 avian influenza studies that Evans discusses reduces to claims about the value of this research in saving lives that may be taken in the future by influenza. In saying this, advocates of such research point out, I think correctly, that security is best taken as a broad appeal to protecting value, such as the value of human life, against loss.

This suggestion is a heresy for biosecurity and biodefense. By heresy, I mean an idea that runs contrary to established doctrine. That isn’t intended as a critique of Evans—indeed, the intent is quite the contrary. The idea stands as an invitation to consider the political philosophy of science and to view security in the context of a range of other values.

The heresy emerges because the unspoken calculation that endures behind dual-use research of concern assumes that it is, on balance, worth pursuing. To echo the National Academies’ 2003 report Biotechnology Research in an Age of Terrorism, often referred to as the “Fink Report,” modern virology has given us great benefits. But as Regina Brown and I argued in “The social value of candidate HIV cures: actualism versus possibilism,” published in 2016 in the Journal of Medical Ethics, these benefits are at best incompletely realized and often poorly distributed. A large portion of the world’s poor lacks access to modern biotechnology, and the future does not promise a positive change in this disparity. Even in the United States, the significance of different threats to human health and well-being—to the security of human health against loss—is stratified between the research haves and have-nots in ways that don’t reflect the average person’s lived experience. We live in a world where Americans lose as many life years annually to suicide or migraines as they do to HIV/AIDS, yet as my research has found, these diseases differ in one key institutional driver—funding—by more than a hundredfold.

None of this is to suggest that we should abandon influenza research; doing so would surely cost many lives by delaying the development of vaccines and therapeutics against a deadly infectious disease. There is more to pursuing knowledge, moreover, than saving lives. But the upshot of Evans’s analysis is that we always restrict life-saving science: the unspoken calculation is always whose life we save with research.

The most recent deliberations on dual-use research of concern, conducted by the National Science Advisory Board for Biosecurity, made headway into this heresy by claiming that there are some types of research that are, in principle, not worth pursuing because the potential risks do not justify the benefits. Left undiscussed was whether the institution of science is adequately structured to promote human security. Evans calls attention to this heresy in biosecurity debates, and I sincerely hope people engage this matter thoughtfully.

Nicholas Evans

Department of Philosophy

University of Massachusetts, Lowell


Correction

The article “Seventeen Months on the Chemical Safety Board” by Beth Rosenberg in the Summer 2016 edition of Issues contained several errors. The public hearing said to have taken place in October 2014 actually took place in January 2014; and the public hearing in Richmond, California, said to have taken place in February 2013 actually took place in April 2013 (there was no February 2013 public meeting). A complete transcript of the April 2013 meeting is available on the Chemical Safety Board’s website (http://www.csb.gov/assets/1/19/0503CSB-Meeting.pdf). Also, the article misstates how National Transportation Safety Board (NTSB) leaders are selected. The president appoints members to five-year terms and chooses a chair and vice-chair to serve for two-year terms. Tradition at NTSB is that the president seeks the consent of the other board members when deciding whether to extend the terms of the chair and vice-chair. These errors have been corrected in the online version of the article. In addition, one of the editors of Issues, Daniel Sarewitz, is the brother-in-law of the author of the article. The article meets the standards for publication in Issues.

The Science Police

In 2013, Canadian ecologist Mark Vellend submitted a paper to the journal Nature that made the first peer reviewer uneasy. “I can appreciate counter-intuitive findings that are contrary to common assumption,” the comment began. But the “large policy implications” of the paper and how it might be interpreted in the media raised the bar for acceptance, the reviewer argued.

Vellend’s finding, drawn from a large meta-analysis, challenged a core tenet of conservation biology. For decades, ecologists have held that the accelerated global rate of species extinctions—known as the biodiversity crisis—filtered down to local and regional landscapes. This belief was reinforced by dozens of experimental studies that showed ecosystem function diminished when plant diversity declined. Thus a “common assumption” was baked into a larger, widely accepted conservation biology narrative: urbanization and agriculture, among other aspects of modern society, severely fragmented wild habitat, which, in turn, reduced ecological diversity and eroded ecosystem health.

And it happens to be a true story, just not the whole story, according to the analysis Vellend and his collaborators submitted to Nature. In actuality, plant diversity at localized levels had not declined, they found. To be sure, in landscapes people had exploited (for example, for agriculture or logging), habitat became fragmented and nonnative species invaded. But there was no net loss of diversity in these remnant habitats, according to the study. Why? Because as some native species dropped out, newer ones arrived. In fact, in many places, species richness had increased.

The peer reviewer did not hide his dismay:

Unfortunately, while the authors are careful to state that they are discussing biodiversity changes at local scales, and to explain why this is relevant to the scientific community, clearly media reporting on these results are going to skim right over that and report that biological diversity is not declining if this paper were to be published in Nature. I do not think this conclusion would be justified, and I think it is important not to pave the way for that conclusion to be reached by the public.

Nature rejected the paper.

Although it was published soon after by the Proceedings of the National Academy of Sciences—without triggering media fanfare, much less public confusion—the episode unsettled Vellend, who is an ecology professor at the University of Sherbrooke, in Quebec. His uneasiness was reinforced when he presented the paper at an ecology conference and several colleagues voiced the same objections as the Nature reviewer.

Vellend discusses all this in an essay that is part of a collection titled Effective Conservation Science: Data Not Dogma, to be published by Oxford University Press in late 2017. His experiences have left him wondering if other ecology studies are being similarly judged on “how the results align with conventional wisdom or political priorities.”

The short answer appears to be yes.

In their introduction to the upcoming book, the ecologists Peter Kareiva and Michelle Marvier write: “Working as editors for some of the major journals in our field, we have seen first-hand reviewers worrying as much about the political fallout and potential misinterpretation by the public as they do about the validity and rigor of the science.”

The book tackles the philosophical and scientific issues that have divided the field of conservation biology in recent years. A major theme in the fractious debate is the underlying tension between science and advocacy, both of which are coded equally into the DNA of the field. As a 2013 article in the Chronicle of Higher Education noted, the schism is fundamentally about “a science grappling with its identity,” or as I put it in an article in the Winter 2015 Issues in Science and Technology, a “battle for the soul of conservation science.”

To a certain degree, the rift is also a power struggle. The ecologists who founded conservation biology in the 1980s have served as influential advocates for the preservation of endangered species and biodiversity. They were instrumental in elevating the issue to the top of the global environmental agenda. These well-known scientists, such as E. O. Wilson, Michael Soulé, and Stuart Pimm, have strong feelings about the best way to achieve what they believe should be a nature-centric goal. They are protective of the successful cause they launched and, unsurprisingly, dubious of new “human-friendly” approaches to conservation that Kareiva and Marvier, among others, have proposed in recent years.

If conservation science is in service to an agenda, which it is regardless of the approach, then it seems inevitable that research would at times be viewed through a political or ideological prism. The Nature reviewer’s politically minded comments provide a case in point. When I talked to Vellend about this, he shared a haunting concern. “The thing that’s worrisome to me, as a scientist, is that here’s one person [the reviewer] who actually, to their credit, wrote down exactly what they were thinking,” he said. “So how many times has someone spun their reviews a little to the negative, with those sentiments exactly in mind, without actually stating it?”

To what extent unconscious or veiled bias influences scientific peer review is impossible to know, of course. But Vellend has reason to worry about his discipline. In 2012, the editor of the field’s flagship journal, Conservation Biology, was fired after she asked some authors to remove advocacy statements they had inserted into their papers. As Vellend reminded me: “People get into our field, in part, with a politically motivated goal in mind—to protect nature and a greater number of species.” That’s totally fine, even admirable, but it also goes to the heart of the conflict roiling conservation biology: how to reconcile its purpose-driven science with its values-driven mission.

Vellend appears to have been caught in the crossfire. His paper revealed a nuanced, complex picture of biodiversity that some ecologists feared would undermine the conservation cause. In case Vellend didn’t get the message, a fellow scientist has gone even further and repeatedly harangued him by e-mail. At one point, Vellend asked the individual to desist, unless his tone became more constructive. The answer was disconcerting and a little creepy: “You better get used to it, because you’re going to be hearing a lot more from me,” the person responded by e-mail. “Consider me your personal scientific watchdog.”

In an article in the Winter 2017 Issues in Science and Technology, I reported on the different ways journalists and researchers working in the scientific arena are hounded and sometimes smeared by agenda-driven activists. A similar and equally pernicious activity, though not much discussed publicly, is the way scientists are aggressively policed (and also sometimes unfairly tarred) by their peers. It’s the ugly side of science, where worldviews, politics, and personalities collide.

It seems that highly charged issues, such as climate change, engender the most active policing in the scientific community and that the intensity of this policing is proportional to the perceived influence of the person on the receiving end of it. I’ve also observed another common strand: those in the scientific community who become preoccupied with the public interpretation or political implications of scientific findings tend to deputize themselves as sheriffs of scientific literature and public debate.

Although this appears to explain Vellend’s experience, he considers himself one of the lucky ones. “My story stops a few steps short of the horrors I’ve heard,” he says.

This is true. On one extreme end of the policing spectrum sit people whose reputations have been shredded. Elsewhere along this continuum are those who have been blacklisted from academic meetings, bullied on social media, and slimed in the blogosphere.

Why does it happen, and what is the impact on science?

The academic climate

Until recently, Roger Pielke Jr. spent most of his career teaching in the Environmental Studies program at the University of Colorado, Boulder. An interdisciplinary scholar, his research for over two decades was at the intersection of public policy, politics, and science—largely in the treacherous climate arena, where every utterance can be weaponized for rhetorical and political combat.

Thus, it is perhaps not surprising that Pielke has come to be defined not so much by his actual research, but by his public commentary and barbed jousting with peers and the reaction that has spawned on Internet forums, influential blogs, and elsewhere.

To the casual observer, his story is a puzzling contradiction. Pielke is among the most cited and published academics on climate change and severe weather. Yet he says he has been told by a National Science Foundation (NSF) officer: “Don’t even bother submitting an NSF proposal, because we won’t be able to find a reviewer who will give you a positive score.”

Pielke defies categorization. He believes that global warming is real and that action to curtail human emissions of greenhouse gases is justified. He is in favor of a carbon tax. At the same time, he has for many years openly feuded with climate scientists. As Science magazine noted in 2015, “Pielke has been something of a lightning rod in climate debates, sometimes drawing attacks from all sides as a result of his view on research and policy.” The controversy centers on his research finding that although the climate is warming, this does not necessarily result in the increased frequency or severity of extreme weather disasters.

If you canvass scholars in the environmental and climate policy world, a number of them will say they cross swords with Pielke, but they also respect him and teach his work. “I disagree with him about many things, but think he is someone who is worth reading and taking seriously,” says Jonathan Gilligan, an environmental sciences professor at Vanderbilt University. “I teach his book The Climate Fix every year precisely because I want my students to read someone who is smart and disagrees with me, in order to encourage them to think for themselves.”

This intellectual caliber is presumably what led the statistics whiz Nate Silver to hire Pielke in 2014 to write for FiveThirtyEight, the data journalism website that Silver created that year. Pielke’s first column questioned the strength of the evidence supporting the widely shared assertion among climate scientists that extreme weather disasters had become more prevalent in recent decades because of human-caused climate change. The uproar in the climate advocacy community was immediate and furious. Although Pielke had previously presented the same argument in the scholarly literature and in comments to science reporters, advocates were seemingly incensed that this perspective would now receive widespread public attention on Silver’s popular new website.

The Center for American Progress, a left-leaning Washington, DC-based think tank, used its influential blog, Climate Progress, to spearhead a campaign to discredit the column and Pielke’s reputation (something its lead blogger had already turned into a pet cause). The effort worked. After it became clear to Pielke that FiveThirtyEight would not let him write about climate issues anymore, he left the site within months of being hired. When news of his departure became public, the editor of the center’s blog bragged in an e-mail (disclosed in a 2016 WikiLeaks dump) to one of its wealthy donors: “I think it’s fair [to] say that without Climate Progress, Pielke would still be writing on climate change for 538.”

The episode followed on the heels of Pielke’s clash with John Holdren, then President Obama’s science advisor. Holdren had testified to Congress that on the issue of climate change and severe weather, Pielke’s interpretation of the data was “not representative of mainstream views on this topic in the climate science community.” Pielke found this offensive. He responded on his blog: “To accuse an academic of holding views that lie outside the scientific mainstream is the sort of delegitimizing talk that is of course common on blogs in the climate wars.” It is perhaps understandable why Pielke bristled at being characterized as outside the “mainstream.” His harshest critics have branded him a climate “skeptic” or “denier,” a pejorative tag that has made its way into blogs and some media outlets.

The cumulative effect of the controversies and assault on his reputation by detractors has taken a personal and professional toll. He’s become radioactive even to those sympathetic to him: “I’ve had people tell me, ‘I can’t be seen working with you, because it might hurt my career.’” Pielke mentions how one “very close colleague” said he had wanted to come to his defense on social media, then admitted: “But I don’t want them [Pielke’s critics] coming after me.”

“I get it,” Pielke says.

Unable to escape the tar flung at him in the climate world, he’s recently pivoted from climate research to sports governance, also at the University of Colorado. “Yeah, I have a new career now,” Pielke says. “I’m sitting in the athletic department. I’ve moved on.” Still, Pielke finds it difficult to let go of his old life completely. Several months ago, he testified before Congress about his climate research and the efforts to silence him. He also remains an active participant on social media, with about a quarter of his tweets climate-related.

In December 2016, he penned an op-ed for the Wall Street Journal titled, “My Unhappy Life as a Climate Heretic.” In the column, Pielke said that he is on the right side of the climate-severe weather debate in terms of where the evidence lies, but that this is an “unwelcome” view because it is perceived to be undermining the climate cause. He went on to say that the “constant attack” on him over the years is a form of bullying that was intended to “drive me out of the climate change discussion.”

After Pielke’s op-ed was published, Gavin Schmidt, a climate scientist and director of the NASA Goddard Institute for Space Studies, essentially rolled his eyes on Twitter. He said that Pielke “playing the victim card” doesn’t cut it and that, in any case, “what goes around, comes around.” Schmidt’s tweet (which was part of a larger thread) suggested that Pielke’s situation was not due to qualms about his research; it was more a karmic reckoning.

Michael Tobis, another climate scientist who has locked horns with Pielke, posted a more judicious response on a widely read climate science blog. “Roger is a problematic figure, who is quick to criticize while being quick to take offense,” Tobis wrote. “He’s often right and often wrong, which can be a useful role in itself, but he ought to be able to take as well as he gives if he wants the net of his contribution to be constructive.”

These views by Schmidt and Tobis are echoed by others in the climate science community. To understand why Pielke has experienced such a backlash, it is necessary to rewind the story more than a decade, to a time when climate scientists were feeling as deeply and unfairly maligned as Pielke feels today.

The bad old days

In the 1990s and 2000s, as concerns about global warming increased and environmentalists made it their signature issue, climate scientists found themselves thrust into a contentious, high-stakes debate. The planetary implications of their research and the staggering policy and political challenges it presented turned climate science into an academic war zone.

The field came under fire from conservative lawmakers, dissident scientists, industry-funded think tanks, and a small but forceful army of bloggers and pundits whose motivations ranged from honest skepticism to partisan ideology. In this hostile milieu, legitimate, intellectually grounded critiques of climate research and policy were viewed with much suspicion in parts of the climate science community.

When there’s a war, people are expected to choose sides: Are you with us or against us?

Against this backdrop, a group of climatologists in the mid-2000s started a blog called Real Climate. (The blogosphere had then just begun to flourish as a vibrant new medium on the Internet.) The site quickly became a locus for smart and often technical commentary on various issues in climate science. It wasn’t long before the scientists managing Real Climate began taking issue with how politicians, pundits, and journalists mangled climate science.

This was an understandable impulse on their part. Climate science during this time was routinely distorted and derided by partisan, agenda-driven actors. Who better to debunk misrepresentations of the field than those who knew the science best? But Pielke cautioned the climate science community not to be drawn into rhetorical and political battles over the science. Climate scientists who did this engaged in what he termed “stealth issue advocacy,” which he contended would undermine trust in climate science. Pielke frequently made this argument on his own university blog (then called Prometheus) and expanded on the theme more generally in a 2007 book titled The Honest Broker: Making Sense of Science in Policy and Politics.

Pielke also made his point in the busy comment threads at the Real Climate blog. The scientists managing the site were highly engaged in reader conversations; there were numerous spirited, but civil, exchanges between the Real Climate scientists and Pielke in the mid-2000s. Here’s one representative comment from Pielke in November 2005, directed to Gavin Schmidt, a cofounder of Real Climate: “My objection with RC [Real Climate] is not that you guys act politically, but that you act politically but claim not to be. This mismatch is what I have argued is a factor that contributes to the politicization of science.”

In the ensuing exchange, Schmidt and other Real Climate scientists firmly pushed back against this charge, as they had done on previous occasions. They felt that Pielke was trying to elbow them out of the everyday conversation on climate change. What’s more, he was doing this at a time when there were active ideological and politically driven efforts to delegitimize climate science.

It’s important to recall this larger context, because the politics of climate change grew even uglier in the mid-to-late 2000s. This was especially the case in the United States, where conservative politicians and pundits became increasingly contemptuous of climate science, with some referring to global warming as a “hoax.” By the end of the decade, many climate scientists felt so embattled that they lumped all their critics together in one figurative box labeled enemies.

We know this because of what happened in 2009, when thousands of e-mails from climate scientists all around the world were swiped from a university server and posted on the Internet. The result was an unfiltered look into the minds of climate scientists, who by this time seemed to have collectively hunched into a defensive crouch. The e-mails revealed their mounting frustration, internal scientific disagreements, and push-back strategies, all of which the media dissected and their most hostile opponents relished.

After an “exhaustive review” of the stolen e-mails, the Associated Press concluded that, among other things, climate scientists “stonewalled” Freedom of Information Act requests and “discussed hiding data,” but that none of the messages called into question the fundamental science of human-driven climate change. Additionally, the news service said that the e-mails also revealed climate scientists to be “keenly aware of how their work would be viewed and used, and just like politicians, went to great pains to shape their message.”

One particularly blatant example of this was a discussion between several climate scientists on how to keep certain research papers with which they disagreed out of a major international report on the state of climate science. They joked they would do this “even if we have to redefine what the peer review literature is!”

To Pielke, “Climategate”—as the episode was dubbed in the media—confirmed everything he’d been saying about “climate scientists hiding a political agenda in the cloth of science.” He excoriated the climate science community on his blog. He unloaded on them in his book The Climate Fix, published in 2010, which lays out his formula for energy decarbonization. He did so in damning language, broadly characterizing climate science as a “fully politicized enterprise.” He repeatedly described climate scientists as “activist scientists.”

To many climate researchers who had already endured years of venomous politically motivated attacks on their integrity, this was beyond insulting. To them, the real activists were the so-called climate “skeptics” in the blogosphere and the partisan commentators who had taken the e-mails out of context and used them as kindling to fan the toxic fires of the climate debate. Pielke, in the minds of climate scientists, was throwing gasoline on the flames.

It was a point of no return. That year, Pielke received a taste of what was to come: during a university speaking tour for The Climate Fix, he learned that some climate scientists were pressuring administrators to cancel his talks. At one such event at the University of Michigan, the professor who organized it was asked by her colleagues why she had given a venue to a “climate denier.” Some science faculty members complained to the dean. Pielke’s talk, which was about energy policy, went off without a hitch.

But the relationship between him and the climate science community grew stormier. It also got personal, as some climate scientists resolved to constrain and muddy his public profile. Respected in his field, Pielke had become a go-to expert in the media. That incensed some climate scientists and their allies; several of them lashed out at reporters privately (and sometimes publicly) and chastised editors and reporters for using Pielke as a source. One prominent long-time climate reporter started jokingly referring to Pielke as “he who shall not be named.”

When I spoke at length with Pielke for this article, he compared his experiences to a recent episode involving Bret Stephens after he joined the New York Times roster of opinion columnists in April 2017. In previous years, from his perch on the Wall Street Journal op-ed page, Stephens had penned numerous columns disparaging climate science in terms even more inflammatory than Pielke’s. Stephens also downplayed the risks posed by climate change and doubted that humans were largely responsible for it. So after he was hired by the Times, the newspaper was inundated with angry complaints. Numerous climate scientists announced on Twitter that they were cancelling their subscriptions in protest. An online petition circulated calling for the Times to rescind its hiring of Stephens, who has since modulated his stance on climate change.

Watching this from the sidelines, Pielke saw similarities with what happened to him at FiveThirtyEight and the larger crusade to silence his voice. “This is not an argument about climate science or even climate policy,” Pielke says. “This is an argument about who gets to speak in public on these issues.”

Controlling communications

There might be something to this. Mike Hulme, a British scholar and scientist who heads the Department of Geography in the School of Global Affairs at King’s College London, told me that he’s been “blackballed at some meetings, because on issues related to climate communication, I’ve been deemed not helpful.”

This is a head-scratcher. Hulme’s 2009 book, Why We Disagree About Climate Change, is highly regarded as an insightful examination of the fraught cultural and sociopolitical dynamics of the climate debate. He is considered a thoughtful contributor to the field of climate communication. But he has also been critical of some of the social science research underpinning recent climate messaging campaigns that emphasize the authority of climate science, an approach he doesn’t think will advance the public debate.

This view has earned Hulme the cold shoulder from some peers, who would seemingly prefer he keep quiet. Absent that, periodic efforts have been made to freeze him out of the climate debate. The most recent attempt occurred after he was invited to participate in a conference on climate communication to be held in Austria in September 2017. Experts at the gathering will offer suggestions on “how to talk about climate change and climate protection,” according to the conference website.

Hulme recently learned that a member of the conference steering committee—a well-known academic in the field of climate communication—criticized him after his name was floated as one of the prospective panelists. In an e-mail to the steering committee, the academic wrote: “To be honest, I found Hulme’s recent work to be disappointingly ambivalent, ambiguous, and sometimes downright unhelpful. I know I’m not the only one in the climate community who thinks this. I therefore am less certain that he’ll provide the clarity our audience might expect.” The steering committee apparently disagreed. They voted to invite Hulme, so he will attend the meeting, presenting his views on climate communication, no doubt to the consternation and disapproval of some in the audience.

Hulme has observed other forms of policing that seem intended to foreclose certain lines of scientific inquiry. He points to a widely discussed and controversial paper published a few years ago by several prominent researchers who argued against climate scientists investigating the phenomenon generally identified as a “pause” or “slowdown” in the rate of global warming. The authors of the paper asserted that the “pause” was a “contrarian meme” that had seeped into the climate science community.

Never mind that there were actual short-term climate variability trends that had already caught the attention of scientists. The paper implied that climate scientists were “rolling over and having their bellies tickled by these [contrarian] bloggers,” Hulme says. “That’s a soft form of policing, because it’s criticizing scientists who are doing what they are supposed to do. If there is some interesting or unanticipated curious phenomenon in the physical world, well, you should go and investigate and find out why.”

Hulme wasn’t the only one who felt this way. Numerous climate scientists, including Richard Betts, Head of Climate Impacts at the UK’s Met Office, were astonished at the suggestion in the paper that a main avenue of climate research (natural variability) should be ignored. When I revisited the controversy with Betts during a recent e-mail conversation, he said: “Even if scientific discussion of the ‘pause/hiatus/slowdown’ is (rightly or wrongly) perceived by the public and politicians as considering a ‘contrarian meme,’ should this matter? Isn’t investigating all genuine questions simply part of being credible, objective scientists?”

In an ideal world, it shouldn’t matter. But in the zero-sum world that governs the climate debate, every blog post, every op-ed, every tweet, and every study tends to be viewed through an us-against-them lens.

As I was writing this article, a fresh illustration of this mindset jumped out at me. Clifford Mass, a professor of atmospheric sciences at the University of Washington, recently posted an entry on his personal blog criticizing a Seattle Times front-page article that attributed the death of a 72-year-old pine tree in the region to climate change. Mass methodically laid out why he believed this was incorrect. The article, he said, was another “unfortunate example” of the media “exaggerating the impacts of global warming.” (In case you’re wondering, Mass has often said that human-caused global warming is real, very serious, and should be tackled.)

Mass, like the climatologists at Real Climate, has made a hobby out of fact-checking the media. But whereas Real Climate has periodically trained its eye on science distortions occurring in the partisan political and media realm, Mass has focused on mainstream media hyperbole. This has not won him any popularity contests.

Just the opposite, it seems. Mass discussed the blowback he’s received in a “personal” note at the end of his post on the Seattle Times article. “Every time I correct misinformation in the media like this,” he wrote, “I am accused of being a denier, a skeptic, an instrument of the oil industry, and stuff I could not repeat in this family blog. Sometimes it is really hurtful.”

Mass went on to discuss other experiences that included complaints about him within the University of Washington (UW) after he’d called out various hyped stories on climate effects. “One UW professor told me that although what I was saying was true, I needed to keep quiet because I was helping the ‘skeptics.’ Probably not good for my UW career.”

When messaging and science collide

Ecologists who have been critical of traditional conservation approaches, such as the focus on large wilderness preserves or on the primacy of biodiversity, have faced similar blowback from their peers. You’re not helping, they are told.

In the introduction to Effective Conservation Science: Data Not Dogma, Kareiva and Marvier write: “In a field that frequently relies upon fear appeals to motivate action, data that run counter to doom-and-gloom messages can be especially unwelcome.”

In part, this owes to a long-standing reliance on crisis imagery and rhetoric to highlight environmental issues. In addition, as the ecologists Brian Silliman and Stephanie Wear write in their essay in the forthcoming book, “many in the conservation community fear that admitting some key principle or strategy is wrong will embolden those in opposition to conservation.” This seems to explain the negative reaction to Mark Vellend’s paradoxical study on biodiversity, which a number of his peers thought would undercut the conservation cause.

A similar impulse appears to be driving some of the policing of scientists in the climate arena. Such behavior is antithetical to the scientific enterprise, Mike Hulme, the British researcher, said to me in a follow-up e-mail exchange: “Is the purpose of science to find evidence that supports a particular advocacy campaign or a policy course or ideological position—to keep ‘on message’? Or is the point of science to investigate (imperfectly, but systematically) how the physical world works? If the latter, then wrinkles in science, conflicts and arguments, due skepticism of previously established findings—all these things are essential.”

From the Hill – Spring 2017

Trump administration outlines budget shakeup

The Trump administration’s first budget announcement provides a partial picture of what the president has in mind for science and technology spending, and the first glimpse is not encouraging. This initial “skinny budget” calls for cuts in several applied technology programs in energy and manufacturing as well as in several climate change research efforts. More surprising are the calls for significant reductions in fundamental scientific research at the National Institutes of Health and the Department of Energy, activities that have traditionally attracted bipartisan support.

There are some noteworthy exceptions, however. The National Aeronautics and Space Administration and competitive agricultural research grants fare relatively well. And on the defense side, the National Nuclear Security Administration would also see a sizable funding boost.

Quite a bit of information – indeed, most of what typically makes up the substantive budget – is still missing. For starters, the skinny budget includes little more than a handful of bullet points for most science agencies. Far more detail will come in the full budget request, expected to arrive in late April or early May. In addition, scant attention is paid to two major supporters of university research: the Department of Defense (DOD) science programs and the National Science Foundation (NSF). DOD’s science and technology activities might be expected to fare well in the end, given the administration’s focus on increasing defense spending, but that is not guaranteed. The NSF budget is not even mentioned.

The parameters of the proposed budget are defined by the current discretionary spending caps, which dictate the size of appropriations each year and within which nearly all defense and nondefense science and technology investments are funded. Under current law, both defense and nondefense spending were projected to receive small reductions in FY 2018. But as the Trump administration had announced earlier, it wants to shift $54 billion (about 11%) of the nondefense budget to the defense budget. Such a move would leave nondefense spending nearly 25% below FY 2010 levels in constant dollars, while allowing defense to recover to its FY 2010 appropriation. The administration is proposing something similar for the remainder of FY 2017: a $15 billion cut to the nondefense budget, and a $25 billion increase for defense.
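As a rough check on the “about 11%” figure, consider the arithmetic under the FY 2018 spending caps. The cap values in this sketch are our assumption, approximately the Budget Control Act’s post-sequester levels; they are not given in the budget documents summarized here.

```python
# Rough check of the proposed budget shift (cap values are assumptions,
# approximately the FY 2018 Budget Control Act post-sequester caps).
defense_cap = 549e9      # assumed FY 2018 defense cap, dollars
nondefense_cap = 516e9   # assumed FY 2018 nondefense cap, dollars
shift = 54e9             # proposed transfer from nondefense to defense

print(f"share of nondefense budget: {shift / nondefense_cap:.1%}")
# -> share of nondefense budget: 10.5%, i.e., the "about 11%" above
```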

Thumbnail sketches of the administration’s plans for some agencies are provided below. Unless otherwise noted, comparisons are against FY 2016 funding levels, the last year for which appropriations have been completed.

The FY 2018 National Institutes of Health (NIH) budget would be cut 19.8% below FY 2016 levels, to $25.9 billion. Notably, the budget promises “a major reorganization of NIH’s Institutes and Centers to help focus resources on the highest priority research and training activities,” including elimination of the Fogarty International Center, which specializes in international research programs, and consolidation of the Agency for Healthcare Research and Quality (AHRQ) into NIH. There is no further detail.

To put these reductions in some historical context, in FY 2013, the sequestration year, the NIH budget was cut by about 5%, which resulted in about 700 fewer research project awards that year, a one-year reduction of about 8%. The success rate for grant applications dropped to 16.7%, its lowest point in at least 20 years. It’s hard to know what the much larger Trump-proposed cuts might mean because no administration has attempted NIH cuts this large since AAAS began formally monitoring the budget process in 1976.
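For context, those two award numbers together imply the size of the baseline grant pool; this is a simple inference on our part, not a figure given in the text.

```python
# If 700 fewer awards amounted to an 8% one-year reduction, NIH was
# making roughly 700 / 0.08 new research project grants per year
# before sequestration.
fewer_awards = 700
fractional_drop = 0.08
print(round(fewer_awards / fractional_drop))   # -> 8750 awards per year
```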

Other health-related items include a proposed Emergency Response Fund for disease outbreaks, but there is no detail on funding or operations, and a pledge to “reform” the Centers for Disease Control and Prevention by establishing a $500-million block grant program for state public health challenges.

The budget proposes a mixed bag for the Department of Energy (DOE). As part of the overall increase in defense funding, it would boost the National Nuclear Security Administration, which takes a science-based approach to managing the nation’s nuclear weapons stockpile, by about 11%. The rest of the DOE science and technology budget would be substantially rolled back in pursuit of “increased reliance on the private sector to fund later-stage research, development, and commercialization of energy technologies.” The Advanced Research Projects Agency-Energy, which funds applied research and development aimed at high-risk energy technology challenges, would be eliminated. Applied technology programs in renewable energy, energy efficiency, fossil energy, nuclear energy, and grid-related R&D would also be cut back by an aggregate 45% in order to shift the focus to early-stage technology research.

In spite of this apparent desire to emphasize earlier-stage research, the budget recommends a 17% cut to DOE’s Office of Science, which is responsible for basic research. It does not specify which programs would be most affected, but it does state that existing facilities would be maintained. Of note, last year’s Heritage Foundation budget recommendations, which have had a major influence on the Trump administration’s budget proposals, called for a significant rollback or elimination of programs in nuclear physics, advanced computing, chemistry, materials science, and biological and environmental research. As with NIH, the proposed level of reduction in the Office of Science budget is unprecedented.

The National Aeronautics and Space Administration (NASA) fares relatively well in the budget request, with a roughly 1% cut below FY 2016 levels. Public-private commercial space programs and planetary science receive strong supporting language; the latter would grow by 16.6% above FY 2016 levels, with funding included for the Mars 2020 rover and a fly-by of Jupiter’s moon Europa. Unsurprisingly, the Obama administration’s favored Asteroid Redirect Mission (ARM) would be canceled. The RESTORE-L mission would be “restructured” to reduce costs, while aeronautics research would be trimmed by 2.5%, and NASA’s Office of Education, which manages Space Grant and other programs aimed at minority institutions, would be eliminated.

One noteworthy surprise: NASA’s Earth Science program, a major supporter of climate research, is reduced by only 6.3%, a much smaller cut than what is recommended for climate-related programs at the Environmental Protection Agency and the National Oceanic and Atmospheric Administration.

Within the Department of Commerce, the Census Bureau would be funded at $1.5 billion, compared with an FY 2016 level of $1.4 billion, in preparation for the 2020 Census. But the increases seem to end there. Reflecting the Heritage Foundation’s skepticism about federal technology programs, the budget eliminates the Hollings Manufacturing Extension Partnership, which seeks to boost the competitiveness of small- and medium-sized manufacturers. It’s an interesting choice given the administration’s rhetoric about the importance of reviving manufacturing. The program is located in the National Institute of Standards and Technology (NIST), which receives no other mention in the budget. However, we would not be surprised if the full budget includes further cuts to NIST, which was favored by the Obama administration and played a lead coordinating role in that administration’s National Network for Manufacturing Innovation.

The National Oceanic and Atmospheric Administration (NOAA) would also see its share of cuts. The budget would cut $250 million in “targeted NOAA grants and programs supporting coastal and marine management, research, and education.” Based on an initial reading of the budget, it appears that all $250 million would come from NOAA’s research office, which pursues an array of climate and ecosystem research and which would be effectively cut in half (though more will be revealed with the full request). Outside the research office, NOAA’s major satellite programs would be maintained, while the Polar Follow On initiative, which seeks to fill a looming weather data gap from polar orbit, would be zeroed out in favor of reliance on commercial data.

Information on specific Department of Agriculture research programs is sparse. The Agriculture and Food Research Initiative, a competitive grants science program, would continue at its FY 2016 level of $350 million, and the department’s statistical programs would be reduced by an unspecified amount.

As part of the administration’s efforts to roll back regulation, the Environmental Protection Agency’s (EPA) budget would be cut by 30% below FY 2016 levels, with its workforce reduced by 3,200 positions. Climate research programs and several other activities would be zeroed out. The US Geological Survey would receive “more than $900 million,” compared with an FY 2016 budget of $1.1 billion.

What happens next?

The administration can propose whatever it chooses, but it’s up to Congress to make the final funding decisions. As can be expected, Democrats have roundly criticized the request, with some taking issue with specific cuts for science agencies, while many Republicans have more or less embraced the proposal in its broader strokes. For instance, Senate Budget Committee Chairman Mike Enzi (R-WY) urged elimination of “government programs that are duplicative or not delivering results.” Senate Appropriations Chair Thad Cochran (R-MS) praised the budget’s prioritization of national security. The fiscally conservative House Freedom Caucus, of which President Trump’s budget director Mick Mulvaney was a founding member, also praised the budget.

But other key Republicans have offered some objections. Senate Armed Services Committee Chairman John McCain (R-AZ) put it bluntly: “It is clear that this budget proposed today cannot pass the Senate.” House Appropriations Chairman Rodney Frelinghuysen (R-NJ) was a bit more circumspect, simply reminding constituents that “Congress has the power of the purse. While the President may offer proposals, Congress must review both requests to assure the wise investment of taxpayer dollars.” Senate Agricultural Appropriations Chair John Hoeven (R-ND) was as blunt as McCain: “The president’s proposed budget reduction for agriculture does not work.” And Senator Rob Portman (R-OH) released a statement strongly opposing President Trump’s budget request to eliminate funding for EPA’s Great Lakes Restoration Initiative, saying he would fight to preserve the program and its funding.

As a first indication of how the Trump administration plans to redirect federal spending, the skinny budget deserves attention. But it leaves unanswered critical questions about entitlement spending, tax revenue, and plans for many agencies such as the National Science Foundation. It will also have to navigate the spending caps and other strictures contained in the existing Budget Control Act, which can be changed only with the support of 60% of the Senate. The road ahead is long and unpredictable.

— Matt Hourihan and David Parkes

Matt Hourihan is the director and David Parkes is senior project coordinator of the R&D Budget and Policy Program at the American Association for the Advancement of Science.

The Little Match’s Momentous Legacy

Enrico Fermi ranks among the twentieth century’s most famous physicists. He is known to the public for creating the world’s first nuclear reactor, and to fellow scientists for experiments revealing mysteries as diverse as the structure of the atom and the behavior of cosmic rays. He also worked out a way to predict when extraterrestrial creatures might arrive on Earth, devising a calculation method still used today for other elusive questions.

Students admired—even idolized—Fermi for his clear and eloquent lectures. Colleagues revered Fermi for the methodical way he first conceived, and then tested, his many scientific insights. Fermi was rare in his field for being creative both as a theoretical and as an experimental physicist. But he remains less well known as a personality or a public figure, unlike more famous peers such as Albert Einstein, J. Robert Oppenheimer, or Edward Teller. And yet he, as much as these better known figures, made crucial discoveries in modern physics.

Born in Rome in 1901, Enrico was the youngest of three children. His parents were upwardly mobile middle-class workers: his father was a manager on the state railway and his mother was an elementary school teacher. Their funds were limited, so to permit his mother to continue teaching, Enrico was sent to live with a wet nurse on a farm for his first years. He was shy and wary as a child, and emotionally private all his life. But the boy was also spunky, nicknamed Il piccolo fiammifero (“The little match”) for his fiery temper; his friends remembered his mischievous sense of humor and inventive practical jokes.

A diligent student with a photographic memory, Fermi taught himself physics, beginning as a teenager when he read a 900-page text published in 1840—in Latin. Also on his own, the young Fermi studied books and journals to learn quantum mechanics and general relativity. At age 17, his mathematical talents earned him entry to the prestigious Scuola Normale Superiore in Pisa. There he excelled, and he went on to become a professor of physics in Rome. In his early twenties, Fermi studied under Max Born at the University of Göttingen in Germany, in the company of three other future Nobel laureates: Werner Heisenberg, Wolfgang Pauli, and Paul Dirac. And while studying with Paul Ehrenfest in Leiden, Holland, he met Albert Einstein, who liked and admired Fermi.

Readers familiar with earlier books about Fermi will find little that’s new in Gino Segrè and Bettina Hoerlin’s recent biography, The Pope of Physics: Enrico Fermi and the Birth of the Atomic Age. For more details about the personal lives of Fermi and his family there is Atoms in the Family (1954), by his widow, Laura. Richer information about Fermi’s mastery of physics is in the classic Enrico Fermi, Physicist (1970) by his friend and colleague Emilio Segrè, a Nobel laureate and the uncle of Gino Segrè, the coauthor of the biography under consideration here. And for a true immersion in Fermi’s visionary scientific achievements, the University of Chicago published Enrico Fermi: Collected Papers. Also, the eclectic Fermi Remembered (2004), edited by Nobel laureate James W. Cronin, offers varied perspectives on his work from the centenary celebration of Fermi’s birth.

But what readers will find new and noteworthy in Segrè and Hoerlin’s book is the easy yet erudite way the authors bring Fermi and his work to life, in a crisp and authoritative style. The Pope of Physics brims with colorful details and instructive explanations, in a dramatic narrative that balances Fermi’s life and personality with his science. Segrè is an emeritus professor of physics and astronomy who has written about science and its practitioners. Hoerlin is a health policy administrator and college lecturer who grew up in Los Alamos, New Mexico, where the first atomic bombs were created and where her parents worked after fleeing Nazi Germany.

The authors are helpfully clear when explaining in lay terms not only the originality of Fermi’s scientific discoveries, but also how they fostered and fit into advances being made elsewhere. Fermi thrived when working with others, as he did during the Manhattan Project and earlier, at the University of Rome, with a team of physicists known as “the boys.” They called him “the Pope” because he was infallible, and “the Pope of Physics” endured as his nickname.

While holding a state professorship in Italy, Fermi remained carefully apolitical as Mussolini’s fascism gripped the country. But if timid in society, he was bold in the way he studied science. In a 1932 article, Fermi applied statistical mechanics to his study of atomic physics, explaining quantum field theory in a mathematical and conceptual framework. (The Nobel laureates Hans Bethe and Richard Feynman later said this paper’s “enlightening simplicity” was pivotal for their own discoveries and careers.) With his penetrating insights and incremental experiments, Fermi was able to show other physicists new ways to understand their own work. His research in Rome led Fermi to postulate, in 1934, a theory of β-decay to explain how electrons are emitted from atomic nuclei. He also demonstrated how atoms are transformed when he bombarded them with neutrons, and how slowing those neutrons increased their efficiency. This work earned Fermi the 1938 Nobel Prize in Physics.

This biography is rich with dramatic events, including Fermi’s chancy escape from fascist Italy. In the summer of 1938, Mussolini had begun an anti-Semitic campaign that endangered Fermi’s family because his wife was Jewish. That December, using the Nobel ceremony as their chance to travel abroad, the Fermis left Rome for Stockholm. Instead of returning to Italy after picking up the prize, they travelled to England before sailing to New York, where Fermi took up a visiting professorship he had arranged at Columbia University; it soon became permanent. At Columbia, Fermi began collaborating with the Hungarian-born physicist Leo Szilard, and by July 1939 the two had sketched the design of what would become the world’s first nuclear reactor.

Fermi loved both the science and the social setting at Columbia. During several summers in the 1930s, he visited the California Institute of Technology and the University of Michigan, and enjoyed America’s informality and social freedoms. He was eager to become a citizen, bought a house in suburban New Jersey, and relished speaking in American slang. But once his adopted country was at war with Germany and in a race with Hitler to build the first A-bomb, Fermi’s team moved to the University of Chicago. In December 1942, at the Metallurgical Laboratory (“Met Lab”) that was part of the Manhattan Project, his team created the first self-sustaining nuclear chain reaction.

The Chicago reactor proved that chain reactions can be sustained in uranium, and that the neutrons released could create plutonium—another, and ultimately more abundant, fuel for atomic bombs. The Fermis moved to the Manhattan Project’s secret nuclear weapons lab at Los Alamos in 1944. There, Enrico collaborated with some of the world’s greatest scientists, including Niels Bohr, John von Neumann, Hans Bethe, Isidor Rabi, Richard Feynman, and Edward Teller. Taking breaks from the lab’s intensive science and engineering work, the authors show, Fermi enjoyed friendships and frivolity: he mastered square dancing, loved skiing, and when hiking in the mountains sometimes sang Verdi arias or recited verses from The Divine Comedy.

Although Fermi preferred to avoid politics, he was drawn into debates about how nuclear weapons might be used and controlled. In 1945, he served on a scientific advisory panel that concluded there was “no acceptable alternative” to direct military use of the atomic bomb against Japanese cities. But Fermi was decisive in 1949 when, as an adviser to the US Atomic Energy Commission (AEC), he joined colleagues to oppose a crash program for a thermonuclear hydrogen bomb. Their report called it a “weapon of genocide.” Fermi and Rabi went further, writing in an appendix: “The fact that no limit exists to the destructiveness of this weapon makes its very existence and the knowledge of its construction a danger to humanity as a whole. It is necessarily an evil thing in any light.” Fermi opposed Edward Teller’s strident advocacy for the H-bomb’s development, and is remembered for his remark that Teller was the only monomaniac he knew with more than one mania.

Fermi was drawn again into politics in 1954, when he testified in support of J. Robert Oppenheimer, the wartime head of the Los Alamos Laboratory who was suspected of Communist sympathies and tried by the AEC in a secret hearing that cost him his security clearance. When Fermi was dying of stomach cancer later that year, according to one of this book’s surprising revelations, he told a former student and colleague, the physicist Richard Garwin, that in retrospect he wished he’d been more active in public affairs.

At the University of Chicago, Fermi attracted brilliant and original colleagues and students, six of whom won Nobel Prizes in Physics. Their research also flourished beyond the university at the AEC’s Argonne National Laboratory, in the Chicago suburbs, and at the nearby Fermilab, a laboratory specializing in high-energy particle physics and accelerator research.

Another of his legacies is the “Fermi problem,” or order-of-magnitude estimation, which blends his theoretical and practical talents. The technique yields an approximate answer for a situation with few empirical data points by multiplying together a series of independent estimates of quantities one does know. It thus breaks a difficult and elusive problem down into a longer series of easier ones. A Fermi problem’s answer is never exact, but by combining multiple variables it can narrow the scope of possible assumptions and solutions.
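To see how such an estimate works in practice, here is a minimal sketch in Python of the classic “How many piano tuners are in Chicago?” Fermi problem; the example and every input guess are illustrative, not drawn from the book.

```python
# A Fermi estimate: each factor is an easy independent guess, and the
# product gives an order-of-magnitude answer to a hard question.
# All numbers below are illustrative guesses.
population = 3_000_000            # people in Chicago
people_per_household = 2
pianos_per_household = 1 / 20     # one household in twenty owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * pianos_per_household
tunings_needed_per_year = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed_per_year / tunings_per_tuner_per_year))
# -> 75, i.e., on the order of 100 piano tuners
```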

The authors relate Fermi’s use of this method while at the Met Lab in 1944 to speculate about extraterrestrial life, and to wonder how likely it was that space invaders had already reached Earth. When Fermi concluded that aliens could be here, and wondered where they might be, Leo Szilard answered: “They are among us. They are called Hungarians.” Fermi used the method again to help estimate the force of the first nuclear explosion, which he witnessed in New Mexico in July 1945. By releasing strips of paper after the blast and watching how far they fluttered, he produced an estimate impressively close to the roughly 20 kilotons of TNT equivalent established by later, more accurate measurements. In 1961, the American astronomer Frank Drake extended Fermi’s method with a calculation—subsequently known as the Drake equation—that he used to estimate the number of possible civilizations in the Milky Way.
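For reference, the Drake equation has exactly this Fermi-problem structure, a chain of multiplied estimates:

$$N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L$$

where $R_{*}$ is the rate of star formation in the galaxy; $f_{p}$ is the fraction of stars with planets; $n_{e}$ is the number of potentially habitable planets per star with planets; $f_{l}$, $f_{i}$, and $f_{c}$ are the fractions of those planets on which life, intelligence, and detectable communication respectively emerge; and $L$ is the average lifetime of a communicating civilization. The answer is only as good as its inputs, which is precisely Fermi’s point: the equation organizes ignorance into separately improvable estimates.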

In all, Segrè and Hoerlin have produced a vivid and authoritative portrait, one that gives Enrico Fermi’s life and personality their due without ever losing sight of his science.

Philosopher’s Corner: Does This Pro-science Party Deserve Our Votes?

Philosophers and scientists have long struggled to get the relationship between science and politics just right. The balance of opinion suggests that the two should periodically interact for mutual benefit but otherwise keep a studied distance from each other. However, what if a political party came along that put the promotion of science and technology at the top of its agenda? How would that work? And would it be a good thing? For better or worse, this is already happening in the United States. It is the Transhumanist Party, fronted by Zoltan Istvan, which consistently polled fifth during the 2016 presidential campaign.

Istvan is not himself a scientist. In fact, he holds a degree in philosophy and religion from Columbia University. And he has never held elective office or established any sort of political track record. He is best known as the author of the award-winning science fiction novel The Transhumanist Wager. The novel provides a sense of Istvan’s motivation for starting a science-based party. The Transhumanist Wager is a bit like Pascal’s Wager, which is a philosophical argument for believing in God on the grounds that if you don’t and the deity happens to exist, then you might be condemned to eternal damnation, whereas if you do believe and turn out to be wrong, you likely won’t have lost much, if anything at all. The argument is meant to persuade by highlighting the unacceptable level of existential risk assumed by atheists.

Istvan makes a similar pitch, but now aimed at secularists who nevertheless hanker for what religion has traditionally promised. He argues that if you find immortality desirable, yet you do nothing about it in this life by promoting the relevant science and technology, then if God doesn’t exist, you most certainly won’t become immortal.

Istvan drove home the point during the campaign by driving a coffin-shaped bus across the United States, culminating in the Martin Luther-like gesture of presenting a Transhumanist Bill of Rights on the doorstep of the US Capitol in Washington, DC. Along the way, he made many creditable radio and television appearances and wrote a number of “on the road” articles for online publications, including the Huffington Post. Istvan even caught the eye of the leading third-party presidential candidate, the Libertarian Gary Johnson, who considered making him his running mate.

What Istvan offered voters was a clear vision of how science and technology could deliver a heaven on earth for everyone. The Transhumanist Bill of Rights envisages that it is within the power of science and technology to deliver the end to all significant suffering, the enhancement of one’s existing capacities, and the indefinite extension of one’s life. To the fans whom Istvan attracted during his campaign, these added up to “liberty makers.” For them, the question was what prevented the federal government from prioritizing what Istvan had presented as well within human reach.

Perhaps predictably, Istvan tended to downplay the risks associated with many of the proposed treatments and techniques that would deliver the promised goods. Instead, he appealed to the libertarian streak in most of his followers, arguing that the government spends too much on the military. This serves only to increase the level of risk to which people in the United States are exposed, which in turn provides a pretext for the curtailment of their liberty.

Istvan’s Transhumanist Party is, in equal measures, philosophically interesting and problematic. Interestingly, his candidacy was not endorsed by any major scientist or scientific group. But none condemned it, either. This studied silence suggests that the scientific community may be ill-equipped to deal with someone who perhaps takes the promises of science more literally than do scientists themselves. After all, scientific research is funded not solely out of the pockets of scientists, but out of the pockets of millions of taxpayers who have been led to believe in science. If taxpayers had an accurate understanding of, say, the rather limited efficacy of funded medical research vis-à-vis health problems, they might think twice. However, taxpayers buy into the faith that in the long run medical research will cure all of their ailments, or at least the ailments of their descendants. That’s already a pretty big leap of faith, and Istvan artfully capitalizes on it to advance his more extravagant claims for immortality.

More problematic is Istvan’s dismissal of the military as a drag on the funds that could be channeled into research promoting human immortality. Underlying the quest for immortality has been the fear of vulnerability. It is not by accident that the modern welfare state was invented by the Prussian statesman Otto von Bismarck. He was repulsed by the idea that ordinary people would be called on to defend their country in times of war, yet couldn’t defend themselves against disease or want in times of peace. The prospect of civilian-targeted warfare in today’s world has brought the realms of national and personal security even closer together. This is exemplified by the image of the virus, which may refer to something either in silicon or carbon, but in both cases may be lethal to the body politic.

Istvan fails to appreciate that by making immortality, which amounts to invulnerability, an explicit goal of public policy, he is courting military-style perspectives on the conditions under which it might be both achieved and undermined. Indeed, recent mission statements from the Defense Advanced Research Projects Agency, which stress the need to synchronize the workings of human biology and technology, could easily provide an intellectual backdrop for the Transhumanist Bill of Rights.

What would happen if one day Istvan, or some upgraded version of him, were swept into the Oval Office and the Transhumanist Party agenda could be implemented? A harmonic convergence of politics and science would presumably enable everyone to live forever. The key philosophical questions would turn on those who refuse the offer of immortality. Would they be allowed to die? If so, when, on what grounds and by what means? And how would these remnants of Humanity 1.0 integrate into a society where people would be routinely encouraged to embrace immortality, if not be forced outright to undergo relevant treatments?

Istvan himself has remained silent on these questions, perhaps assuming that everyone would find immortality desirable. Yet some transhumanist thinkers have already envisaged a speciation process resulting in a Humanity 1.0 and a Humanity 2.0. In the more humane scenarios, the latter would create sanctuaries so that the former could continue to flourish for their abnormally shortened lives. This would be in keeping with today’s moral thinking about the treatment of the great apes from which Homo sapiens originally diverged. But of course, that moral understanding was long in the making, and throughout most of history much direct and indirect violence was inflicted on the apes because they were seen as subhuman. Perhaps the take-home message here is that those inclined to support a pro-science political party should be careful what they wish for.

Steve Fuller holds the Auguste Comte Chair in Social Epistemology in the Department of Sociology at the University of Warwick in the United Kingdom.

Climate Change is a Waste Management Problem

The physical problem underlying climate change is very simple: dumping carbon dioxide and other greenhouse gases into the air raises their concentrations in the atmosphere and causes gradual warming. In the several decades since climate change became an important international political issue, the necessary solution to this simple problem has been viewed as equally simple: the world must radically reduce its emissions of carbon-carrying gases.

Here we explore a different perspective, and a different type of solution. Carbon dioxide is a waste product; dumping it into the open air is a form of littering. Dumping can be avoided or cleaned up with technological fixes to our current infrastructure. These fixes do not require drastic reductions in energy use, changes in lifestyle, or transformations in energy technologies. Keeping carbon dioxide out of the atmosphere is a waste management problem. The rapid mixing of carbon dioxide in the atmosphere simplifies this waste management problem compared with others, such as sewage or municipal garbage, where local buildup of waste is deleterious and therefore requires the disposal of the specific waste material as it is generated. By contrast, carbon dioxide does not create local damage, and it does not matter where carbon dioxide molecules are removed from the atmosphere as long as the amount removed equals the amount added.

Waste management was introduced for other effluents because uncontrolled dumping caused serious and irreparable harm. For example, the introduction of sewer systems in European cities in the nineteenth century was driven by the recognition that cholera and typhoid were caused by water contamination. Introducing sewer systems had to overcome arguments that they were too expensive and that the causal relationship between waste and disease was not fully understood. As cause and effect became clear, sewer systems were built.

Nobody can buy a house today without a sanctioned method for sewage handling, and household garbage must be properly disposed of. Residents typically pay a fee to their local government to cover the costs of sewage removal and treatment. In many locations, private companies collect household garbage. Their successful business models rely on the fact that simply dumping garbage on the street is societally unacceptable, recognized as deleterious to health and well-being, and therefore illegal.

Even when the consequences of ignoring waste streams are not as drastic as with sewage, a majority of people may still agree on the societal value of cleaning up. For example, in modern societies, littering along highways is unacceptable. The consensus is visible in the fines established for littering.

For global climate change, a change in primary focus from emissions reduction and resource conservation to waste disposal changes the approach to the carbon problem. Current policies tend to encourage and reward reductions in carbon dioxide emissions. If the world were to consider carbon dioxide like sewage, this would not be the case. Rewarding people for going to the bathroom less would be nonsensical. Low-flow toilets would certainly be encouraged, but the reduced flow must still be properly channeled into a sewage system. Similarly, the alternative to littering is to properly dispose of (or recycle) trash, not to expect that people let trash accumulate in their cars. As a policy response, parking lots at scenic overlooks feature garbage bins.

The focus on reducing emissions to address climate change has typically included with it a moral judgment against those who emit. Such a moral stance makes virtually everyone a sinner, and makes hypocrites out of many who are concerned about climate change but still partake in the benefits of modernity. A waste management perspective makes it unnecessary to demonize or outlaw activities that create waste streams. It’s okay for people to use toilets and generate garbage; society in turn provides appropriate means of waste disposal to protect the common good. From a waste management perspective, carbon dioxide emissions represent the metabolic by-product of industrial activities on which billions of people depend to survive and thrive. Now we must learn to safely dispose of this by-product.

Another key element of a carbon dioxide waste management approach is that it does not demand a global transformation of existing energy infrastructures and technologies. Waste management demands only the construction of a parallel infrastructure to collect the carbon dioxide and dispose of it safely and permanently. The waste management perspective therefore does not threaten the political, social, and economic interests associated with the fossil energy system—and does not automatically trigger opposition from those interests.

Nor does a waste management orientation require the type of large-scale, coordinated effort that has dominated climate change policy initiatives to date. Because energy systems and transport systems are highly integrated and coordinated, efforts to reduce emissions must be integrated and coordinated as well. For example, adding renewable energy capacity to an electricity grid will not necessarily reduce emissions if the back-up power system necessary to balance intermittencies is still fossil-based. A waste management approach does not demand large-scale coordination; it requires only that individuals and companies start finding ways to dispose of or recycle carbon.

But is it real?

Are there affordable technologies for implementing carbon waste management? Would companies recognize a business model around carbon waste management? And can consumers be convinced that carbon waste management is necessary and that claims of carbon disposal can be trusted?

The centerpieces of carbon waste management are technologies for carbon dioxide capture and disposal. Such technologies already exist. Disposal is often referred to euphemistically as carbon storage. Carbon can be stored in many ways: it can be tied up in mineral carbonates or biomass; it can be injected underground; it can be stored in waste-disposal sites or bound up in materials used in the built infrastructure. Geological storage, the injection of carbon dioxide into underground reservoirs, has been demonstrated, is known to be affordable, and is virtually permanent. Geological surveys indicate that the storage capacity in diverse localities is sufficient for the large-scale introduction of carbon disposal. Most options other than geological storage are not yet well developed. They vary in cost, scalability, and permanence. Biomass options often fall short on storage capacity and permanence. Mineral sequestration is often too expensive, and substantial storage in the built infrastructure would require big changes in its design.

The most expensive part of managing carbon waste, however, is the capture of carbon dioxide. Most capture technologies have been developed for point sources, such as coal-fired power plants, but such capture cannot address emissions from distributed sources, such as cars or homes. This leaves behind roughly half of all emissions. Distributed emissions require technologies that can take carbon back from the environment, specifically from the air.

Capture of carbon dioxide from air is technically feasible. Until recently, much of the scientific focus has been on biological methods that use photosynthetic organisms to pull carbon dioxide out of the atmosphere. The biomass accumulated during the removal process would then be charred, and the resulting biochar would be stored along with any residual carbon dioxide produced during the charring process. Biomass capture is certainly feasible and very often affordable. Unfortunately, growing enough biomass to affect the world’s carbon balance would require vast amounts of agricultural land. Biological processes are simply not carbon-intensive enough to balance out industrial carbon emissions, but they can help start the process.

Chemical engineering approaches focused on capturing carbon dioxide directly from the air and then disposing of it by various means will make it possible to stop littering the air with carbon dioxide. Direct air capture (DAC) has been demonstrated in the laboratory and by several small start-up companies in small pilot plants. Collectors absorb carbon dioxide from air on filter surfaces, much like leaves on a tree. Several DAC methods have been proposed. In our own design, collectors stand passively in the wind like trees. Such synthetic trees are one thousand times faster in collecting carbon dioxide from the air than natural trees of similar size. The wind blows over the leaves of the synthetic trees and carbon dioxide sticks to them. Once loaded with carbon dioxide, the leaves need to be regenerated; the carbon dioxide that has been stripped off then needs to be processed further. Regeneration may involve heating the sorbent or exposing it to a vacuum.

Through our own research we discovered a sorbent that absorbs carbon dioxide when dry and releases it when exposed to moisture. Our leaves absorb carbon dioxide in the dry wind, and then release the carbon dioxide when wetted in a closed chamber. The raw product stream then needs to be cleaned, dried, and compressed. In our version, the initial product is a gas stream that contains one hundred times more carbon dioxide than in ambient air. If the regeneration chamber where we strip off the gas is evacuated prior to wetting, the carbon dioxide product is quite pure. If the chamber is filled with air when regeneration starts, then we produce a stream of carbon dioxide-enriched air. Further processing will depend on what is to be done with the carbon dioxide. Although some storage technologies can handle our carbon dioxide with little or no additional processing, if it is to be stored in geologic features, the carbon dioxide must be converted into a concentrated form under higher pressure. Technologies to upgrade the purity of carbon dioxide are already commonly used during flue gas scrubbing in what are called carbon capture and sequestration operations, and they are also used in various other commercial applications ranging from the production of carbonated beverages to the filling of fire extinguishers.

DAC and direct air capture with carbon storage (DACCS) can reach the scale of current carbon dioxide emissions without excessive land use and without the environmental impact of biomass growth. A collector the size of a trailer truck could pull a ton of carbon dioxide per day out of the air. Thousands of mass-produced units could be aggregated into air capture farms collecting a few million tons of carbon dioxide per year on a square mile of land, before the amount of air passing over the land limits carbon dioxide collection. Moving from a single tree farm to the global scale, a hundred million collector units would keep up with current world emissions. Befitting the size of the problem, this scale is huge but in no way unimaginable for a complex yet essential industrial product; the number of cars and trucks on the road globally amounts to about a billion. Moreover, our initial estimates suggest that a synthetic tree farm would be much more compact, perhaps hundreds of times more so, than a wind farm that would prevent an equivalent amount of carbon dioxide emissions.
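A back-of-the-envelope check of this scale-up, taking the authors’ one-ton-per-day collector figure at face value, runs as follows:

```python
# Back-of-the-envelope check of the DAC scale-up described above,
# taking the one-ton-per-day collector figure at face value.
tons_per_collector_per_day = 1.0
collectors = 100_000_000          # "a hundred million collector units"

annual_capture_tons = tons_per_collector_per_day * 365 * collectors
print(f"{annual_capture_tons / 1e9:.1f} billion tons of CO2 per year")
# -> 36.5 billion tons per year, which is indeed the rough magnitude
#    of current global CO2 emissions from fossil fuels and industry
```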

With these technologies, a picture of a possible carbon-neutral future emerges. Companies, communities, and environmentally conscious individuals are already looking for ways to reduce their carbon footprints. Forests of DAC trees could be installed in remote locations where carbon disposal problems will be minimal. Devices could also be installed near industrial sites that use carbon dioxide as a raw material, such as in the production of synthetic fuels, thus eliminating costs of shipping liquid carbon dioxide for commercial applications. As the market for disposal grows, more such units could be deployed. In such scenarios, the cost of closing the carbon cycle will define the carbon price. In cases where it is cheaper to capture carbon at the source (such as a coal- or gas-fired power plant) or eliminate the use of fossil carbon, the markets will move in this direction. Wherever biomass capture turns out to be cheaper, it will also be incentivized. At the very least, DAC can take back emissions that are difficult to avoid, such as from aircraft, heavy trucks, and ships. DACCS even makes it possible to collect and dispose of carbon dioxide that has been emitted in the past; indeed, it may be the only feasible option for removing the old waste that still litters the atmosphere.

But is it affordable?

Managing waste is never free. As a cost of good governance, we pay for sewage removal and treatment, for garbage collection, and for the production of clean water—and we make these payments willingly because we recognize both the public good that results and the consequences that would ensue if we did not deal with these matters. But we also make them willingly because they are not overly burdensome. What cost will be tolerated for carbon disposal may ultimately depend on a shared understanding of the pain that climate change will inflict. But even amidst continuing disagreement about the seriousness of the climate risk, some people will be open to paying some level of clean-up costs simply because they dislike the mess, just as many individuals were willing to voluntarily recycle their trash even before policies were put in place to incentivize recycling.

DACCS is likely to set the upper limit on the cost of carbon waste management. Since it can deal with any emission, it would displace more expensive technologies but would not stand in the way of cheaper methods where they are applicable. In the energy sector, the cost of carbon management must not dominate or even come close to the cost of using energy. This threshold is likely less than $100 per ton of carbon dioxide, which would add 85 cents to a gallon of gasoline. The American Physical Society in 2011 analyzed an early approach to air capture relying on off-the-shelf technology, and pegged the cost at $600 per ton of carbon dioxide. This would raise the cost of a gallon of gasoline by roughly $5. Newer technologies have greatly reduced this number, in some cases to below $100 per ton.
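The gasoline figures can be reproduced from the roughly 8.9 kilograms of carbon dioxide released per gallon burned (the US EPA’s standard estimate), as in this sketch:

```python
# Sketch of the per-gallon cost arithmetic. Burning a gallon of
# gasoline releases roughly 8.9 kg of CO2 (the US EPA's estimate).
co2_tons_per_gallon = 8.9 / 1000   # metric tons of CO2 per gallon

for cost_per_ton in (100, 600):
    added = cost_per_ton * co2_tons_per_gallon
    print(f"${cost_per_ton}/ton of CO2 -> ${added:.2f} per gallon")
# -> $100/ton adds about $0.89 (the "85 cents" above)
# -> $600/ton adds about $5.34 (the "roughly $5" above)
```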

Although these numbers can be verified only through public demonstrations or in a market environment, production costs of most technologies go down as more is learned about how to produce and use them, and costs will likely go down for direct air capture as well. Energy and water, the raw materials for our DAC design, set a cost floor for the technology of $10 to $20 per ton. Other back-of-the-envelope engineering estimates we have made (for example, based on the weight of the collectors) point toward the possibility of similar numbers. Cost reductions come with experience and cumulative production, and just as we’ve seen such reductions for renewable energy technologies, we should expect to see them for DAC.

Direct air capture is still in its infancy, but it has been proven in the laboratory and at small pilot scales. Critics initially claimed that thermodynamic constraints would prevent DAC from ever being affordable. When the low thermodynamic energy requirements of DAC were demonstrated, critics next borrowed economic lessons from the production of metal from low-grade ore to claim that the costs of extracting carbon dioxide from air would be prohibitive. But DAC economics is dominated by the cost of sorbent regeneration, not gas extraction, and while those costs are still high, they have the potential to come down dramatically.

But the necessary learning-by-doing reductions in production costs will not happen without doing. Unless the technology is supported and promoted, as renewable energy has been promoted in the past, it cannot reach affordable costs. A shift to a waste paradigm provides the policy rationale for such promotion by articulating carbon dioxide disposal as a public good, like sewage disposal or even national defense and public health. And just as government has supported technologies (for example, aircraft carriers and vaccines) to advance other public goods, it could use waste reduction as the public-goods focal point for developing the necessary carbon disposal technologies. Government support can create technical options and buy down the costs of these novel technologies until they become so affordable that their wide application is acceptable to people who are willing to pay only for litter removal. Alternatively, or in parallel fashion, philanthropists could fund demonstrations for proof of concept at scale and thereby advance social acceptability and stimulate voluntary efforts.

Transitioning to a carbon-neutral world

Costs are important, but equally important is trust in the waste management service. The process of carbon disposal or carbon recycling must be transparent and simple. Future service providers could either be trusted institutions or be audited by trusted institutions. They could be public entities, such as state or local governments, or even large corporations whose reputations would be severely damaged if they cheat (as in the case of Volkswagen).

Such trust can be established. For example, people usually do not question that gasoline pumps dispense different fuels for different octane ratings. High-end coffee shops can charge a premium for fair trade coffee. An important part of a waste management approach to excess carbon would be a transparent, generally accepted auditing methodology that results in certificates of negative emissions for the disposal of carbon. Certificates would be issued whenever carbon is stored; they would have to be relinquished when carbon is lost or purposefully mobilized. Many different methods of storage could be certified and contribute to the reduction of excess carbon in the mobile carbon pool. Consumers could simply purchase certificates that match their emissions. These certificates would offer a much more direct and satisfying alternative for individual action than carbon offsets, where individuals produce emissions but pay for others not to emit. In the waste management paradigm, you simply pay to remove your own emissions from the atmosphere, just as you pay to have your sewage processed. It would look odd to pay for someone else’s sewage treatment, while dumping one’s own into the river.

Carbon disposal offers many different models. For example, a city could run its own carbon disposal site, or an oil company could offer carbon-neutral fuels at the pump. Oil companies should have a particular incentive to market carbon-neutral fuels. As electric vehicles and renewable technologies grab more clean-energy market share, oil companies’ entire business model will fall apart if carbon dioxide cannot be recovered from the atmosphere and environmental carbon constraints become more severe. As a result, the industry should be motivated to push carbon removal technologies down the cost-reduction learning curve as soon as possible. Car companies could offer cars that are branded as carbon-neutral and include in their purchase price a pre-emptive carbon disposal of the expected lifetime emissions, which would be about 100 tons of carbon dioxide. Or imagine a button at the gasoline pump where individuals can choose to pay to have the 20 pounds of carbon dioxide that are released from a gallon of gasoline recovered and properly disposed of. If 1% of all fuel buyers in the United States could be convinced to buy back their carbon, this would build a disposal business of 12 million tons per year, dwarfing all other attempts at carbon capture and storage and exceeding the market for merchant carbon dioxide. This would create business opportunities and with it many new models for financing carbon waste management.
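The 12-million-ton figure is consistent with simple arithmetic on US gasoline sales; the consumption number in the sketch below is our assumption, not the authors’.

```python
# Sketch of the 1% buy-back arithmetic. The US gasoline consumption
# figure is an assumption (roughly the mid-2010s level), not from
# the article.
us_gasoline_gallons_per_year = 143e9
lbs_co2_per_gallon = 20            # the figure used in the text
lbs_per_metric_ton = 2204.6

buyback_gallons = 0.01 * us_gasoline_gallons_per_year
tons_per_year = buyback_gallons * lbs_co2_per_gallon / lbs_per_metric_ton
print(f"{tons_per_year / 1e6:.0f} million tons of CO2 per year")
# -> about 13 million tons, on the order of the 12 million cited
```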

Waste can often be recycled, and the biggest opportunity for carbon recycling lies in the production of synthetic fuels from carbon dioxide, water, and renewable energy. Ramping the production of such fuels up and down could be used to balance the intermittencies created by the large-scale move to renewable energy for electricity grids. Carbon mined from the atmosphere could also produce materials for the human-built environment. Examples include plastics and high-strength carbon compounds, as well as carbonate-based cements. Using such materials in the built infrastructure would effectively store carbon for the lifetime of the structure, and thus has the potential to tie up some fraction of the world’s excess carbon that has already been produced. To get a sense of the scale of this potential, consider that in the United States the concrete in the infrastructure amounts to maybe 90 tons per person. We have calculated that if infrastructure relying on recycled carbon were to reduce the concentration of carbon dioxide in the atmosphere by 100 parts per million, then a future world population of 10 billion people would have to tie up 40 tons of carbon per person. Such back-of-the-envelope estimates are simply meant to show that there are many possible options that can be mobilized to advance a waste management paradigm.
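One way to check the 40-ton figure uses the standard conversion of about 2.13 billion metric tons of carbon per ppm of atmospheric CO2; the doubling step is our reading, as the article does not show its intermediate arithmetic.

```python
# Hedged check of the 40-tons-of-carbon-per-person figure. The
# 2.13 GtC-per-ppm conversion is standard; the doubling is our
# interpretation, not spelled out in the article.
gtc_per_ppm = 2.13        # billion metric tons of carbon per ppm of CO2
ppm_reduction = 100
population = 10e9

carbon_tons = gtc_per_ppm * 1e9 * ppm_reduction
print(f"{carbon_tons / population:.0f} tons of carbon per person")
# -> about 21 tons per person for the atmospheric decrement alone.
# The authors' 40 tons is roughly double this, consistent with having
# to remove about twice as much carbon because the oceans re-release
# CO2 as the atmospheric concentration falls.
```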

Direct air capture will not be a silver bullet that all by itself stops climate change, but it has many assets that can directly address some of the key obstacles to technical, political, and economic progress on climate change. It can scale up nearly without limit, and thus can provide a backstop technology that if necessary could balance the carbon cycle, assuring that whatever goes into the atmosphere also comes out, no matter how difficult it is to reduce emissions from particular technologies or sectors, such as transportation. Direct air capture with carbon storage can also, if necessary, lower the carbon dioxide concentration in the atmosphere much faster than natural processes would. Without negative emissions, the warming impact of carbon dioxide will linger for the next millennium.

The waste management paradigm can be adopted without waiting for the energy system to transform. Adoption of DAC technologies does not depend on phasing out or out-competing incumbent energy technologies, and thus adoption is not held hostage to those who create the carbon problem and see no immediate gain from solving it. Nor must DAC replace existing energy infrastructure or social and cultural arrangements that depend on that infrastructure. When technologies provide functions and services not previously available, they can scale up rapidly. The introduction of cars, jet airplanes, and computers worldwide, or the introduction and speedy adoption of nuclear energy in France, show that new technologies can conquer markets in a couple of decades. New businesses can take on the task of carbon disposal, and so can existing ones that see new opportunities in the waste disposal business, even if they are not producing carbon dioxide.

A waste management approach introduces a disposal cost for carbon. As a result, it would often be more cost-effective to avoid emissions entirely. Efficiency, conservation, and carbon recycling will therefore be incentivized. Point-source carbon capture at power plants with associated disposal will often be more economic than air capture. But DACCS will make it possible to regulate all emissions no matter where they originate. Perhaps most important, the waste management approach, unlike efforts to reduce emissions by managing large-scale energy systems, does not require top-down coordination and management. Various government agencies and private companies have spent billions of dollars on new energy technologies aimed at reducing emissions, but, for many complex reasons, emissions continue to climb. For air capture of carbon dioxide, the story will be different. Each independent effort to capture and dispose of waste will always move us, however incrementally, in the right direction. That, in the end, is the power of the waste management paradigm.

Notes from a Revolution: Lessons from the Human Genome Project

A critical scientific effort that almost didn’t happen illustrates the need for a rigorous but flexible process to evaluate large-scale transformative research proposals.

There are very few scientific endeavors that can be recognized almost immediately as seminal moments in the progress of human knowledge. The Human Genome Project (HGP) is one of them. There is no question now that the information locked in the DNA of all of us is both wonderfully rich in content and critically important to understanding biology and medicine.

The HGP, and the genomic revolution that it started, has become so much a part of biology that its history is often taken for granted. We have been surprised that many biologists and medical researchers are unaware that the initial proposal to sequence the human genome was fraught with controversy, that there was no clear consensus in the scientific community about whether it was worth pursuing. Opponents argued that funding the HGP would severely restrict investigator-initiated research projects, that we lacked the technology to complete the project in a reasonable amount of time, that the biological sciences would become increasingly politicized, and that even if it could be completed, most of the information would be useless. It was also argued that funding would be squandered on “big science,” and in the process the National Institutes of Health (NIH) and other science funding agencies would lose their focus and scientific effectiveness. All these objections might have seemed reasonable, considering the unknown future of science and technology as well as the tight research budgets, but in retrospect we can see that they were seriously misguided. A careful review of the origins of the opposition and the nature of their objections reveals the complexity of the science policy process, particularly for novel, large-scale projects.

When the project was first proposed in the mid-1980s, one prominent skeptic was Nobel laureate David Baltimore, who estimated that it would take 100 years to sequence the human genome, and many agreed. The 100-year estimate would, in fact, have been about right if the technology had remained frozen at the level of 1985. But the technologies used in sequencing the genome improved rapidly, and the century of work was compressed into 15 years. Another eminent geneticist, David Botstein, warned, less seriously, at a national meeting in 1986 against becoming involved in the “mindless big science of sequencing genomes,” to wide and warm agreement from the audience. To their credit, Baltimore and Botstein both quickly recognized the value of the project and were active in garnering support. They are examples of how good interactions and scientific debate in the community informed and built the case for the HGP even in the face of broad opposition. Others, however, remained irreversibly opposed.
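
A toy calculation, ours rather than anything in the historical record, shows how steadily improving technology compresses such a timeline. If sequencing capacity had doubled every three years or so (an assumed rate, chosen only for illustration), a century of work at the frozen 1985 rate would have been finished in roughly 14 years:

```python
import math

# If capacity doubles every d years, cumulative work through year t is
# r0 * d / ln(2) * (2**(t/d) - 1), where r0 is the initial (1985) rate.
# Setting that equal to `work_years` of work at rate r0 and solving for t:
def years_to_finish(work_years=100.0, doubling_time=3.0):
    return doubling_time * math.log2(1 + work_years * math.log(2) / doubling_time)

print(f"{years_to_finish():.1f} years")  # ~13.8 years with 3-year doublings
```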

Even as late as 1990, after more than four years of debate, and after the project had been formally proposed and received seed funding, the controversy continued to fester. Anxiety about the possible erosion of funding for individual research grants at NIH gave birth to a movement among university researchers opposed to the HGP. Martin Rechsteiner of the biochemistry department at the University of Utah sent a “dear colleague” letter to researchers across the United States urging a protest against the HGP as “a waste of national resources.” He was joined by many others, including Harvard University’s Bernard Davis, a well-known microbiologist. Rechsteiner and Davis testified against the project at a Senate hearing in July 1990 organized by Senator Pete Domenici of New Mexico and chaired by Senator Wendell Ford of Kentucky.

Davis argued that an organized project was unnecessary because the human genome would be mapped and sequenced by individual researchers working as they always had. The issue of taking money away from individual research grants at NIH was front and center, but Rechsteiner also expressed some disdain for what could be learned from the sequence of the genome. The hearing generated several letters and petitions from university departments opposed to the genome project that were included in the Congressional Record.

Nor was there a shortage of fierce objections to the project on scientific grounds. For example, Robert Weinberg, an eminent cancer researcher at the Massachusetts Institute of Technology, was one of a number of scientists who argued that the project made no sense because so little of the genome codes for proteins, and the data would likely reveal very little for the resources expended. An indication of the extreme end of this fringe came when Martin Rechsteiner told a New York Times reporter, “The human genome project is bad science, it’s unthought-out science, it’s hyped science.”

Had any of these arguments and attitudes prevailed—and they could have—they would have led us badly astray. Instead, we now have a rich resource whose scientific, medical, and economic impact has been transformative.

The project’s history has taught us useful lessons about the research process and about best practices for managing large, complex ventures and biomedical consortia. Some of the lessons were recently discussed by NIH Director Francis Collins and colleagues in an article in Nature. They outlined and briefly discussed six lessons, which emphasize the importance of partnerships, free data sharing, data analysis, technology development, and the ethical and moral issues that accompany all transformative technologies.

To these we can add two more: be flexible—because unanticipated difficulties occur in almost any complex project, and the organizational and financial structure must allow for these—and encourage multiagency participation. The success of the HGP was in no small measure the result of cooperation among a number of agencies, and especially between the Department of Energy (DOE), the birthplace of the project, and NIH, whose mission encompassed potential health benefits of the project. NIH provided an essential effort to address the medical implications, and DOE provided an equally essential piece that addressed the development of key technologies. The multiagency, international project quickly gained the attention of a diverse set of organizations and individuals and as such provided a context that, in retrospect, was unusually complex and nuanced.

Our focus here will not be on the scientific, medical, and economic importance of the HGP, since that is now widely accepted. Nor can we offer fixed prescriptions about how to proceed successfully into an unknown future. Instead, we reflect on science policy lessons that can be learned from the way the project was initiated, unfolded, and ultimately reached a successful conclusion.

We can’t help being struck by the fact that the genome project emerged suddenly from a background murmur of ideas and discussions, brought to life by DOE, a federal agency that was widely thought to be peripheral to biomedical science. This fact was in no small measure the source of considerable controversy and confusion. If large, complex enterprises become increasingly common in the future—as we might reasonably expect in this era of dramatic advances in knowledge, transformative technologies, and big data—and burst on the scene with little prior warning from an unexpected source, as the HGP did, it would seem wise to consider ideas and processes that might leave us better prepared than we were in 1985.

More specifically, many of our suggestions underline the importance of openness to new ideas originating from unexpected sources and the development of guidelines for considering large, transformative ventures that cut across multiple scientific disciplines and organizations. We briefly discuss the development of effective processes for public-private partnering and ways to accelerate transformative, inter-organizational projects. Some of our observations and ideas are now more or less accepted and have helped the nation find pathways to better science policy, but others have not yet penetrated our collective consciousness or operational policies.

The perspectives provided by the history of the HGP are brought into better focus when we recall some of the key cultural characteristics of biological research in the mid-1980s. First, there was relatively little discussion and interaction among agencies, even when their mission boundaries were somewhat blurred. Second, the culture of biology valued, almost exclusively, the small science of individual investigators. The HGP was viewed by many as an embodiment of wanton, brute-force science, light on knowledge seeking, devoid of hypothesis, and with no assurance of the biological significance of the eventual results. Third, application of modern technology and interdisciplinary effort had not yet become part of the general culture in biology. There were no large, complex, multiyear scientific projects in biomedical science that required contributions from multiple disciplines. Mathematics and computation, for example, were still largely foreign to the biomedical community, with the exception of statistical services for epidemiology, clinical trials, and some specialized areas, and much of technology development and engineering also stood well apart, though there were a few important exceptions.

[Figure: timeline of the Human Genome Project]

Lessons learned

The history of the HGP can usefully inform our approach to a wide swath of future science policy processes and help us avoid decisions that could lead to lost opportunities for the nation and the world, just as we nearly lost the opportunity to launch the HGP. The most important lessons include the following:

Remain open to new ideas, particularly those that emerge from unexpected sources. When a massive, decade-long, interdisciplinary project directly relevant to health sciences was proposed by DOE, the major funder of physics, chemistry, and engineering, the biomedical community was naturally surprised and somewhat skeptical. The National Institutes of Health, after all, was the world’s dominant supporter of biomedical science, with a long track record of major discoveries. But whereas NIH valued and focused almost entirely on the small science of individual investigators, DOE had decades of experience managing large, complex, collaborative projects that often required capital-intensive resources, including some related to health and the environment.

As DOE capitalized on its expertise in advanced computation and instrumentation to move increasingly into modern biology, its culture became fertile ground for the growth of a project such as the HGP. Of particular relevance is the fact that the only DNA sequence database had been established at the Los Alamos National Laboratory a decade prior to the start of the HGP.

Meanwhile, NIH was also beginning to accommodate proposals that were culturally aligned to what would become the substance of the HGP. Notably, it started to increase its support for biomedical programs that required intensive computation and large, biomedically inspired resource centers. These included new forms of mass spectrometry, X-ray stations for protein structural studies at synchrotron particle accelerators at several DOE national laboratories, and massively parallel supercomputing, all of which were starting to influence biological science and medicine. These winds of change, sweeping across the scientific landscape, went mostly unnoticed in biomedical circles.

Notwithstanding this neat and well-known division of responsibilities between the two agencies, DOE was no stranger to biology. The roots of biological research at DOE went deep, all the way back to the end of World War II. In 1946, just before the birth of the Atomic Energy Commission from the Manhattan Project, Eugene Wigner, a physicist at Princeton University and a Nobel laureate, was persuaded to take over Oak Ridge National Laboratory as director and to create a new kind of focused haven for scientific research. One of the first things he did was to hire Alexander Hollaender to build a biology division, in part to study the biological effects of radiation. Hollaender chose to build the research effort around genetics, and the study of the genetics of fruit flies, plants, and fungi became an Oak Ridge focus. In 1947 he hired William and Liane Russell from the Jackson Laboratory in Bar Harbor, Maine, to initiate a study of mouse genetics at Oak Ridge. By examining the genetic effects of radiation exposure, they soon discovered that the human exposure standards, which were based on fruit fly experiments, were far too high: the mice were more than 10 times more sensitive than flies to radiation-induced mutations. For a while Oak Ridge was home to the largest biological research laboratory in the world, and it was richly productive. The Russells made a number of advances in mouse genetics, but there was much more. The division’s scientific credits include the discovery of the electronic nature of energy transfer in photosynthesis and Bill Russell’s seminal inference, drawn from the different genetic responses to the same radiation dose delivered quickly or slowly, that a DNA repair mechanism must exist. Later on, in 1964, Richard Setlow discovered excision repair of DNA at Oak Ridge, and his student Philip Hanawalt subsequently worked out many of the implications of this work. One of the most momentous accomplishments of the early era at Oak Ridge was the discovery of messenger RNA in 1956 by Elliot Volkin and Larry Astrachan (although they did not identify it as carrying the information from the DNA). Nobel laureate Paul Berg called this experiment an “unsung but momentous discovery of a fundamental mechanism in genetic chemistry” that “has never received its proper due.”

Wigner also established a medical division at Oak Ridge, to focus more on medical effects than fundamental mechanisms. This sister division thrived as well, researching, for example, both radiation’s role in inducing cancer and its use in treating it. One of the division’s luminaries, Arthur Upton, later became director of the National Cancer Institute.

The irony that DOE was positioned to initiate a genome-like project, largely by historic accident, actually reveals a deep principle worth noting. Preexisting diversity, created for any reason, can make transformations possible that might otherwise be unlikely. This echoes the themes of Darwinian evolution: diversification, selection, and amplification. At an organizational level, realization of this principle requires acute awareness, acceptance, and a philosophy fundamentally open to seizing unexpected opportunities for innovation.

Develop guidelines for vetting and responding to transformative ideas that cut across multiple scientific organizations and disciplines. Interagency partnering on large projects and strong lines of communication are now common. In particular, the National Science and Technology Council’s Committee on Science used by the Clinton, second Bush, and Obama administrations played an important role in coordination, especially in the neurosciences, and to an increasing extent in microbiome research, but its representation has tended to be focused on human health. Although participation could well be broadened, the helpful and encouraging voice of an organization such as the Committee on Science is a striking contrast to the profound silence that emanated from the Office of Science and Technology Policy, the executive branch’s central science and technology arm, when the HGP was launched.

This progress notwithstanding, the transformation is incomplete. The clear articulation of an inclusive vetting process for new and transformative ideas, and more focus on how to foster innovation, is still needed. And although interagency coordination on the HGP was established within only a few years of its inception, it happened in response to congressional influence and was contentious and far from optimal.

NIH was initially opposed to getting involved in the project, and its director, James Wyngaarden, was hesitant to move too far ahead of a divided community. Some of the key NIH advisers and several eminent scientists strongly advised Wyngaarden that NIH needed to support this effort on its merits and to assume its ownership, which had fallen to DOE by default because Senator Domenici had introduced a bill to start a national project under the aegis of DOE. Wyngaarden eventually agreed to support the HGP, and several congressional friends of NIH—notably Senator Edward M. Kennedy of Massachusetts, chair of the Senate Committee on Labor and Human Resources, and Senator Lawton Chiles of Florida, a member of the Senate Appropriations Committee and chair of the subcommittee responsible for NIH—were mobilized to deal with Domenici’s bill. Early in 1987, Wyngaarden endorsed the HGP officially in congressional testimony. Domenici’s measure was soon absorbed into an omnibus biotechnology bill that died in committee. Finally, start-up funds were appropriated to both NIH and DOE for human genome research in fiscal year 1988. The interagency Human Genome Project was born.

Develop effective processes for public-private partnering. Although interagency coordination is now much stronger than it was four decades ago, meaningful partnering and collaborative mechanisms need more development beyond the government. In the early genome era, there was one example of public-private interaction that stood out. The Howard Hughes Medical Institute (HHMI) played a critical catalytic role by engaging a number of key scientists in discussions. For example, James Watson met regularly with George Cahill, director of research for HHMI. HHMI also funded a number of meetings involving university scientists and people from several government agencies to discuss the issues surrounding the initiation of a genome project, and it provided initial funding for genetics databases, including OMIM, a continuously updated catalog of human genes and genetic disorders and traits. We are unaware of any other organizations that encouraged regular communication, collaboration, and mutually beneficial partnerships among private enterprises, nonprofits, and academia, or of any mechanisms to enable them.

In this brief essay we cannot begin to analyze the complexities and potential problems in partnerships between the government and for-profit organizations, but there is much to be gained from forming such partnerships. In the past few decades limited partnerships at the level of individual scientists from all sectors have become possible and are flourishing, whereas 30 years ago collaboration between a government scientist and an industrial researcher was routinely disallowed. Solutions to this problem evolved naturally as the benefits were recognized. A problem that has yet to be solved is routine and timely access to data, but its importance has been recognized. Many scientists, ourselves included, believe that data generated at public expense should be released without significant delay, after quality assurance. This is relevant to for-profit organizations, universities, and nonprofits as well. Forty years ago, universities were not in the habit of patenting their intellectual property or spinning off new ventures, as they are now under the influence of the Bayh-Dole Act. As this trend continues, the pressure to sustain periods in which data are proprietary may also increase. We raise these issues to emphasize that considerable thought and attention are required to enable and encourage socially beneficial practices among all research institutions.

The private sector played a significant role in the HGP, including the development by Applied Biosystems Inc. of the automated capillary DNA sequencers that eventually generated the data for the first human genome sequence. Without this technology, it would have taken many more years to generate the data.

Whereas collaborations among scientists in different sectors should certainly be encouraged, as should collaborations among groups that provide complementary expertise, there are occasions in which parallel competitive efforts are useful and perhaps inevitable. The history of the HGP provides an example of the importance of such a parallel effort, although the way it played out was far from optimal.

Perhaps the most important industry role in the HGP was that of Celera, a private company led by J. Craig Venter, which was a key part of the end game of the project. Although there were ultimately two genome projects—one public (NIH and DOE), including international efforts, and one private (Celera)—there was substantial mutual benefit. The Celera effort was initially seeded by DOE’s Office of Health and Environmental Research, but it was organized, funded, and led independently of the public project, though many scientists with long involvement in the project contributed by joining the Celera effort or becoming advisers. In retrospect, the HGP would likely not have been completed as early as it was without the massive effort by Celera using some newly devised methods.

There was significant disagreement within the scientific community about the relative merits of the systematic mapping and sequencing effort adopted by the public project and the “shotgun” sequencing and subsequent assembly approach followed by Celera. The stimulus to the federal effort provided by an independent private effort was, in our view, substantial. Ultimately, the initial reference genome, to which both contributed, was released sooner and with more data than would have been the case with either working alone.

Although there were specific criticisms made of each team by the other, it is now clear that both strategies worked. The overall goal of providing a huge amount of useful genome data to the community in a short time was greatly served by the parallel efforts. Finally, we must note that one of the general hallmarks of good science is an effective balance between competition and collaboration. The HGP demonstrated that this is possible, if difficult to achieve, even when it involves multiple complex organizations.

Develop a process that could move quickly to evaluate and establish large, transformative inter-organizational projects. It is worth considering where large, transformative projects come from. In all cases, ingredients include advances in science and technology as well as advocacy by a critical number of scientific leaders. Beyond those, the flame might be lit by an agency (as with the HGP) or fueled by conflict (the Manhattan Project), or by a combination of both (the space program) or perhaps by an unanticipated global crisis. A general process for casting a wide net and for soliciting, triaging, vetting, and facilitating the development of potentially transformative ideas across the biological science spectrum is worth developing. Perhaps the National Academies of Sciences, Engineering, and Medicine could play a significant role in this important process.

The NIH Roadmap is a possible model for soliciting, vetting, and initiating large, innovative, and potentially high-impact research projects that could be scaled up to an interagency level. The Roadmap solicits innovative trans-institute proposals that are then subjected to a multistep vetting process. To be applicable to science fields beyond the biomedical, however, it would need modification to account for the different community cultures and government agencies involved. Precisely how that could be done is well beyond the scope of this article, but we believe that what we have learned from the HGP may hold significant value for such future plans. In any case, clarity of purpose, faith in the creative resourcefulness of the scientific community, and a rich diversity of ideas have significant value for large, ambitious projects. Indeed, perhaps the most important lesson from the success of the HGP is that the scientific community’s creativity, organizational skills, and ability to cooperate and solve seemingly impossible problems should never be underestimated.

It is obvious that cooperation at all levels—among individual scientists, among consortia, among federal agencies whose missions have sometimes blurred boundaries, between the public and private sector, and among nations whose interests were not always fully aligned—contributed to the success of the HGP. Considering the rapidity with which the project came to the scientific and political arena, some trauma is not surprising. What is remarkable in retrospect is the boldness of the project, and even more, the rapid adaptation and cooperation by a biomedical research community that was traditionally conservative, by very different government agencies, and by a Congress with tight budget constraints. Its success is perhaps a good example of the mysterious wisdom of crowds, interacting openly. The strong cooperation that developed between NIH and DOE was essential to the testing of ideas, the fading of opposition, the marshaling of essential resources, and the strong support of Congress. The result was a remarkable contribution to human knowledge, a practical success and an example of a complex set of interactions that almost didn’t happen, but then really worked. The lessons of the HGP may usefully inform current efforts such as the Precision Medicine Initiative and the Cancer Moonshot.

A lesson that should not be drawn from the HGP experience is that every big, bold idea deserves support. The mechanisms that we propose to facilitate action on good ideas will also be useful in marshaling the rigor and insight needed to protect us from pursuing bad ideas. Open debate among experts from many disciplines, across federal agencies, and from many sectors of the economy is the best way to filter out the weak proposals as well as to build the foundation for cooperation on the most promising.

David J. Galas is principal scientist at the Pacific Northwest Research Institute in Seattle, Washington; Aristides Patrinos is New York University Distinguished Industry Professor of Mechanical and Biomolecular Engineering; Charles DeLisi is the Metcalf Professor of Science and Engineering at Boston University. Each of the authors served as director of biological and environmental research at the Department of Energy during the initiation and operation of the Human Genome Project.

Recommended reading

G. Cahill and D. R. Hinton, “Howard Hughes Medical Institute and its role in genomic activities,” Genomics 5, no. 4 (1989): 952-954.

E. Green, J. D. Watson, and F. S. Collins, “Human Genome Project: Twenty-five years of big biology,” Nature 526 (2015): 29-31.

Daniel J. Kevles, “Big Science and Big Politics in the United States: Reflections on the Death of the SSC and the Life of the Human Genome Project,” Historical Studies in the Physical and Biological Sciences 27, no. 2 (1997): 269-297.

E. Lander, et al., “Initial sequencing and analysis of the human genome,” Nature 409 (2001): 860-921.

National Research Council, Mapping and Sequencing the Human Genome (Washington, DC: National Academies Press, 1988).

S. Tripp and M. Grueber, “Economic Impact of the Human Genome Project,” Battelle Memorial Institute (2011).

Robert A. Weinberg, “The case against gene sequencing,” The Scientist (16 Nov. 1987).

J. C. Venter, et al., “The Sequence of the Human Genome,” Science 291 (2001): 1304-1351.

E. Zerhouni, et al., “The NIH Roadmap,” Science 302 (2003): 63-72.

Coordinated Action Against Climate Change: A New World Symphony

A systems approach begins with limiting greenhouse gas emissions and adapting to unavoidable climate disruptions, while researching the feasibility and governability of geoengineering.

According to the Intergovernmental Panel on Climate Change’s latest report, issued in 2014, any plausible path for reducing global greenhouse gas emissions that would keep the Earth from warming by more than 2 degrees Centigrade will require direct interventions to modify the atmosphere—that is, geoengineering. This conclusion obviously applies even more starkly to the aspirational goal of limiting warming to 1.5 degrees or less that was adopted at the 2015 United Nations climate conference in Paris. These targets indicate that we should be prepared for a future where deployment of technologies to intentionally modify the global climate for human benefit will be contemplated, and thus in turn that we need to know more, and soon, about if and how such modifications might work.

Calls for research on geoengineering have consequently been finding their way into reports from high-level scientific and policy organizations, including the US National Academy of Sciences, the UK Royal Society, and the Bipartisan Policy Center. For the first time, the US Global Change Research Program’s strategic plan, currently in review, calls for research on geoengineering and highlights specific issues for research.

Since the Nobel Prize-winning atmospheric chemist Paul Crutzen wrote his famous article in 2006 pointing to the possibility that humans could deliberately cool a warming Earth, scientists have focused on two possible classes of geoengineering technologies. Solar radiation management (SRM) technologies either reflect some of the radiation coming from the sun—for example, with particles injected in the stratosphere or clouds—or remove radiation-trapping barriers such as cirrus clouds to allow more radiation to leave the Earth. Carbon dioxide removal (CDR) technologies remove carbon dioxide and other greenhouse gases from the atmosphere to set the clock back on climate change.

Unlike mitigation and adaptation, which can be pursued at independent national and even local levels, the character of geoengineering intervention—whether SRM or CDR—is fundamentally strategic at the global level. Climate intervention would require developing the elements of a strategy: setting a global goal for the intervention, choosing specific actions, and developing methods for monitoring the results and mechanisms for changing course as more information becomes available.

These ideas—especially for SRM, which might be relatively inexpensive and fast-acting—remain controversial. One common ethical concern about geoengineering, known as the “moral hazard” problem, is that people may come to think geoengineering relieves us of the need to mitigate emissions by decarbonizing the global energy system. In truth, SRM-type technologies could not safely keep up with an ever-growing concentration of greenhouse gases in the atmosphere. Without mitigation, the radiation imbalance would continue to grow, so the amount of intervention required to keep temperatures below a specific limit would also grow. Yet SRM does not perfectly cancel the impacts of continued emissions: attempting to counteract an ever-increasing greenhouse effect through commensurate efforts to artificially deflect solar radiation will lead to larger and more dangerous departures from known climate states. Thus, mitigation constitutes a prerequisite for practical SRM. Geoengineering should never be thought of as an independent technology.

The climate intervention endeavor should be part—and only part—of a symphony of actions harmonized for managing the global environment. The most important strategy remains eliminating greenhouse gas emissions. Stopping emissions does not equate to adding up how much solar power has been added to the grid, and it certainly does not equate to how much nuclear power is taken off the grid. It means stopping emissions. Adaptation is crucial as well. The flooding, droughts, ecological damage, and fires exacerbated by climate change will force increased attention to creating resilience, resisting the changes, or retreating from problematic regions. Climate engineering may augment these foundational instruments; it will not replace them.

The research community largely knows that geoengineering as currently defined makes no sense without a vigorous attempt to mitigate greenhouse gas emissions, and that adaptation will be necessary as well. However, researchers may have—likely unwittingly—played into the slippery slope concern, the worry that research leads inevitably to deployment, by differentiating types of geoengineering according to how they perceive the governance issues. For example, the recent National Academy of Sciences report on geoengineering was separated into a report on SRM and another on CDR because researchers see these as distinct technologies having distinct governance requirements. Although a split in governance may serve the interests of science projects, it forfeits an opportunity to think about a holistic climate strategy and may not be in the best interests of society. Keeping research on geoengineering firmly in a comprehensive context, including mitigation, adaptation, CDR, and SRM, should help protect society from the moral hazard and promote strategic thinking.

The latest set of agreements in Paris made a major advance by establishing the goal of controlling temperature. But the world is far from agreeing on the means to reach that goal. In fact, each country will propose its own means, with no guarantee that a successful global approach will emerge. But contemplation of climate engineering invites consideration of an overall strategy on climate. For example, if mitigation proceeds, but not fast enough to avoid dangerous effects of warming, SRM may help to cut off the peak of the problem. However, wise deployment of SRM would have to be predicated on an end game for stopping deployment. If the time comes when we have finally stopped emitting excessive amounts of greenhouse gases, but the persistent atmospheric concentrations of carbon dioxide remain too high for comfort, then we could deploy SRM only until a long, slow effort to lower atmospheric concentrations using CDR technologies makes it safe to stop SRM.

Research and governance

But we don’t yet know if geoengineering technologies will work. To even begin to take seriously any such strategic approach for addressing climate change, we will therefore have to do research. And given the high stakes and controversial nature of geoengineering, research programs will have to be accompanied by a well-structured governance approach to ensure that policymakers and the public alike have confidence in the science and its implications for action. To push the symphonic metaphor perhaps a bit too far, effective research governance will be required to ensure that the instruments of atmospheric management are in tune. Such governance should be guided by a small number of principles.

International collaboration focused on monitoring and results-sharing can provide a good path forward. In the past, international collaborative research has helped to establish international policy on difficult subjects. For example, international cooperative research demonstrated that geophysicists could detect any nuclear weapons tests conducted anywhere, anytime, and this capacity in turn enabled ratification of the nuclear test ban treaty. Similarly, international cooperative research on detection and attribution of deliberate climate interventions would be a strong starting place for building trust into international discussions of geoengineering options.

The Intergovernmental Panel on Climate Change has been encouraging international collaboration through model inter-comparison projects focused on SRM, and there is additional important work to do using numerical models and thought experiments. Collaboration could also come about if a few countries start their own research programs and then work together. Researchers have already proposed small experiments to illuminate and delimit physical, chemical, and biological processes that underlie model assumptions. Individual nations could fund these experiments, but early international collaboration would point toward the eventual goal of internationalized research and commonly held results.

The early experiments should start small and be limited to those that present negligible risk of perturbing the climate. Any larger-scale or more risky experiments in geoengineering should be considered only later, if at all.

Coordinate scientific and governance learning. If at some time in the future there were significant scientific gains to be made with experiments that crossed national boundaries or posed some risk, even if relatively small, imagine how difficult it would be to govern these if none of the basic elements of governance had been assembled and exercised beforehand. Starting research governance simultaneously with outdoor research allows governance skill to grow along with scientific knowledge and will help to ensure that governance for more difficult problems is not left until the last minute.

External and independent advisory groups can help scientists to articulate clearly the key questions they are trying to answer and facilitate deliberation with the public. For example, scientists might be trying to answer the question “Is this technology effective?” or “Is this technology safe?” The purpose of the proposed experiment might be to answer the question “How does a specific mechanism that affects efficacy or safety work?” Citizens and policymakers might agree (or not) that they would like the answers to these questions, but they might also have their own questions. A scientist may want to determine how small particles coalesce into larger ones. A citizen may want to know which chemicals would reach breathable air. Dialogue can help to focus and articulate research questions and help scientists to better understand and thus explain (or modify) their own priorities. This may sound easy, but in practice it takes time and attention. Even if an engaged public agrees with the research questions, they may challenge the need for the proposed experiment to answer them.

Public and policymaker engagement in setting the goals for research can be one of the most important interfaces between science and society, even in early research, especially if these goals can be articulated in terms of questions people have. For example, just as nuclear test ban treaties were enabled by answering the question “Can nuclear tests be detected?” policymakers will likely have similar questions about detection and attribution for geoengineering research. If we decide to deploy an SRM technology and a country subsequently experiences drought, can scientists say how much the geoengineering technology had to do with that drought? If the answer is “yes,” the policy discussion will be quite different than if the answer is “no”—or, perhaps most likely, “we’re not sure.” The answer will also have large implications for a decision about whether to deploy and, if that choice is made, about what forms of governance should apply to that deployment.

Assure the quality and reliability of the research. No matter how long we work on these problems, we will never have the ability to precisely engineer an intervention. The models developed to explore possible responses to intervention cannot be validated at full Earth scale in double-blind tests. We can only hope and expect that research will increase confidence that we can (or cannot) move the Earth system in a beneficial direction. Some confidence could accrue if, over time, researchers are increasingly able to predict the results of their experiments a priori. The discipline of prediction and comparison of the prediction with results should be part of all geoengineering research. Twenty-twenty hindsight has much less impact on confidence-building than a priori prediction and ex post facto comparison of predictions and actual outcomes.

The principle of transparency comes up often as important for geoengineering research. But transparency involves much more than revealing the experimental plan and releasing data. Meaningful transparency has to enable dialogue and deliberation, so it must include revealing the intent of experiments: What is the experiment trying to achieve and why is the experiment the best way to get there? What is the quality of the information forming the basis of the experiment? How did the actual results differ from the hypotheses? What was learned from the differences and what could be done next and why? An advisory group can help guide researchers through the transparency process and react to the reporting.

The normal process of scientific peer review may suffice to ensure the reliability and veracity of individual research papers, but if any geoengineering concepts start to undergo serious research, review and assessment of the work will need to recognize that engineering the climate involves more than stand-alone research papers. The completeness of the studies and the accuracy of the assessments might be addressed by funding teams of researchers whose job it is to find out what might be missing or wrong.

Organize research around the mission, not the scientific opportunities. Geoengineering is fundamentally engineering—that is, a solution designed to solve a problem—and in this case the problem involves a system of many interrelated processes and issues. An engineering project, particularly one of this magnitude and complexity, requires a systems approach. Research should both define the critical elements of the system and investigate the total system response to intervention. Haphazard investigation driven by scientific curiosity is unlikely to take up all the elements of a systems approach and may even be unethical given the stakes involved in climate intervention.

Mission-driven research should inspire creativity in intervention concepts, select the best ideas, and then systematically determine their effectiveness, advisability, and practicality in the context of mitigation and adaptation. Design of a mission-driven climate engineering research program provides practice in skills critical for managing any possible future deployment of such technology. Re-invention of mission-driven research for geoengineering should learn from previous shortcomings, such as narrow control and poor communication, reckless disregard for collateral damage, and lack of public engagement.

Sweden’s nuclear waste program presents an excellent example of successful mission-driven research on a controversial subject done with strong international collaboration and public interaction. The program linked technical and managerial task groups with ideas about project requirements and strategies, and engaged communities that were plausible site candidates. A “safety case” described simply what the requirements for a site would be and why scientists thought a site that met these requirements would be safe. Only sites that met the criteria were selected for further characterization, and that proceeded only if municipalities agreed. The safety case was updated regularly so that it was easy to see that confidence in the concept was increasing. Repeated interactions with citizens in affected communities led to project acceptance. Sweden now has a licensed nuclear-waste repository in a community that asked for it. Many of these same concepts would work for mission-driven geoengineering research.

One important distinction for geoengineering research is that the mission should not be to deploy a geoengineering concept. The goal for climate intervention research must be to understand the potential efficacy, advisability, and practicality of various concepts in the context of mitigation and adaptation. This means that concepts should always be evaluated relative to plans and projections about mitigation and climate effects. Importantly, this also means that a research institution investigating a specific concept would fulfill its mission if it found that the concept was a bad idea. We do not currently reward scientists or research institutions for identifying bad ideas. The reward structure for institutions conducting mission-driven research must reflect the societal benefit desired from the program, including eliminating an inadvisable concept.

Harmony imagined

The extraordinary difficulty of moving the global economy to eliminate greenhouse gas emissions is beginning to force serious attention on geoengineering. This attention in turn compels us to think about climate change in ways that are both strategic and global. It allows us to imagine how a symphony of harmonized actions might be necessary for assuring the long-term well-being of humanity. We might, for example, eliminate emissions as fast as we can, adapt as well as we can, use solar radiation interventions for a limited period to buy more time for mitigation while preventing climate changes that create insurmountable challenges, and then find ways to remove the troublesome gases from the atmosphere and conduct the Earth back to a more stable, livable climate so that radiation interventions can be stopped.

Strategic thinking at this level creates a social imperative to begin learning more about geoengineering and to govern the necessary research in ways that assure the confidence of the public and policymakers. Thus, even if humanity has the good fortune never to have to deploy geoengineering, the contemplation of these technologies is beginning to provide us with an opportunity to come to terms more holistically with the nature of the challenge that we face.

Jane C. S. Long, now retired, was formerly associate director for energy and environment at Lawrence Livermore National Laboratory and dean of the Mackay School of Mines at the University of Nevada, Reno.

Recommended reading

Bipartisan Policy Center’s Task Force on Climate Remediation Research, Geoengineering: A National Strategic Plan for Research on the Potential Effectiveness, Feasibility, and Consequences of Climate Remediation Technologies (Washington, DC: Bipartisan Policy Center, 2011).

P. J. Crutzen, “Albedo enhancement by stratospheric sulfur injections: A contribution to resolve a policy dilemma?” Climatic Change 77 (2006): 211.

Anna-Maria Hubert, Tim Kruger, and Steve Rayner, “Geoengineering: Code of conduct for geoengineering,” Nature 537 (2016): 488.

Sabine Fuss, et al., “Betting on negative emissions,” Nature Climate Change 4 (2014): 850-853.

Jane C. S. Long, “Piecemeal cuts won’t add up to radical reductions,” Nature 478 (2011): 429.

Jane C. S. Long and Jeffrey Greenblatt, “The 80% Solution: Radical Carbon Emission Cuts for California,” Issues in Science and Technology 28, no. 3 (2012).

Jane C. S. Long and Dane Scott, “Vested Interests and Geoengineering Research,” Issues in Science and Technology 29, no. 3 (2013).

National Research Council, Climate Intervention: Carbon Dioxide Removal and Reliable Sequestration (Washington, DC: National Academies Press, 2015).

D. E. Winickoff and M. B. Brown, “Time for a Government Advisory Committee on Geoengineering Research,” Issues in Science and Technology 29, no. 4 (2013).

Confronting the Crisis in Higher Ed

It is hard to believe that only four and a half years ago, the New York Times proclaimed 2012 “The Year of the MOOC.” What a difference a day makes. Three of the five books covered in this review take the MOOC, or massive open online course, phenomenon and the revolution that it seemed to promise as their jumping-off point. But digital learning technology, and the discourse about it, are moving so fast that even these careful, well-intentioned books can already seem somewhat dated. All of the books share a sense of crisis in higher education—of swiftly changing economics, technology, and social context—and of an urgent need for reform. Yet the argument that some pundits make—that the mass availability that digital learning technologies seem to offer will solve the cost crisis in higher education—is far too facile, as the best of these books recognize.

In MOOCs, High Technology, and Higher Learning, Robert A. Rhoads places the OpenCourseWare (OCW) movement (which includes the development of MOOCs) into a historical and organizational context. Indeed, one of the many merits of Rhoads’s book is his sociological analysis of the OCW phenomenon. He argues that the organizational system that developed in the OCW and MOOC movements arose within the context “of high demand for higher education, reduced or stagnant governmental funding, advances in Web technologies, and a powerful mix of public good and private enterprise interests.” He is particularly good at probing the complexity of public and private ambitions in the two movements—the revolutionary purpose some saw them serving (by increasing higher education access) and opportunities for profit the new industry offered.

Rhoads’s distinctions between kinds of MOOCs are especially helpful. He distinguishes between xMOOCs, which are essentially webcast versions of classroom courses, and cMOOCs, which have connective and interactive elements. Rhoads is critical of xMOOCs, arguing that there is no strong evidence that they represent an improvement over face-to-face instruction. Rhoads is committed to the work of the Brazilian educational theorist Paulo Freire. As a Freirian, Rhoads puts democratic dialogue at the center of education, in which students engage in the process of knowledge critique and construction, learning political engagement in the process. By their very nature, xMOOCs do not contain such dialogue, whereas cMOOCs can.

Rhoads is particularly interesting on the problems created by the ways in which elite universities have dominated the OCW and MOOC landscape, although he could probe this phenomenon even more deeply than he does. What were the motives of places like Harvard, MIT, Yale, and Columbia in investing so many resources in MOOCs, from which it would seem they would have little return? But Rhoads asks important questions: “Do we really want superstar faculty from elite universities teaching masses of students at underfunded colleges and universities through the use of recorded lectures? Is brick-and-mortar education to be reserved for the wealthiest of students while the rest are to be ‘MOOC’ed’?” He is particularly alert to the implications of the MOOC movement for diversity and for faculty life and labor, calling for careful attention to both of these subjects.

Rhoads is an extremely good writer, and MOOCs, High Technology, and Higher Learning is a useful book. Reflecting the time of its composition, the book tends to focus on the stand-alone course, developed by a single professor, for an undergraduate audience. He gives little attention to the burgeoning market for online professional degrees and certifications, some low residency, some entirely digital in their delivery. Nonetheless, the book has much to offer anyone seriously interested in digital learning technologies. Rhoads provides many useful policy recommendations about the levels of technological skill and content-based knowledge one might reasonably expect of particular populations of students, about diversity considerations, and about faculty labor and engagement. The book is full of helpful distinctions and probing questions, and it is not one page longer than it needs to be—a rare distinction.

The same, unfortunately, cannot be said for Richard A. DeMillo’s Revolution in Higher Education: How a Small Band of Innovators Will Make College Accessible and Affordable. DeMillo takes as his starting point the “magic year” of 2012, when his small band of innovators invented the MOOC. Whereas Rhoads is a splitter, DeMillo is a lumper, assimilating many rather different phenomena and projects into a master narrative. The shape of the narrative is familiar: “Colleges and universities are in financial crisis,” DeMillo writes. “Tuition rises inexorably. Graduates of reputable schools often fail to learn basic skills, and many cannot find suitable jobs. Meanwhile student-loan default rates have soared while the elite Ivy and near-Ivy schools seem remote and irrelevant.”

To rescue us from this terrible predicament comes a band of entrepreneurs who bring the technology revolution to higher education. Although DeMillo’s book is both wordy and digressive, it assimilates all of its anecdotes and figures into a breathless heroic narrative in which a group of pioneers will rescue higher education from itself. The book is full of melodramatic reportage but little differentiating analysis. DeMillo doesn’t probe why some projects—such as Columbia’s Fathom, the early-2000s online learning experiment that lasted less than three years—failed. And he conflates quite different enterprises, such as Coursera and the Minerva Project. He takes all of his pioneers at face value.

DeMillo is a bit of a magpie. His book has all kinds of anecdotes and pieces of education history, including the invention of the blackboard, Stanley Fish’s transformation of the Duke University English Department, and Edwin Slosson’s 1910 effort to rank colleges and universities, making the book both sprawling and single-mindedly didactic. For example, he uses the nineteenth-century introduction of the blackboard as an illustration of technological revolution and the hype cycle that accompanies it; he then gives us two pages on the history of the railroad in England to make a similar point, all of this to provide evidence for the transformational impact of technology in higher education.

Technology indeed is having and will continue to have a transformational impact on higher education with consequences for its structure that are difficult to anticipate. There is all the more need, therefore, for careful analysis and differentiation of the many projects and ventures in this space. Despite its prophecies of the future, DeMillo’s book is oddly dated; it seems a product of the year of the MOOC, and hasn’t moved beyond it.

Elizabeth Losh, the author of The War on Learning: Gaining Ground in the Digital University, clearly could not have read DeMillo’s book (they were published the same year), but it would have given her rich material for her analysis. Losh defines herself as a scholar of “digital rhetoric,” and she analyzes not only technological innovations in teaching and learning, but the rhetoric about them. Losh is a skeptic about what she sees as inflated claims for technological solutions to problems in teaching and learning. “The folly of overvaluing innovation” is one of her main themes.

She turns a critical eye to MOOCs, gamification, badge systems, and iPad distribution, citing empirical research that raises questions about their effectiveness in enhancing learning. She spends a lot of time on failures, trying to understand why various digital tools have not realized the benefits that have been claimed for them. However, she is not a Luddite. She’s hopeful about the opportunities that technologies present for learning, but resistant to totalizing claims and grand visions.

Like Rhoads, Losh puts dialogue at the center of her philosophy of learning. She believes that education is a process, not a product; it must be socially situated and interactional. She calls herself “a conscientious objector in the war on learning,” waged both by those who seek to use technologies to command and control students and by the advocates of DIY education, or “unschooling,” who seek to defund public education. Her critique of MOOCs resembles Rhoads’s: that MOOCs are didactic narrative structures devoid of dialogue. In a lively and amusing analysis, Losh examines the success of online cheating how-to videos in subverting conventional tests, observing that what’s right with YouTube culture is that it encourages “participation, creativity, subversion, and satire.”

Losh’s deconstruction of the term “digital natives” is particularly interesting. She argues that the cultural clichés on which the term relies—that all young people have access to networked digital technologies and know how to use them, that they are all connected by a common culture and a set of common practices, and that they all intuitively and easily master software—are both false and destructive, and lead to disenfranchisement. To expose this myth of the digital native, Losh repeatedly urges empirical study of digital learners and learning.

She asks, “How can we influence the digital university to be more inclusive, generative, just, and constructive?” She articulates six principles to answer her question: observing the golden rule in decisions about instructional technology by not employing knowledge-sharing methods that faculty themselves would find highly intrusive; having faculty and students use the same tools; preserving the value of old technologies; making digital learning joyful; making the occasion serious; and not embracing novelty as a value in and of itself. Like DeMillo’s book, Losh’s has its polemical moments, but it seeks to embrace the messiness of a movement in very rapid evolution, tries to understand failures as well as successes, and seeks to derive principles from both.

William F. Massy’s Reengineering the University: How to Be Mission Centered, Market Smart, and Margin Conscious is not fundamentally about the impact of digital technologies on higher education. As its title implies, the book is about institutional change and how best to achieve it. Massy believes that there is a lot wrong with the contemporary university, including massive failures in the marketplace that keep students from making optimal choices about which university to attend and that do not provide appropriate institutional incentives either for quality improvement or cost control. Other problems include the loss of political confidence in higher education; threats to traditional universities, which Massy describes as “industrialized higher education”; and lack of sufficient reform efforts by administrators, board members, and faculty.

Because Massy draws extensively on business administration and microeconomics, his language may make the book seem too wonkish for some and too corporate for others. But these would be superficial judgments. In his preface, Massy quotes a reviewer of his manuscript who describes him as “[h]alf hopeless romantic about the value and high purposes of higher education and half pragmatic engineer focused on cost, efficiency, and metrics.” Massy feels the reviewer got it right, and I agree.

Massy profoundly values the unique character and culture of universities, with their on-campus student bodies, faculty resources, research and scholarship, and nonprofit organizational form. And he understands their complexity: “They produce multiple and nuanced outputs, use highly specialized and often autonomous inputs, and pursue nonprofit goals that are not easily quantified or even described coherently in subjective terms.” Indeed, because he so values the university, he is all the more impatient with its resistance to change, its rigidities and complacencies.

Massy believes that the flaws in contemporary higher education fall into five main categories: the over-decentralization of teaching; “unmonitored joint production” of teaching and research; dissociation of educational quality from cost; lack of good learning metrics; and over-reliance on market forces. Universities are perhaps unique among organizations that offer a creative service to their consumers in the degree of independence that faculty—the providers of this service—have in their work. Massy puts it this way: “decisions about how much [teaching and research] to produce lie mainly in the hands of individual professors, with relatively little oversight from department chairs—let alone deans and provosts.” In the university that Massy envisions, there would be focused attention on the continuous improvement of teaching, dependent on learning metrics. (Massy doesn’t have much to say about research.) He advocates far more rigorous measurement of teaching activity and putting those data in the hands of faculty.

The topics covered in Massy’s chapters give a good sense of his focus—the reengineering challenge, the new scholarship of teaching, the cost of teaching, and financial planning and budgeting. He argues for a culture of continuous improvement, in which peer review is built into all reengineering efforts. His chapter on financial planning and budgeting is particularly useful, both in the dashboards it provides for presenting financial data and in its trade-off model for balancing mission and margin. I’m not certain that any reader not engaged in some way in running a university will follow Massy through his detailed exposition of tools and metrics, but for those readers who are so engaged, Massy’s book offers much valuable insight.

Lesson Plan: An Agenda for Change in Higher Education, by William G. Bowen and Michael S. McPherson, has a different audience in mind. This short text—only 140 pages—is perhaps the wisest of recently published books on higher education. The book does not pretend to original research; rather, it brings together what is known and can be inferred about the current situation in American higher education. Like Massy, the authors believe that university leaders and boards are not doing enough to come to grips with the challenges higher education is facing.

Not surprisingly—Bowen and McPherson are economists—Lesson Plan is particularly insightful about the cost of higher education. The authors argue that the central question about college finance is not how to make college “free” (or appear to be free), but how to share the costs equitably. They observe that paying for college has always been a joint responsibility of families, governments, and philanthropy; the question is how to share this responsibility fairly. They differentiate cost reduction from cost shifting, noting that costs have clearly shifted in recent decades from the state to the family, consequently limiting college opportunities for the least well-off.

Despite their critique of this cost shifting, the authors do not think affordability is the biggest issue in higher education today. Completion rates and the length of time to achieve a degree, particularly among students of low socioeconomic status, are more significant problems. The degree attainment gap between the wealthy and the poor is increasing and the rungs of the ladder to success have moved further apart. This is a consequence, in part, of reductions in public funding. Classes are less available, particularly at resource-challenged institutions, and student employment has risen dramatically as a result. The authors make a number of compelling recommendations about federal financial aid policy, arguing that it should be tied to timely college completion and to student success.

Ultimately, Bowen and McPherson argue that faculty roles in making decisions about resource allocation and in determining teaching methods need to be rethought. “Advances in technology require investments in teaching technologies and decisions about staffing patterns that more and more often transcend departmental and even institutional boundaries,” they note. “Aspects of governance structure need to evolve away from vertical models, centered on departments, to horizontal models that focus on achieving a combination of educational effectiveness and cost efficiencies.” They end their book with a call for stronger leadership on the part of university presidents and boards. Bowen and McPherson have written a wise book, and a hopeful one, which engages the many challenges facing higher education while charting a path forward.

Forum – Spring 2017

The infrastructure challenge

In “Infrastructure and Democracy” (Issues, Winter 2017), Christopher Jones and David Reinecke remind us that infrastructures have historically been inaccessible to many people in the United States, particularly those living in poor and rural communities. By tracing the development of US railroad, electrical, and Internet networks, the authors show that many infrastructures are not democratic by design, but made accessible through citizen activism and organizing. In the nineteenth century, for example, Americans demanded railroad regulation. In the twentieth century, communities self-organized to extend electricity to unserved areas. Today, as problems associated with aging infrastructure (crumbling bridges and dams) mount, the federal government is poised to pursue infrastructure spending dependent on private investment. If projects are motivated by revenue rather than the public good, it seems likely that historical problems of equity and accountability could be repeated in terms of what is built (toll roads, not water pipes) and who is served (affluent urban areas, not poor and rural communities).

By analyzing the infrastructure problems of the past, the authors provide an illuminating and much-needed perspective on the present. But as I read, I began to wonder if the ambiguity of the first word in the article’s title—infrastructure—might be antithetical to more public access and accountability. When the word infrastructure was adopted in English from French in the early twentieth century, it was a specialized engineering term referring to the work required before railroad tracks could be laid: building roadbeds, bridges, embankments, and tunnels. After World War II, it was reimagined as a generic bureaucratic term, referring to projects of spatial integration, particularly supranational military coordination (NATO’s 1949 Common Infrastructure Programme) and international economic development. It was not until the 1970s, if general English language dictionaries are indicative, that the broad current usage of the word—physical and organizational structures that undergird a society or enterprise—became stabilized. Paradoxically, that same decade saw the decline of the ethos of state-led infrastructure management and universal service provision in favor of privatization.

Infrastructure now refers to all kinds of projects built for purposes that include transportation, communication, security, health, finance, and environmental management. It wasn’t always so all-encompassing. In fact, neither nineteenth-century railroads nor early-twentieth-century electrical networks were called infrastructure during those eras. That said, my concern is neither historical anachronism nor vague terminology, per se, but the fact that the word infrastructure has displaced some alternatives, such as social overhead capital and public works, that emphasize broad access and the public good rather than generating revenue. Jones and Reinecke rightly emphasize that communities must mobilize, make demands, and hold providers accountable in order to democratize infrastructures. Society might also re-democratize its terminology. After all, why should a private oil pipeline and a municipal water system be labeled with the same French engineering term? Maybe we should even disaggregate the single word infrastructure, replacing it with a more heterogeneous and specific group of terms that foreground what is to be built and who is to be served. I am particularly fond of public works.

Ashley Carse

Assistant Professor, Department of Human and Organizational Development
Vanderbilt University

Advancing clean energy

The Winter 2017 edition of Issues presents an array of articles addressing the expected clean energy transition. In “Inside the Energiewende: Policy and Complexity in the German Utility Industry,” Christine Sturm provides a thorough critique of that nation’s energy policy, showing the problems it has caused for utilities and the financial costs it has imposed on consumers. While I have no doubt that those problems are real, she does not give enough credit to the German government for its efforts to address them.

As Sturm points out, Germany has numerous policies pushing for a transition to a low-carbon energy system, but the one that gets the most attention and that has most contributed to rapid deployment of wind and solar facilities is the feed-in tariff, which requires utilities to connect any and all renewable electricity generators to their grid and to pay those generators a premium price, set by technology, for the electricity they produce. Those premiums, though they have declined since 2000, are lavish by US standards and have pushed up the cost of electricity to ordinary consumers quite dramatically.
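
To make the cost mechanics concrete: under a feed-in tariff, the gap between the guaranteed premium and the wholesale market price is pooled and passed through to ratepayers as a per-kilowatt-hour surcharge. In stylized form (actual German premiums vary by technology and by a system’s vintage):

\[
\text{surcharge} \approx \frac{\sum_{\text{generators}} (\text{premium} - \text{market price}) \times \text{kWh fed in}}{\text{total kWh consumed}}.
\]

Germany’s renewable energy surcharge stood at roughly 6.4 euro cents per kWh in 2016, so a household consuming 3,500 kWh per year paid on the order of 220 euros annually on top of the underlying cost of its electricity.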

However, Sturm fails to mention that the German government has, for just this reason, moved away from feed-in tariffs, revising the Renewable Energy Sources Act over the past couple of years to replace them with an auction system. Renewable energy facilities that currently receive feed-in tariff payments will continue to do so for a fixed period of years, but new systems will operate under the auction mechanism, which was intentionally designed to slow the rapid pace of renewables deployment. Some German environmental groups have greeted this change with scathing criticism, furious at the loss of the feed-in tariffs for future renewable energy deployment. The change will not solve all of the problems that German utilities face, but it has at least begun to respond to some of their concerns. It’s enough to make me feel some sympathy for government officials.

Sturm closes with the comment that poets and thinkers should not tinker with large-scale technological systems—an attitude that misses a key point. If a system is not sustainable or has growing externalities, then it needs to change. Moreover, all such systems do change over time, and historically those changes have always been undirected, messy, chaotic, and sometimes violent affairs. Large firms in the energy system, left to their own devices, have shown little interest in making any but the most incremental changes to what they do, which is understandable given the immense capital investments they have in the existing system. The energy transition away from fossil fuels will not be quick, easy, or cheap, and everyone involved will make mistakes along the way. But I have much respect for government policies that try to push the system in a beneficial direction and move it along faster than incumbent actors might like.

Varun Sivaram’s article on energy technology lock-in and Kartikeya Singh’s article on solar energy in India give us more examples of how complex and difficult the clean energy transition will be and yet how progress can happen. All three articles make two important general points.

The first point is the simple reality of path dependency. The effects of new policies or of technological or business innovations depend greatly on the contexts in which they operate. Their success or failure will depend on the specific circumstances of the country and even locality in which they operate, from existing technologies to supporting infrastructure, from existing business structures to attitudes toward paying for energy. Rigid universal models will likely fail in this complicated world.

The second point is that government efforts to change large-scale technological systems will always produce unintended consequences. It is impossible to predict everything that will happen when a policy hits the complexity of the real world. The German energy system has experienced three large policy or contextual changes in rapid succession: German reunification; sweeping European Union regulations that forced all EU countries to liberalize their electricity systems; and the advent of the modern feed-in tariff in the 2000 renewable energy law. It would be remarkable if these changes had not produced unexpected problems.

The hallmark of good policy, as the political scientist Edward Woodhouse has argued for years, is not that it gets things right the first time, but instead that policymakers learn and adapt as complex systems react to policy changes in unexpected ways. All of the articles in this section strengthen that point and show us the kinds of analysis that can make policy better.

Frank N. Laird

Josef Korbel School of International Studies
University of Denver

In “Unlocking Clean Energy,” Varun Sivaram makes a number of important observations about innovation, deployment, and the scale-up process. His analysis highlights several well-described forms of “technological lock-in,” which can arise in any field when efforts to introduce new technologies also—intentionally or not—provide a tremendous boost to the emerging “darling” technologies of one era, thus inhibiting the next wave of innovations. Clear examples exist in military designs, fossil-fuel power plants, automobiles, and, as Sivaram notes with thoughtful examples, the clean energy space.

The question is not “is this dynamic real?”—it certainly is—but what to do about it given that we need to rapidly scale the national and global clean energy industry to not just provide a growing share of new demand, but to rapidly eat into the greenhouse-gas-emitting generation base. And “rapidly” means just that: even in nations that have begun the transition, the transformation must proceed at more than 5% per year, a huge feat that must be maintained through the mid-century “climate witching hour.”

A number of strategies can assist in this joint task of preventing lock-in and accelerating the change; I briefly highlight two that extend Sivaram’s argument.

First, there is no better medicine than investing in and nurturing “use-inspired” basic research and development (R&D). In work now almost two decades old, we found that underinvestment in energy research was not only chronic but that, paradoxically, waves of publications and patents (two different, and admittedly imperfect, measures of innovation) often preceded new rounds of funding. This finding argues for an array of approaches including, among others, building unconventional collaborations (for example, solar and storage innovators working with computer science and behavioral social science researchers); granting not only prizes but also market opportunities to novel technologies (sadly, often derided as “picking winners”); and finding ways to build a more diverse and inclusive research community.

Second, the focus should be on where clean energy needs to be not in five or 10 years, but in 2050. Lock-in results not just from the market advantage earned by the early entrant, but also from near-term goals that obscure the long-term objective. With a goal of 80% or more reduction in emissions by 2050, short-term transitions (for example, coal to gas without a strategy to then move rapidly off of gas, or conventional vehicles to hybrid vehicles instead of electric and hydrogen ones) can block subsequent ones. One valuable emerging tool is to develop energy (and water, and manufacturing decarbonization) planning models and integrate them into R&D planning.

In my laboratory, we have developed one such model of current and future power systems—SWITCH. The model can explore the cost and feasibility of generation, transmission, and storage options for the future electricity system. It identifies cost-effective investment decisions for meeting electricity demand, taking into account the existing grid as well as projections of future technological developments, renewable energy potential, fuel costs, and public policy. SWITCH uses time-synchronized load and renewable generation data to evaluate future capacity investments while ensuring that load is met and policy goals are reached at minimum cost. The model has been invaluable in working with researchers and governments around the world to understand decarbonization pathways where the mid-term objectives (for example, for 2020 and 2030) enable instead of hinder the long-term goals.
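
For readers unfamiliar with this class of models, the core of a capacity-expansion model is a cost-minimizing optimization: choose how much of each generation technology to build, subject to serving load in every modeled hour and meeting policy constraints. The toy linear program below is a minimal sketch of that structure only; every number in it is an illustrative assumption of mine, and SWITCH itself is far richer, with explicit dispatch, transmission, and storage decisions.

# Toy capacity-expansion linear program in the spirit of models like SWITCH.
# All costs, availabilities, and loads are illustrative assumptions, not
# SWITCH inputs; real models add dispatch, storage, and transmission choices.
import numpy as np
from scipy.optimize import linprog

techs = ["gas", "wind", "solar"]
cost = np.array([70_000.0, 110_000.0, 90_000.0])  # annualized $/MW of capacity

# Fraction of installed capacity available in two representative hours.
#                  gas   wind  solar
avail = np.array([[1.0,  0.3,  0.9],    # midday hour
                  [1.0,  0.5,  0.0]])   # evening hour
load = np.array([900.0, 1000.0])        # MW that must be served in each hour

# Simple policy constraint: gas may supply at most 40% of total output across
# the representative hours (gas is assumed here to run at full availability).
gas_cap_limit = 0.4 * load.sum()

# linprog solves: min c@x subject to A_ub @ x <= b_ub, x >= 0.
A_ub = np.vstack([
    -avail,                                    # serve load in every hour
    np.array([[avail[:, 0].sum(), 0.0, 0.0]])  # cap total gas output
])
b_ub = np.concatenate([-load, [gas_cap_limit]])

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(techs))
for tech, mw in zip(techs, res.x):
    print(f"build {mw:7.1f} MW of {tech}")

Solving this tiny instance builds just enough gas to stay under the policy cap and fills the rest of the load with wind and solar; scaled up to thousands of hours, technologies, and locations, the same logic is what lets a model like SWITCH trace least-cost decarbonization pathways.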

Models do not provide answers, but they clarify how investments in research and deployment can unintentionally prioritize near-term objectives over the true goal. Acting on those findings is the art of building incentives for innovation without inducing hesitation.

Daniel M. Kammen

Professor in the Energy and Resources Group, the Goldman School of Public Policy, and the Department of Nuclear Engineering
University of California
Science Envoy, US State Department

At this year’s World Economic Forum in Davos, many of us who work on low-carbon technologies came away brimming with excitement and optimism. Business and political leaders spoke of economic opportunity and job growth as benefits of combating climate change in solidarity with engineers and researchers, who have long advocated for clean energy. This Davos experience stands in contrast to some of the observations that Varun Sivaram makes in “Unlocking Clean Energy.”

While I fundamentally agree with Sivaram that investment in the ideation and innovation of new technologies is critical for long-term decarbonization, leading technologies, and the policy frameworks and investments that encourage these nascent incumbents, need not impede the next wave of innovation. In fact, I would argue that all of these critical pieces are needed to move the needle against climate change, for two important reasons.

First, a leading technology signals progress and technological advancement to business and political leaders. The growing market for these technologies, such as solar photovoltaics, signals to investors that this is a vibrant sector worthy of attention.

Second, having a winner enables us to focus on development and deployment of existing technologies, whose mass adoption is desperately needed now to fight global warming. Silicon photovoltaic module prices have fallen from $4 per watt in 2008 to $0.65 per watt today. This substantial cost reduction—and the wide-scale deployment that follows—is in large part due to concentrated development. In the absence of a market winner, this would not have happened.
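
That price trajectory implies a strikingly steep compound rate of decline. Treating the interval as roughly eight years:

\[
\left( \frac{0.65}{4.00} \right)^{1/8} \approx 0.80,
\]

that is, module prices fell by about 20% per year, compounded, over the period.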

Moreover, in free societies where innovation happens routinely through hard work, persistence, and oftentimes luck, successful innovators look beyond technology development. They effectively articulate new markets for their breakthroughs; that is, they creatively define how these emerging technologies are either solutions to as-yet-unaddressed problems and unmet needs, or are disruptive and superior to incumbent technologies. Solar is again a case in point. With silicon photovoltaic modules now commoditized for rooftop applications, start-up companies may be better off focusing on installation technologies to reduce balance-of-systems costs, storage technologies to overcome the intermittency of solar, or building-integrated transparent solar cells to increase energy efficiency and occupant comfort. Head-on competition for rooftop installations is still possible, but understandably harder given the ongoing societal and business benefits offered by silicon photovoltaics.

My institution, the Andlinger Center for Energy and the Environment, is among others that are seeding a portfolio of future technologies to combat climate change and are collaborating with industry to bring them to market. Policies that intentionally or unintentionally discourage the adoption of newer and superior technologies must go by the wayside as countries race to fulfill the ambitious goals of the Paris Agreement on climate change. With the two-degree warming threshold looming, the question foremost on our minds should be how fast we can curb and neutralize emissions. Innovation in longer-term decarbonization technologies has to be part of the equation, but so do measures that enable immediate and large-scale deployment of incumbent technologies.

Yueh-Lin (Lynn) Loo

Director, Andlinger Center for Energy and the Environment
Theodora D. ’78 and William H. Walton III ’74 Professor in Engineering
Professor of Chemical and Biological Engineering
Princeton University

Varun Sivaram makes an important point about the risks of standardizing low-carbon energy systems on suboptimal platforms. As he argues, it is vital to link together innovation and deployment policy, so as to continually improve relatively immature technologies.

That said, there is some risk in his approach of making the best the enemy of the good. High-carbon energy systems are even more deeply locked in than silicon solar photovoltaics or first-generation biofuels. Fossil fuels still power our civilization and, in doing so, sustain national governments, some of the biggest multinational companies, and many, many jobs. Any transformation of the energy system will need to reconfigure these institutions and interests so that enough of them gain enough from low-carbon energy innovation to support it rather than resist it.

That means that our thinking about breaking lock-in should extend beyond the relatively narrow technical approaches that Sivaram describes. The energy transition will involve building and sustaining new political constituencies and cultural norms as well as public and private R&D programs. Sometimes, this process may require compromises and “strange bedfellow” coalitions. As Sivaram points out, the technologies that dominate our current energy system, such as internal combustion engines, attained that status due to the political savvy and muscle of their champions as well as to their technically attractive features.

So while energy innovation policymakers should do all that they can to create protected niches for promising new technologies and bridges across technological generations to avoid lock-in, they may also need to live with lock-in if, at some point, less-than-perfect low-carbon energy technologies that are good enough from a climate protection point of view are able to capture hearts and minds as well as wallets.

David M. Hart

Professor, George Mason University
Senior Fellow, Information Technology and Innovation Foundation
Washington, DC

It is well known that the journey from laboratory research to full-scale deployment of any new energy technology can take 10 to 20 years. The advance from each stage in the innovation chain to the next is often a near-death experience. At the fundamental research stage, graduate students move on to find jobs, and professors can lose grant funding or turn their interests in another direction. At the proof-of-concept stage, lack of know-how to build integrated systems stymies even the passionate inventor. Getting resources for the scale-up needed to make the case to prospective investors requires more funding than is often available through traditional channels. Even technologies with lots of promise often can’t get the financial backing to compete with incumbent market participants. And shifting policies at state and national levels can change the playing field more quickly than the time needed to adjust to new business realities.

In fact, given all of the obstacles that good new ideas must overcome to swim upstream, it is all the more remarkable that we have as many new innovations as we do. We should applaud those that have succeeded. But as Varun Sivaram points out, in order to provide low-cost, reliable, and environmentally sustainable energy to everyone, we need more and faster innovation.

Historically, the energy innovation ecosystem has been highly fragmented: by stage in the innovation chain, by preference for one type of energy or another, by incumbent versus new market entrants, by institutional constraints or monopolies. The reasons for such fragmentation are too numerous to name.

To meet the global energy challenge, we need to fix the energy innovation ecosystem. Achieving this will require a dynamic network of relationships spanning science, technology, finance, markets, and the realm of policymaking. And efforts must include academics, entrepreneurs, innovators, the venture capital community, start-ups, large energy companies, policymakers, and, most importantly, the wellspring of talented young people entering the workforce every year. A thriving energy innovation ecosystem would hone and vet the best ideas, draw new technologies out of the universities, get faculty and students working on the real problems that industry faces, assemble the capital needed at all stages of the innovation cycle, and help create the policy framework for market pull for new technologies.

How do we cultivate this thriving energy innovation ecosystem? Most importantly, we need to get the energy industry back to the innovation table. Energy technology requires financial resources and scale-up know-how that exist only in industry. Venture capital plays a critical role, too, in de-risking emerging technologies, and at the same time needs certainty that it will be rewarded for doing so. Universities and national laboratories are needed to spawn the next generation of science and technology innovation and to educate a workforce with the requisite knowledge and skills.

We propose a new approach to cultivating this thriving energy innovation ecosystem, one that will bring all of the right players to the table and align incentives for success. We can create topically focused consortia to bring promising new energy innovations to fruition. The consortia would support the portfolio of pre-commercial R&D activities needed to get these new technologies to market. They can be cost-shared equally among industry, government, and the venture community, and held accountable for results. Policymakers and financial institutions need a seat at the table, too, to anticipate and pave the way for new market entrants. Those private-sector participants that take advantage of opportunities emerging from these consortia will be positioned to thrive in the rapidly evolving energy landscape.

This is not a new idea. Models such as this have worked before, in such areas as protecting national security and nurturing the growth of the semiconductor industry. Much progress has been made. But we need more energy innovation, faster, and with more certainty of success. This is an idea whose time has come.

Sally M. Benson

Arun Majumdar

Co-Directors, Precourt Institute for Energy
Stanford University

Kartikeya Singh’s article, “Of Sun Gods and Solar Energy,” presents a compelling narrative of solar energy in India. The author correctly identifies the need for more effective and coherent policy midwives to assist the birthing of solar across the country. We take the purpose of the article to be highlighting the challenges and opportunities surrounding solar energy, and in the same spirit we identify a few other key aspects that were either missed or deserve more attention.

First, the cost of capital for consumers and entrepreneurs continues to be too high, as banks remain reluctant to finance small-scale projects that do not promise high returns. Training banks in solar technology and service, bundling small projects together to reduce liabilities, and having the government or other highly regarded institutions provide guarantees can potentially ease these constraints on lending and open up new avenues for financing.

Second, as Singh demonstrates in the article, there remains a disproportionate focus on lighting solutions, as opposed to understanding how a whole host of other services can be integrated with solar, including heating, cooking, refrigeration, entertainment, and even mobility. The ultimate potential of solar lies in enhancing the real incomes of energy-poor consumers, and that requires much greater customization. Several case studies have highlighted the various socioeconomic benefits of off-grid renewable energy, but very few have explored the impact of rising incomes, the corresponding increase in energy demand, and how that demand might be met.

Third, the negative impact of centralized grid energy is grossly underestimated. A key factor expected to shape India’s economic growth story in the coming two decades is the perceived latent potential of its rural consumers, who account for more than 60% of the population. The current government policy on mini and micro grids in India tentatively aims to achieve an installed capacity of 500 megawatts from renewable energy (largely solar) by 2022, which, in the current scheme of reaching 100 gigawatts of solar capacity, is a pittance and suggests a highly disproportionate focus on utility-scale and urban rooftop solar projects. Singh rightly identifies the need for policy coherence in supporting the diffusion of renewables, but poor coordination across various government departments introduces further lag. Coordination among the departments of agriculture, renewable energy, and labor, alongside relevant state-level departments, is necessary for aligning policy and achieving rapid energy access.

Fourth, Singh identifies the importance of having strong local networks that can address the perennial distrust that rural populations often harbor, a result of policies and programs that over the years have over-promised and under-delivered. Indeed, such policies continue to view rural consumers as passive recipients of welfare rather than active consumers in a marketplace. Social entrepreneurs across India are gradually capturing this market, but unless policy creates more incentives, the off-grid solar market will forever remain a niche.

In sum, it appears that the sun gods will continue to shine long and bright in India, but while some bathe in their light, many remain in the shade.

Benjamin Sovacool

Chaitanya Kotikalapudi
University of Sussex, United Kingdom

Electric vehicle prospects

In “Electric Vehicles: Climate Saviors, or Not?” (Issues, Winter 2017), Jack Barkenbus presents a misleading assessment of the greenhouse gas impacts of electric vehicles (EVs). The article is mired in the present, but energy transitions are about the future. What’s important about EVs is their role in a future low-carbon energy system. But even the treatment of current EV emissions is flawed.

First, large-scale energy transitions take several decades. Second, any meaningful effort to mitigate greenhouse gas emissions must substantially decarbonize electricity generation. It takes decades for new vehicle systems to overcome the market’s aversion to risk, reduce costs via scale economies and learning by doing, create diversity of choice across vehicle types and manufacturers, build a ubiquitous recharging infrastructure, and replace the existing stock of vehicles. Technological advances are also needed and, so far, are ahead of schedule, according to assessments by the US Environmental Protection Agency.

The National Research Council’s 2013 report Transitions to Alternative Vehicles and Fuels concluded that a very aggressive combination of policies might achieve an 80% reduction in greenhouse gas emissions from 2005 levels for light-duty vehicles by 2050. In the most intensive scenario for battery electric vehicles (BEVs), plug-in electric vehicles achieved a market share of 10% by 2030 and 40% by 2050, by which time other policies could reduce grid emissions by 80% as well. The most successful scenarios also included hydrogen fuel cell vehicles, but that’s another story.

Regrettably, even the article’s evaluation of current EV impacts is flawed. Although the carbon intensity of electricity delivered to a BEV is relevant, so-called “well-to-wheels” emissions per mile is a superior metric, because it includes everything from primary energy production to the vehicle’s energy efficiency. Argonne National Laboratory’s GREET model compares “like to like” vehicles across a wide range of fuels and propulsion systems. Its well-to-wheels numbers rate a 2015 BEV using US grid average electricity at 201 grams of carbon dioxide per mile (gCO2/mile), less than half the 409 gCO2/mile of a comparable gasoline vehicle. BEVs powered by California’s grid average a 70% reduction compared with a conventional gasoline vehicle and a 59% reduction relative to a hybrid vehicle.

But most EV owners can purchase “green power” from their local utility. (Green power programs are audited to ensure that the renewably generated electricity truly displaces nonrenewable generation.) An EV using renewable electricity emits 1 gCO2/mile, a 99.8% reduction.
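
The percentages follow directly from the gram-per-mile figures cited. As a quick check:

\[
1 - \frac{201}{409} \approx 0.51, \qquad 1 - \frac{1}{409} \approx 0.998,
\]

so the US-average BEV emits about half as much as the gasoline comparator, and renewable electricity yields the 99.8% reduction cited; the 70% reduction on California’s grid likewise implies roughly \(0.30 \times 409 \approx 123\) gCO2/mile.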

With BEVs making up about 0.1% of vehicles in use today, current emissions by EVs are not entirely irrelevant. There are regional variations, so it makes sense to aim the strongest policies at regions with the cleanest grids and moderate temperatures, such as California, where roughly half of EVs are sold.

Today’s electric vehicles are cleaner than gasoline vehicles almost everywhere in the United States and the European Union. With green power, they can be 99.8% clean. And before EVs can become a large fraction of the vehicle stock, there is ample time to substantially decarbonize electricity generation. Every electric vehicle sold today is another small step toward a sustainable global energy system.

David L. Greene

Senior Fellow, Howard H. Baker Jr. Center for Public Policy
Research Professor, Department of Civil and Environmental Engineering
University of Tennessee, Knoxville

Jack Barkenbus’s article on electric vehicles (EVs) illustrates the importance of moving away from coal-fired generation of electricity. The electrification of transportation and the generation of electricity with wind and solar energy are both important. In a book that my colleagues and I coedited, Solar Powered Charging Infrastructure for Electric Vehicles: A Sustainable Development, we explore the concept of covering parking lots with solar panels to provide shaded parking and an infrastructure for charging EVs. The shade has economic value on hot summer days because high temperatures can shorten battery life; it also keeps a parked car cooler, which has social value.

The effort to improve urban air quality is one of the most important drivers of the transition to EVs and renewable electricity in numerous cities in California, as well as in London, Beijing, New Delhi, and many other cities in the United States and around the world. In many urban communities, improving air quality by reducing combustion is a very high priority, and the emphasis is on electrifying transportation and adding new wind and solar generating capacity to replace coal-fired power plants. Adding solar-powered charging stations in parking lots enables EVs to be charged while their drivers are at work or at an event. If 200 million parking spaces were covered with solar panels in the United States, approximately 25% of the electricity generated nationwide, based on 2014 levels, could be produced with solar energy.
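
The 25% figure is at least the right order of magnitude on a back-of-envelope basis. Assuming about 18 square meters of panel per parking space and US insolation yielding on the order of 250 kWh per square meter of panel per year (both assumptions mine, not drawn from the book):

\[
2\times10^{8} \times 18\ \mathrm{m^2} \times 250\ \mathrm{kWh/m^2} \approx 9\times10^{11}\ \mathrm{kWh} = 900\ \mathrm{TWh},
\]

or roughly 20-25% of the approximately 4,100 TWh of electricity the United States generated in 2014.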

The American Lung Association’s report State of the Air 2016 documents that progress is being made in Los Angeles and many other California cities where there is a significant effort to electrify transportation. The Los Angeles area achieved its lowest level ever for year-round particle pollution, based on data from 2012, 2013, and 2014. For ozone, Los Angeles had its lowest number of unhealthy days ever. But problems remain. According to the report, 12 of the 25 most polluted cities failed to meet national air quality standards for annual particle pollution. Roughly 166 million people in the United States live in counties that experience unhealthful levels of particle or ozone pollution, or both.

Many sources describe even greater levels of particle pollution in Beijing and New Delhi. China is moving forward with the electrification of transportation, but the effort is new and air quality is still very poor in Beijing, Tianjin, and several other large cities. From December 30, 2016, through early January 2017, Beijing experienced a stretch of extremely bad air pollution, according to a report in the January 23, 2017, issue of Chemical and Engineering News. Transitioning to EVs and solar-generated electricity in large cities would improve urban air quality in these locations and reduce greenhouse-gas emissions.

State of the Air 2016 points out that climate change has increased the challenges to protecting public health because of wildfires and drought that impact air quality. Deaths from asthma, chronic obstructive pulmonary disease, and cardiovascular disease occur when air quality is poor. According to the World Health Organization, there are about 6.5 million deaths each year because of air pollution.

We have the science and technology to reduce greenhouse-gas emissions and improve urban air quality by transitioning to EVs, solar-powered charging infrastructure in parking lots, and renewable generation of electricity. This transition has already started, and there is significant progress in some parts of the world, such as Norway. Electric buses are being purchased and put into service in many large cities. Given the benefits of better air quality and reduced greenhouse-gas emissions, decision makers should move forward with programs and policies that accelerate this great transition. Individuals can do their part by leasing or purchasing an electric vehicle, adding solar panels to their home, or both.

Larry E. Erickson

Professor of Chemical Engineering
Director, Center for Hazardous Substance Research
Kansas State University
Manhattan, Kansas

Putting technology to work

In “A Technology-Based Growth Policy” (Issues, Winter 2017), Gregory Tassey calls for the science and technology policy community to make an effort not only to understand the central role of technology in a global economy but also to help translate its understanding into policy prescriptions needed to leverage productivity growth. This call comes at an auspicious time. The US economy continues to struggle to attain a structural—and hence long-lasting—recovery from the Great Recession. And such a call is not new to the policy arena. So one might reasonably ask: Why does an emphasis on a growth policy to leverage productivity growth seem to fall on deaf ears?

Perhaps there are numerous explanations, but let me concentrate on only one. Technology is indeed the core driver of long-run productivity growth, as Tassey artfully points out. But if today’s technology-focused additional investment dollar will have an impact only in the long run, then that dollar might garner greater political capital if allocated toward more visible short-run projects. Beyond political expediency, a more fundamental problem might be the difficulty of explaining to congressional constituents the merits of investments in long-run growth policies relative to investments in short-run stabilization efforts.

The logic behind the importance of US research and development (R&D) intensity returning to levels that rival those of some European and Asian countries is subtler than simply keeping up with our global competitors (Tassey’s Figure 3). To draw from W. Brian Arthur’s The Nature of Technology, the importance of continued investments in R&D rests on the fact that new technologies are combinations of previous ones. A nation must continually grow its technical knowledge base because, as those who are students of the technology revolutions that Tassey notes will attest, breakthrough technologies do not fall like manna from heaven. Rather, they have at their origin the accumulation of the knowledge base of prior technologies.

To whom is the nation to turn to enrich this knowledge base? Many policymakers have long known at least one answer to this question: small entrepreneurial firms. Policymakers need only remember President Jimmy Carter’s 1977 Presidential Domestic Policy Review System and his 1979 directive to Congress in which he singled out the important role of small technology-based firms in our economy and thus in economic growth. There are volumes of academic research to support the growth-related role that small entrepreneurial firms play.

So, perhaps one possible step toward the type of R&D-based growth policy that Tassey is calling for would be increased efforts to stimulate R&D in small entrepreneurial firms, which in turn will add to the evolution of a knowledge base on which subsequent technologies can be built. The tendency of venture capital in recent years to favor software, biotech, and services start-ups, for example, is now inhibiting the ability of entrepreneurs in other “hard” technology fields, such as energy, to scale up for production. This could be a fruitful area for future policy discussions.

Albert N. Link

Virginia Batte Phillips Distinguished Professor
University of North Carolina at Greensboro

Gregory Tassey makes a convincing argument that R&D investment in technology-based productivity growth is critical for long-term economic competitiveness. He points out, correctly, that the science and technology community often struggles to effectively make the case for the variety and scale of R&D investment required for the challenges of the global tech-based economy. In his article, Tassey makes an invaluable contribution to addressing this problem by identifying and characterizing key categories of technology-related economic assets prone to market failures. The article should, however, be taken as a “call to action” for the science and technology community. More work needs to be done to translate understanding of particular technology innovation processes and systems, economic spillovers, and risks beyond just “general anecdotes and descriptions” into a practical and holistic economic growth strategy—one with targeted policies and a coherent evidence base, which can address specific market failures.

One reason so many economists fail to appreciate the central role of technology in economic growth is that the sources of productivity improvement and market failure identified by Tassey occur within economic “black boxes.” An advanced theory of technology-based productivity growth, which can underpin more effective evidence gathering and policy development, will require opening up these boxes. Economists and policymakers will need to work with scientists, technologists, systems engineers, and operations management researchers, among others, to integrate more detailed understanding of the complex systems nature of technology-based products, advanced manufacturing systems, and global value chain networks.

Tassey’s arguments are not only important; they are becoming increasingly urgent. As competing economies build comparative advantage, acquiring new capabilities to innovate high-tech products and develop ever more advanced manufacturing systems, countries such as the United States can no longer rely on the strength of their science and engineering research base to drive competitive productivity growth. The old twentieth-century model whereby technological innovation is driven by a small number of countries (those with elite research universities and major R&D-intensive corporations that dominate supply chains) is rapidly disappearing. The pace of technological innovation and increasing competition means advanced economies no longer have comfortably long “windows of opportunity” to translate new knowledge from research into manufacturing, or for supply chains and skills portfolios to reconfigure around high-value economic opportunities associated with emerging technology-based products.

In this new era, a high-quality research base driving innovation, together with monetary and fiscal policies stimulating demand, may just not be sufficient to compete in the global tech-based economy. The capability to rapidly translate novel emerging technology R&D into manufacturing, and the ability to coordinate the complex manufacturing systems into which these technologies diffuse, may become the critical factors for enabling national economic value capture. Tassey’s argument for a technology-based growth policy—focused on coordinated investment in technology platforms, innovation infrastructure, institutions, and human capital—is convincing, timely, and urgent.

Eoin D. O’Sullivan

Babbage Fellow of Technology & Innovation Policy
Director, Centre for Science, Technology & Innovation Policy, Department of Engineering
University of Cambridge
Cambridge, England

Watch what you write

In “Journalism under Attack” (Issues, Winter 2017), Keith Kloor describes how issue advocates unwilling to concede basic facts worked to delegitimize him for reporting the truth and correcting the record. The parallels he draws to the current political discourse and attacks on the media ring all too true.

I could write a similar article about the importance of speaking truth to power—as well as the eternal tendency to shoot the messenger—from a scientist’s perspective. Perhaps in this, scientists and serious journalists such as Kloor have much in common.

As a scientist who has often presented scientific information in a policy setting, most often on issues related to marine resources, I find that evidence is sometimes not only inconvenient but unwelcome. In some cases, that results in scientists becoming a target in a way not unlike those that Kloor describes concerning his reporting on the efficacy of vaccines or the impacts of genetically modified crops.

On contentious issues, there is a well-worn tactic of ascribing deep, dark motives to scientists (and journalists), suggesting bias and manipulation of the facts. Those making the accusations, of course, assume an unwarranted veil of objectivity and independence.

A case in point is one of the many battles over climate change concerning an alleged slowdown in the rate of global warming since 1998. Clear scientific evidence based on several separate studies and using multiple datasets (for example, from the National Oceanic and Atmospheric Administration; the University of California, Berkeley; and the United Kingdom’s national weather service, called the Met Office) demonstrates that no such slowdown occurred and that warming has continued apace. But global warming conspiracy theorists, including Lamar Smith (R-TX), chairman of the Science, Space, and Technology Committee in the US House of Representatives, regularly “discover” new plots by scientists, believing that if they find that one smoking gun, then climate science will fall like a house of cards.

Consider the source. Chairman Smith and his cohorts are closely aligned with the major industries responsible, according to the evidence, for much of that warming. So the idea that he, or they, are objective while scientists measuring climate are somehow conflicted seems at best odd and, less charitably, absurd. But Chairman Smith continues to use his powerful position to push his “alternative facts” in public discourse.

As attacks become stronger and more unreasonable, retreating into a protected space is appealing. But the response cannot be to shy away from issues. Facts still matter, especially when they are vociferously denied. Both scientists and journalists should continue to investigate emerging issues and present evidence and the interpretation of that evidence, and speak up loudly for peers who are subject to unfair treatment. That their results are challenged or even denigrated makes the job even more important to a broader public. How else can the “court of public opinion” even function?

“For all our outward differences, we, in fact, all share the same proud title, the most important office in a democracy: citizen,” President Obama said in his farewell address. In other words, it is we who hold the real power in our country. So let scientists and journalists continue to speak truth to power.

Andrew A. Rosenberg

Director, Center for Science and Democracy
Union of Concerned Scientists
Washington, DC

The philosopher’s view

In “Philosopher’s Corner: The End of Puzzle Solving” (Issues, Winter 2017), Robert Frodeman issues a challenge to scientists: switch from puzzle solving to problem solving. Whereas puzzles are defined by a disciplinary matrix, problems are presented to us by the world outside of academe. By insisting that scientists not only solve their own puzzles, but also address the world’s problems, Frodeman asserts that “the autonomy of science has been chipped away, and its status as a uniquely objective view on the world is widely questioned.” If what we experienced after the end of the Cold War was a gradual erosion of the place of science in society, the recent elections in the United States and the rise of populism across Europe throw these changes into sharp relief. As Kevin Finneran suggests in the same issue (“Editor’s Journal: Take a Deep Breath”), now is a time for self-reflection.

Although Frodeman lists several topics for reflection (gender policies, CRISPR, and the nature of impact, among others), all of which present interesting ethical, legal, and societal issues, I think we have a larger problem that deserves our full attention: the current reward system in academe is designed to encourage puzzle solving rather than problem solving. Engagement with policy issues in science and technology is treated as an add-on to the “real” work of scholarly publishing, or even as an unnecessary distraction. (Teaching, of course, is treated as a necessary evil.) Unless we restructure the academic reward system to encourage, rather than to punish, problem solving, scientists (and, yes, philosophers) will continue polishing the brass on the Titanic. It is less that we need “a new skill,” as Frodeman suggests, and more that we need a new goal. The end (telos) of puzzle solving needs to be replaced; and if we are now to pursue a different goal, we need to restructure the academic reward system to reflect—and to encourage—the change.

Britt Holbrook

Assistant Professor, Department of Humanities
New Jersey Institute of Technology

Technocracy Chinese style

The topic that Liu Yongmou takes up in “The Benefits of Technocracy in China” (Issues, Fall 2016) concerns many Chinese intellectuals. He finds, at the level of principle, certain similarities between technocracy and China’s current political system, and his argument that China’s political system is a “limited technocracy” is enlightening. Nevertheless, it is not quite appropriate to use technocracy, a Western concept, to describe the role of technocrats in decision making under China’s actual conditions.

Currently, although China doesn’t have a Western-style electoral system like those of capitalist countries, the Communist Party of China (CPC) has formed a strict and effective mechanism for selecting and appointing the officials who rule the country, the most significant feature of which is that it inherits the Confucian ideal of “exalting the virtuous and the capable.” The views of Henri de Saint-Simon, whom Marx and Engels criticized as a “utopian socialist,” and of Thorstein Veblen are closely related to technocracy, yet both are quite different from the CPC’s ruling system. On June 28, 2013, President Xi Jinping set out five criteria for good officials in the new era: “faithful, serving the people, hard-working and pragmatic, responsible, and incorruptible.” Among them, the first is having faith in communism and Marxism and sticking to the Communist Party’s fundamental theories. The CPC and the government select high-level talent based more on political acumen and comprehensive skills, not favoring just those with scientific or engineering backgrounds. President Xi himself had an engineering background, but in order to build his comprehensive skills, he turned to the humanities as a postgraduate.

Historically, the rise of technocrats with engineering backgrounds reflected the CPC’s respect for knowledge and intellectuals. It was indispensable to China’s industrialization, and it resulted from a shortage of humanities education in Chinese history. During the Mao era (1949-76), the humanities in Chinese education were greatly constrained by ideology. Back then, in desperate need of developing heavy industries, China followed the Soviet Union’s model of specialized education and enrolled mostly science and engineering majors. During the 10 years of the Cultural Revolution, senior intellectuals from the Republic of China era were suppressed as “reactionary academic authorities.” As a result, the whole education system was paralyzed, triggering a severe shortage of talent. The new technocrats with good political backgrounds accumulated political, knowledge, and cultural capital. In the post-Mao era after the Cultural Revolution, Deng Xiaoping declared that science and technology constitute the primary productive force, highlighting the need to build a knowledgeable and young leadership team. “Red engineers” who had good political backgrounds, had received higher education, and had worked at the grassroots for many years joined the Communist Party and were soon promoted. Names on this list include Jiang Zemin, Hu Jintao, and Wen Jiabao. Since the reform and opening up began in 1978, the humanities have received renewed attention from the Chinese government and have been restored. Enrollment of students majoring in the humanities skyrocketed, and many of those graduates entered politics. The incumbent premier, Li Keqiang, was an outstanding law student at Peking University from 1978 to 1982. Compared with technocrats armed with engineering knowledge, Li represents a new generation of leadership, or, as the American political scientist Robert D. Putnam put it in 1977, “those with economic technical knowledge.”

Now, more than 30 years into the reform of the national science and technology system that began in 1985, expert consultation has become prevalent practice across all kinds of government departments in China. A typical example is the National Office for Education Sciences Planning, made up of experts from different departments of the State Council. Also worth mentioning is the fact that the role of technocrats in social, political, and economic decision making is still influenced, to some extent, by ideology. Technocrats can play their fullest role only when their opinions are perfectly aligned with those of the CPC; otherwise, their influence is weakened.

To conclude, although China’s current political decision-making system and Western technocracy share some similarities in valuing experts and knowledge, the two are fundamentally different. In recent years, the Chinese government has made innovation a basic strategy, attaching increasing importance to innovative talent in science and technology. With gradual social progress and the development of civil society, we believe, technocrats will play an ever larger role in every aspect of Chinese society.

Zhihui Zhang

Associate Professor, Institute for History of Natural Sciences
Chinese Academy of Sciences
Beijing

Are Moonshots Giant Leaps of Faith?

We need only compare our standards of living with those of a few generations ago, when vaccines or air travel were not widely accessible, to obtain a sense of how much the advancement of science and technology has been a boon for society. And when we realize how much of that advancement has been sponsored by the government, we develop an intuitive support for public funding of research. Although the history of federal research funding is primarily one of incremental increases, a few waves of enthusiasm have generated large surges of support for certain projects. A critical question is whether these large commitments of public resources have generated proportionally large societal benefits. Are we better off with them? Are they even necessary?

Presidents have thrown their support behind major science projects because of their promised society-wide benefits or their perceived political advantages. The most recent is President Obama’s initiative to end cancer, which echoes President Nixon’s first war on cancer in many respects. President Bush proposed a return to the moon, and President Clinton placed a winning bet on the development of nanotechnologies and a risky one on doubling the budget of the National Institutes of Health (NIH). Indeed, budget jumps are regularly proposed by policy entrepreneurs who advocate for a leap forward in scientific knowledge, the speedy development of a promising technology, or, less frequently, the building of administrative capacity in a research agency to meet some future social challenge. Fitting all three of these aims is the most emblematic project of them all: the Apollo program. It is because of the remarkable success of that program, politically as well as technologically, that we refer to this sort of policy proposal as a moonshot.

If we resist the temptation to assume that more is always better—and therefore that much more is much better—what do we really know about the effects of surges in research budgets? In other words, is every moonshot a giant leap forward for mankind? This simple and obvious question has received surprisingly little attention, and I offer below some thoughts and considerations for policy analysts and policymakers who may wish to tackle it.

Budget punctuations can be appraised at three levels: societal effects, knowledge production, and impact on the research bureaucracy. I propose some evaluative criteria for each of these categories and offer some preliminary policy recommendations.

Societal effects

Do moonshots pay off for society? An answer entails three things: one, a rigorous imagination of the universe without the moonshot (counterfactuals); two, a measure of the distance between that alternative world and ours for every key aspect that we can meaningfully connect to progress or betterment for society (outcome measures); and three, a clear understanding of how much publicly funded research contributes to those measured facets of progress (causal links). Answering our question is no small challenge because even if we could produce good counterfactuals, our current outcome measures are very limited in scope, and our best speculations of causal links are highly uncertain.

A significant effort in the scholarship of innovation has been devoted to developing good indicators of the broader impact of research—that is, good outcome measures. Most of that effort has focused on the impact on science itself—such as the number of publications and patents and their dissemination, and the generality of the findings—but some attention has been given to societal effects. Among indicators of social impact, economic ones have dominated the discussion for decades, even as recent research has shown the noneconomic value of scientific and technological advancements to be quite significant. Still, aggregate measures of income and employment continue to attract most of the attention from policymakers who authorize research and development (R&D) leaps. We should ask, then, if they are adequate to assess moonshots.

The contribution of scientific and technical knowledge to those economic aggregates is usually discerned by conceptualizing technical knowledge as a factor of production—such as labor, capital equipment, or land—and estimating its impact on the economy using a model of the production function, which is a stylized representation of economic activity. Alternatively, technological knowledge is conceptualized as something like managerial skill or leadership, a feature of production that enhances the productivity of all productive factors. That approach also requires a model of the economy and, like the knowledge-as-input approach, is highly sensitive to the way we design the model. Models are useful devices for explaining complex systems, but the abstractions necessary for their construction make them objects of constant scientific debate, if not dispute. In other words, our current knowledge of the economic impact of new technical knowledge is highly contested and uncertain.
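To fix ideas, here is a minimal sketch of the two conceptualizations in a generic Cobb-Douglas form (a textbook simplification, not the specification of any particular study). The knowledge-as-input approach adds a stock of technical knowledge R alongside the conventional factors; the knowledge-as-productivity approach folds it into the multiplier A:

Y = A K^α L^β R^γ    versus    Y = A(R) K^α L^(1−α)

where Y is output, K is capital, and L is labor. The contested estimates are, in effect, estimates of the exponent γ, or of how A responds to R, and they shift with every modeling choice.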

The ubiquitous indicator of job creation is equally problematic. Should we count only the jobs directly created, or also the jobs supporting or adjacent to the R&D project? What about the jobs created by companies spun off from the project? Even if the total effect of research on job creation is traceable, the net effect would be far more difficult to estimate: how many destroyed jobs can be directly attributed to a single technology? The total net-new-jobs figure, if tenable, must then be adjusted by a factor of job quality. Old jobs and new jobs are not the same; there are wage differentials and changes in job security that must be accounted for. In the service sector, for instance, some technologies have replaced decently paid and stable clerical jobs, such as those of accountants or travel agents, with computers and armies of temps.

Figures such as the number of new drugs approved by the Food and Drug Administration are often proposed as alternative outcome measures to economic aggregates. But drug approvals are a mixed indicator of the social impact of research because the mere existence of new drugs does not necessarily advance the public interest. The drugs could be so expensive that they would be affordable to only a tiny fraction of the people who need them, or their high cost could create inflationary pressure in the whole health care system, pushing insurance premiums up and thus hurting those who can barely afford health insurance.

Spillover effects and externalities are also commonly suggested as economic outcomes of research. These are effects on actors beyond those directly involved in research. For instance, public funding of research increases the quantity and quality of national research, and this increase has the spillover effect of enhancing science education. The most important spillover of publicly funded research is when it is taken up by private-sector innovation. Spillovers are a true effect of research and by some accounts not an insignificant one—estimates range from 15% to 40% of the excess return to firms in the whole economy. However, spillovers are hard to measure at the project level because of how diffused they are across the innovation system. In other words, estimating the spillovers from Apollo, the doubling of the NIH budget, or either “war on cancer” could very quickly become an intractable problem.

Compounding the problem of incomplete measures of societal outcomes from moonshot research is how little we know about the societal impact of overall R&D spending. In other words, our theorized causal links are highly speculative. The two major schools in the economics of innovation take alternative approaches to causal explanation. The neoclassical tradition uses models of economic aggregates, and most estimates of the impact of R&D on the economy build from the work of Nobel laureate Robert Solow, who used a residual measure of output to approximate the effect of technological change. Solow’s residual is the contribution of factors not included in the model or, as economic historian Philip Mirowski puts it, a measure of our ignorance of what really drives economic growth. The evolutionary economic tradition, in turn, explains innovation as an adaptive process of individual firms and industries and therefore differentiates the returns to R&D investment across economic sectors. For example, forestry and computers are not likely to see the same return from R&D.
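In growth-accounting terms (a textbook identity, included here only to make the residual concrete), Solow’s residual is what remains of output growth after the share-weighted growth of the measured inputs is subtracted:

ΔA/A = ΔY/Y − α (ΔK/K) − (1 − α) (ΔL/L)

where Y is output, K is capital, L is labor, and α is capital’s share of income. Everything attributed to “technological change” lives in that leftover term.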

Economy-wide estimates of R&D returns are, by construction, not useful to assess returns at the level of projects or federal agencies, but can industry-specific estimates be useful? Perhaps the most cited estimates by industry are from a 1977 study by Edwin Mansfield and his colleagues, which found a median social return of 56% and a median private return of 25% in a relatively small sample of industries. Those estimates are highly sensitive to model assumptions—such as how fast the competition would have produced the innovation instead of imitating it, or whether the innovations improved or displaced existing products—and Mansfield himself warned that these “results should be treated with considerable caution.” It remains an open question whether this approach could be adapted to evaluate the economic impact of moonshots.

What’s more, even if we could arrive at consensus on a single method of estimating returns from R&D investments, as we have for quantifying economic growth, we could then ask the really difficult questions: Is that productivity different for public and private R&D? Is the yield constant in time or highly sensitive to volatile political and economic variables? Does the yield vary for different time horizons in which effects compound? And perhaps the most pressing question for budget leaps: How widely does the yield vary for each possible allocation of the federal R&D portfolio?

Still another challenge to understanding the societal impact of moonshots is to find the proper method for the construction of counterfactuals. Evolutionary economists note that economy-wide returns may be misleading, but are industry-specific estimates a good starting point to assess the economic impact of large R&D projects? Are these events so unique that each must be studied separately? The original moonshot was and still is the symbolic height of US technological prowess, and during the Cold War it was a major victory over the Soviet Union. Ironically, it was not a victory of the free market; rather, it was one of central planning and government sponsorship. But that is beside the point. It was a show of strength on the international stage as well as a needed victory in domestic politics at a time when social tensions had called into question the national character. The space program was not merely a symbolic victory; it promoted significant development in the defense industry that later found application in civilian technologies of widespread use. The production of a counterfactual in this case would be a daunting task for the historian’s imagination. But even leaving this black swan aside, we cannot assume that moonshots are all homogeneous in their effects. Their industry-specific impact is only one facet of their uniqueness; they are also politically and administratively unique. The federal agencies that sponsor them serve different missions and respond to different political dynamics. Moonshots, it appears, are unique historical events rather than a class of phenomena; consequently, individual case studies are more likely to yield plausible counterfactuals for each event.

Do moonshots pay off for science?

The question of whether budget leaps lead to scientific leaps is subsidiary to the larger question about the pace and direction of scientific advancement under government sponsorship.

The effect of government funding on the pace of scientific advancement is often imagined as a production question, where a proportionality of outputs to inputs is assumed. More precisely, the question becomes how much additional output is obtained for every additional tax dollar invested in research. Given that the coin of the realm is peer-reviewed publications, counts of published papers are the most common output measure. More sophisticated measures adjust publication quantities by some factor of quality, such as forward citations. The problem is that these productivity ratios are at best suggestive of scientific advancement and at worst misleading. Consider the problem of disciplinary fads. A paper that is hyped at publication can generate great citation excitement before revealing itself to be a scientific cul-de-sac. A parallel problem resides in the political economy of publications: editors of high-impact-factor journals give preference to eye-catching research, thus inflating the premium for sexy topics and inadvertently skewing the allocation of talent away from more pedestrian but productive research programs. Measures of quality-adjusted quantities also suffer from serious methodological limitations, such as the problem of truncation. When the density of forward citations is used as a measure of quality, we must truncate the number of years considered after publication. Thus, two papers of equal quality that display different maturation timelines will appear to be of unequal quality.
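The truncation problem is easy to see with hypothetical numbers. The sketch below (illustrative Python, with invented citation counts) compares two papers of equal lifetime impact, one cited heavily right away and one that matures slowly; a three-year citation window makes them look wildly unequal.

```python
# Illustrative only: invented yearly citation counts for two papers of
# equal lifetime impact, differing only in how quickly citations arrive.
fast_paper = [20, 15, 10, 5, 3, 2, 1, 1, 1, 1]  # cited heavily at once
slow_paper = [1, 2, 4, 8, 12, 12, 8, 6, 4, 2]   # a slow burner

assert sum(fast_paper) == sum(slow_paper)  # equal "quality" by lifetime count

window = 3  # truncate forward citations at three years after publication
print(sum(fast_paper[:window]))  # 45 -> looks outstanding
print(sum(slow_paper[:window]))  # 7  -> looks mediocre
```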

An elite group of bibliometricians has adopted a rather humble position in this respect under the banner of the Leiden Manifesto. The group acknowledges the limitations of indicators of science productivity, particularly when they are used to guide science management decisions, and recommends using quantitative indicators only in combination with other forms of expert judgment, even if those are more subjective assessments of quality. This epistemic modesty is of course a sign of wisdom in the community of quantitative analysts, but it poses a moral hazard problem because the final judgment on the productivity of a field of research is reserved to the experts of that field themselves.

The solution is to anchor the measurement of the productivity of science outside the boundaries of science itself. The measuring rod must be some instrumental use of science. The proof of the pudding is in the eating, and we indeed eat, consume, and use technology. Technology is a good measure of the advancement of knowledge because technology encapsulates cause-effect relations that matter to people other than scientists. Technology is, of course, not the only instrumental use of science. Science cultivates the habit of rational thought among students. In addition, people defer to scientific authority (and the bureaucracy that implements it) on matters as crucial to their lives as basic sanitation, the safety of food or medicines, weather forecasting, and nutrition. Decision makers in the public and private sectors also defer to scientific expertise on matters of import such as the unemployment rate, the speed of epidemic outbreaks, or estimates of natural oil and gas reserves. Although the pedagogical and cultural value of science is no less important than its partnership with technology, changes in technology are easier to measure.

Patents are better indicators of instrumental uses of research than are publications. Therefore, patenting by scientific researchers should offer an adequate first approximation of the productivity of research. A few caveats are nevertheless in order. First, only a fraction of the universe of ready-for-use research is patentable; therefore, patenting activity should be considered the lower bound of any estimate of research productivity. Second, some portion of research-based patents is spurious. This could be due to universities’ eagerness to signal productivity to their political patrons, to researchers responding to their employer’s incentives for promotion, or even to industrial patenting in which firms take title to patents not for their commercial potential but as bargaining chips in litigation with their competitors. Third, even when ready-for-use research is patentable, the findings may not be patented for lack of commercial interest; effective technologies may not be marketable because of the modest purchasing power of those who would demand them. To the extent that the noise of spurious patenting can be separated out, patenting activity could be a useful signal of the productivity of the research spurred by a moonshot.

The impact of research funding leaps is felt not only in the productivity of research but in the process of its production itself. The production of technical knowledge is labor intensive and requires well-functioning organizations to train, employ, and support that labor force. A significant part of the impact of budget leaps on research is precisely their effect on the organization of science. The training of scientists is a long process, taking in principle four years of undergraduate work and five years of graduate school; in practice, the average time spent in graduate school to earn a PhD is more than seven years, and several more years of postdoctoral training are the norm before individuals are able to conduct independent research. A budget leap for research in universities and national laboratories translates into a sudden expansion of existing research groups, which must hire more doctoral students and postdocs. If the level of public support is not sustained for at least a decade, the large number of young researchers hired in the year of the leap are likely to face a very tight labor market when they try to launch their careers. This is precisely what occurred with the NIH doubling: young researchers found themselves stuck in low-paying postdoctoral positions for many years, and the majority were not able to find tenure-track research positions.

Another hangover effect of budget leaps is the drop in the rate of grant approval and the overburdening of the peer-review system. The population of researchers grows with the leap in funding, and this larger group then has to compete for relatively flat or declining post-leap funding. In the aftermath of the NIH doubling, the success rate for grant applications fell from 30% to 12%. These effects on the economy of science pose a danger for the advancement of any discipline: a risk-averse population is likely to play it safe by keeping research proposals within conventional parameters, pursuing questions that are relatively easy to answer and avoiding the more ambitious questions where failure is more likely. Budget leaps could be self-defeating if in the long run they result in more research along well-trodden paths and less progress along new avenues of exploration.

Administrative capacity

The struggle for the annual budget increase is more than an instrumental ritual for federal departments and agencies. Budget gains also signal the relative political power of the governmental offices in any given political moment. But are large budget increases good for the health of research agencies?

One way to answer this question is by examining the long-term health of the agency budget itself. Stationary growth occurs when an agency performs no better on average than total discretionary spending. We can test whether leaps forward in the budget help agencies perform better in the long run than they would have by maintaining stationary growth. If a given budget jump places the agency on a different trend line from which it can continue growing at a stationary pace, that would mean a significant gain for the agency. But what happens if the leap exhausts the political capital of an agency and its budget freezes following the leap? This has been the experience of NIH, whose budget has been virtually stagnant in the years since the doubling that occurred from 1998 to 2003.
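One way to operationalize that test, sketched below in Python with hypothetical budget figures (not actual appropriations data), is to fit a stationary-growth trend to the pre-leap years, extrapolate it forward, and ask whether the post-leap budget remains above that counterfactual path.

```python
# A minimal sketch of the stationary-growth test described above.
# All figures are hypothetical, chosen only to mimic the shape of a
# doubling followed by a freeze.
import numpy as np

years = np.arange(1990, 2011)
budget = np.array(
    [10.0 * 1.03**i for i in range(8)]        # 1990-97: steady ~3% growth
    + [13.0, 15.0, 17.3, 19.9, 22.9, 26.4]    # 1998-2003: the budget leap
    + [26.4] * 7                              # 2004-10: post-leap freeze
)

pre = years < 1998
slope, intercept = np.polyfit(years[pre], np.log(budget[pre]), 1)
counterfactual = np.exp(intercept + slope * years)  # stationary-growth path

# A positive gap in the final year means the leap left the agency above
# its old trend; a gap shrinking toward zero means the jolt petered out.
gap = (budget - counterfactual) / counterfactual
print(f"gap in final year: {gap[-1]:+.0%}")
```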

A historical analysis of the budget time series that I conducted with my colleague Ben Clark revealed that federal R&D has advanced more by gradual increments than by budget leaps, both at the total budget level and at the agency level. What is more, the budget jolts that have taken place peter out in time, and agencies return to their long-term stationary growth trend. Just as at NIH, this happened at the National Aeronautics and Space Administration (NASA) at the end of the Apollo program. The decline at NASA was even steeper, and it appears that its current budget is lower than it would have been with stationary growth. Nevertheless, it would be hard to argue in retrospect that the US government should not have undertaken the feat of placing a man on the moon, for the reasons suggested earlier.

If the effects of a budget leap dissipate in time, agencies may still be justified in pursuing them if they can transform the short-term financial gains into long-term sustainability, both technical and political. In effect, a cash injection could be used to restructure and reenergize an agency so that it secures future political favor in the most legitimate way: by acquiring better capabilities to manage new challenges and better serve its mission. The doubling of NIH allowed it to build the infrastructure to sustain research in the new frontier of biomedicine: genetics. Those investments helped NIH in no small measure to prepare for new resource-intensive missions such as precision medicine and the new cancer initiative.

Budget requests for capacity building in federal agencies are the least likely to succeed and ironically are the most likely to yield large social impacts, because strengthening an agency’s technical ability to deliver on its mission also strengthens the hand of its political backers. They are least likely to succeed because they do not have the appeal of exciting technological or scientific breakthroughs.

Likewise, a technological moonshot is likely to have a longer life and consequently greater impact if the technology in question is a platform on which several other technological applications can be built—what economists call a “general purpose technology.” Of course, it is hard to anticipate what the next microchip will be. The sort of diversification that hedges the political bet is attained by seeking a technological class or a cluster of technologies that have a multiplicity of applications and uses. This heterogeneity within a technological project has the additional benefit of attracting the participation of additional agencies that will then share the responsibility for delivering tangible results from the budget leap. A good example of this internal diversification strategy is the National Nanotechnology Initiative. Nano is so many things that it is more accurate to refer to it in the plural, and it is this plurality that enabled multiple federal agencies to incorporate it in their research portfolios without stepping on each other’s toes. Another example of internal diversification is the Obama administration’s Clean Energy Savings for All Americans Initiative; federal funding for research is not its central piece, but it covers a range of research programs and technology development efforts that extend beyond the scope of the Department of Energy.

The question about the health of the bureaucracy may seem cynical when we recall that the true motivation of public investments in research is to deliver public goods, such as achieving a technological feat or significantly expanding our knowledge of nature. But we should reject the apparent cynicism, because these organizations deliver public goods without which life in contemporary societies would hardly be recognizable: a well-functioning bureaucracy is a public good itself. In an ideal world, the health of the public administration is aligned with the fulfillment of its mission; however, that aspiration is not always realized, and the bureaucracy must perform a calculus of subsistence in which its political health and sustainability carry some weight.

Modest promises are better promises

I have argued that assessing the effects of large jumps in R&D funding faces serious hurdles. There is a lack of well-specified outcome measures of the societal impact of publicly funded research. There is also a dearth of well-established causal links tying specific societal outcomes to research. Taking stock of our current knowledge, we find ourselves in possession of no more than an informed intuition that may be enough to mobilize political support for governmental R&D subsidies but is certainly not sufficient to support big bets on R&D projects. What can be derived from our current knowledge is no more than modest prescriptions or rules of thumb. For instance:

From the perspective of the health of scientific research itself, R&D leaps should be justified on measures of the instrumental value of science, such as technological achievements. Not only are the practical uses of science easy for taxpayers and legislators to recognize; they are also a legitimate justification of any burst in funding. Any evaluation of funding must speak to the various ways in which science enters into partnership with technology, not only patentable intellectual property and commercial successes, but the full array of means by which knowledge production meets people’s needs.

It seems clear, then, that we cannot easily extrapolate our justifications for the R&D subsidy to moonshots, and consequently we cannot tell whether large bets on R&D really translate into net social benefits. Until the history of major moonshots provides evidence, rather than intuition, of their success, policy entrepreneurs will be in the awkward position of advocating moonshots as leaps of faith.

Toward a More Diverse Research Community: Models of Success

Away from the spotlight on the recent presidential election and transition, the United States quietly reached a crossroads crucial to the nation’s future, yet hardly discussed by the national candidates. At this crossroads the nation must decide whether or not it will take the path leading to a science and technology talent pool, well developed in both quality and quantity, that draws on people from throughout our population. It is only by tapping all of that talent that the country will succeed in realizing the economic, security, and health goals the American people prize.

The United States has never been close to drawing fully on the nation’s science and technology talent. And unfortunately, that goal is now becoming harder to reach because the fastest-growing groups in our population are also the most underrepresented in science and technology. The only way to achieve that goal is to deliberately choose the path of inclusive excellence in science, technology, engineering, and mathematics (STEM) education in colleges and universities to provide the graduates needed by private, public, and nonprofit employers to sustain the economy and meet national goals.

If the nation’s policymakers and education leaders take the deliberate steps needed to expand the participation and success of underrepresented minorities in STEM based on what we know works, success is possible. Their actions should be guided by evidence of what works. Luckily, published reports—from the National Academies, the White House, and others—have already described the problem of underrepresentation in STEM and offered evidence-based findings and thoughtful recommendations for better utilizing US talent.

When the National Academies released its report in 2011 on expanding the participation and success of underrepresented minorities in STEM, it observed that although the needle had hardly moved on expanding underrepresented minority success in these fields at all levels, the nation had an excellent opportunity to succeed because it was already known what works and what needed to be done. The question was whether the nation had the will to do it.

Unfortunately, the needle has budged only slightly in the meantime. We believe that the nation does not need to invent anything new or innovative to address this critical problem. This can be understood if we look in detail at the evidence of where progress has been made by field and institution. This level of analysis identifies universities that are already succeeding in educating underrepresented minorities in these fields. We should build on and adapt this work.

Dimensions of the problem

The National Academies report, Expanding Underrepresented Minority Participation: America’s Science and Technology Talent at the Crossroads, identified significant underrepresentation of African Americans, Hispanics, and Native Americans in science and engineering (S&E); the situation has changed little in the six years since the report was issued. The percentage of the nation’s S&E workforce (academic and nonacademic) made up of underrepresented minorities increased from 9.1% to 12% between 2006 and 2013. That sounds like significant progress until one learns that the percentage of the nation’s population composed of underrepresented minorities increased from 28.5% to 32.6% during that period. The participation of minorities in the S&E workforce is not keeping pace with the country’s changing demographics.
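A quick back-of-the-envelope calculation with the figures just cited shows why the headline increase is less encouraging than it sounds: the absolute gap between the two shares actually widened.

```python
# Figures from the text: underrepresented minorities as a share of the
# S&E workforce versus the US population, 2006 and 2013.
workforce = {2006: 9.1, 2013: 12.0}    # percent of S&E workforce
population = {2006: 28.5, 2013: 32.6}  # percent of US population

for year in (2006, 2013):
    print(year, f"gap = {population[year] - workforce[year]:.1f} points")
# 2006: gap = 19.4 points; 2013: gap = 20.6 points. The workforce share
# rose, but the representation gap grew.
```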

The report found that “Underrepresentation of this magnitude in the S&E workforce stems from the underproduction of minorities in S&E at every step of postsecondary education, with a progressive loss of representation as we proceed up the academic ladder.” Figure 1 shows how this was true in 2000 and was still true in 2012 (the most recent year for which data are available), despite increases in underrepresented minority participation in postsecondary education and in science and engineering degree awards at all levels. In 2012, underrepresented minorities comprised 34.6% of undergraduates, 18.9% of those earning S&E bachelor’s degrees, 13.7% of those awarded S&E master’s degrees, and just 7.3% of those earning doctorates in these fields.

The National Academies committee was not surprised to discover, given the findings above, that most underrepresented minority students left STEM majors before completing a college degree. According to analysis by the Higher Education Research Institute at the University of California, Los Angeles, just 18.4% of blacks, 22.1% of Latinos, and 18.8% of Native Americans who matriculated at four-year institutions seeking a bachelor’s degree in a STEM field earned one within five years. What is surprising is that most whites and Asian Americans were also not succeeding in STEM. Approximately 33% of white and 42% of Asian American STEM majors completed their bachelor’s degree in STEM within five years of matriculation.

Thus, this is not a problem for minorities only; it’s a national problem. Most well-prepared students of all backgrounds with an interest in STEM fields and careers abandon that goal in the first two years of college. To solve the problem, we in academia just need to look in the mirror.

[Figure 1. Underrepresented minority share of postsecondary enrollment and of S&E degrees awarded at the bachelor’s, master’s, and doctoral levels, 2000 and 2012.]

Recent developments

Expanding Underrepresented Minority Participation found that there existed “a cadre of qualified underrepresented minorities who already attend college, declared an interest in majoring in the natural sciences or engineering, and either did not complete a degree or switched out of STEM before graduating.” It recommended comprehensive support for underrepresented minority undergraduates in these fields—just as we also support K-12 academic preparation—and that financial support for these students “be provided through higher education institutions along with programs that simultaneously integrate academic, social, and professional development.”

The report also recommended that we—our institutions, programs, and faculty—redesign introductory courses in STEM to support the success of students rather than weed them out, and that we build community among our students to foster learning and persistence. Course redesign can take many forms, including problem-focused learning, group- or team-based learning, tutoring, peer support, and flipped classes in which the typical lecture and homework elements of a course are reversed. Building community among students is also central to success. It facilitates learning through group work in which peers can learn from one another, and it provides the social integration and cohesion—the sense of belonging—that promotes persistence and completion.

The report’s finding and recommendation regarding course redesign were strongly echoed a year later, in 2012, when the President’s Council of Advisors on Science and Technology (PCAST) released Engage to Excel: Producing One Million Additional College Graduates with Degrees in Science, Technology, Engineering, and Mathematics, which argued “for improving STEM education during the first two years of college … a crucial stage in the STEM education pathway.” The report insisted that institutions and their faculty redesign the introductory courses in the sciences and mathematics that are required for majors in those fields as well as in engineering and medicine. And many instructors are now pioneering new approaches.

We are delighted by efforts at many universities to redesign courses. The Summer Institutes on Scientific Teaching, which emerged out of another National Academies report, Bio2010: Transforming Undergraduate Education for Future Research Biologists, released in 2003, have been working on this issue for more than a decade. Bio2010 recommended that universities provide faculty with opportunities to refine classroom techniques and integrate new pedagogical approaches into their courses. The problem-focused, team-based, active learning the institutes promote engages students in ways that help them internalize scientific knowledge, understand scientific processes, and identify with the scientific profession.

Our faculty members at the University of Maryland, Baltimore County (UMBC), are illustrative of what professors are doing at the most forward-looking institutions. Anne Spence, a professor of practice in mechanical engineering, uses a flipped classroom model in her engineering mathematics course. Students watch two short video lectures (each 8-10 minutes long) before class, and Spence then uses class time for problem solving. She has reported that although some students prefer a more “passive” approach to learning, nine out of 10 eventually get on board with the problem-centered approach that requires them to take responsibility for their own learning. Taryn Bayles, a professor of practice in chemical engineering, challenges the junior majors in her class to teach fundamental concepts in heat and mass transfer to high school students. They learn that they must do more than just complete problems for homework; they have to explain concepts—taking learning to another level. Biology professor Jeff Leips attended the National Academies’ Summer Institutes in 2004 and subsequently redesigned his biology courses around team-based learning. Bill LaCourse, now dean of natural sciences and mathematics, also used team-based, problem-focused active learning to redesign introductory chemistry, building a dedicated active learning space, the Chemistry Discovery Center, to facilitate this approach. It has decreased course-failure rates, enhanced learning in chemistry, and boosted the number of majors in the department.

Transforming Postsecondary Education in Mathematics, a national organization created to implement recommendations from the PCAST report, is leading an effort to change teaching and learning through new courses and pathways for success in undergraduate mathematics. This effort is focused particularly on improving the success of students in remedial mathematics classes, which are often a barrier to undergraduate success and completion, both in general and in STEM fields.

In addition, federal agencies have recently initiated efforts to address underrepresentation. The National Institutes of Health (NIH), through its Building Infrastructure Leading to Diversity (BUILD) initiative, begun in 2014, seeks to encourage “undergraduate institutions to implement and study innovative approaches to engaging and retaining students from diverse backgrounds in biomedical research, potentially helping them on the pathway to become future contributors to the NIH-funded research enterprise.” The National Science Foundation (NSF), through its Inclusion Across the Nation of Communities of Learners of Underrepresented Discoverers in Science and Engineering (INCLUDES) program, begun in 2016, seeks to “enhance US leadership in science and engineering discovery and innovation by seeking and effectively developing science, technology, engineering, and mathematics (STEM) talent from all sectors and groups in our society. By facilitating partnerships, communication and cooperation, NSF aims to build on and scale up what works in broadening participation programs to reach underserved populations nationwide.”

[Table 1. Top 20 baccalaureate-origin institutions of African Americans who went on to earn PhDs in the natural sciences and engineering.]

BUILD and INCLUDES add to existing NIH and NSF programs designed specifically to expand underrepresented minority success in STEM broadly or biomedical sciences more specifically. NSF programs include the Louis Stokes Alliances for Minority Participation and Alliances for Graduate Education and the Professoriate. NIH programs include Bridges to the Baccalaureate, Bridges to the Doctorate, Maximizing Access to Research Careers Undergraduate Student Training in Academic Research, Research Supplements for Diversity, and the Research Initiative for Scientific Enhancement. The Department of Education’s Math and Science Partnerships program does not exclusively focus on increasing diversity, but grant recipients can include this as a goal. These and other federal agencies also have programs that target historically black colleges and universities (HBCUs), Hispanic-serving institutions, and tribal colleges and universities.

Models worth imitating

A number of universities are blazing the trail by demonstrating which innovations produce success and deserve support from federal agencies, corporate foundations, and private philanthropies. An analysis of the colleges and universities that educate undergraduate African Americans who go on to earn PhDs in the natural sciences and engineering reveals a range of institutions that can support underrepresented minority success in STEM. As shown in Table 1, 13 of the top 20 baccalaureate-origin institutions are HBCUs, and seven are predominantly white institutions (PWIs).

Although these institutions deserve credit for showing that progress is possible, vast room for improvement remains. For example, the PWIs most active in research are graduating each year only four to six African Americans who eventually earn a PhD in the natural sciences or engineering. With a concerted effort along the lines described below, these universities could double or triple the numbers.

The critical role of HBCUs deserves special attention. According to the National Center for Education Statistics, there are 100 HBCUs in 19 states, the District of Columbia, and the Virgin Islands. They represent just 3% of postsecondary institutions, but they enroll 8% of black undergraduates. They award 15% of the bachelor’s degrees earned by blacks, 19% of the science and engineering bachelor’s degrees earned by blacks, and 35% of the bachelor’s degrees earned by blacks who go on to earn PhDs in STEM.

Nevertheless, the overwhelming majority of black undergraduates (92%) are enrolled in institutions that are not HBCUs, and we must not ignore those other institutions, which can make a significant difference in educating blacks in the natural sciences and engineering if they focus attention on the task. Indeed, Table 2 displays the percentage of an institution’s black bachelor’s degree recipients who go on to earn PhDs in these fields. Here, PWIs top the list, though the numbers are still relatively small. The Massachusetts Institute of Technology is first, with 8.1% of its black undergraduates eventually earning a PhD in the natural sciences or engineering. The yield is impressive, though with just 50 black alumni over 10 years earning PhDs in these fields, it amounts to just five per year. UMBC is second, with 4.4% of its African American graduates going on to earn PhDs in the natural sciences and engineering—about nine graduates per year, a relatively good number but one that can be increased.

In addition, PWIs award 88% of the doctorates earned by blacks: 26% of black PhD recipients earn a bachelor’s degree at an HBCU and then a doctorate at a PWI, and 62% earn both the bachelor’s degree and the PhD at a PWI. There is some variation by field in the percentage of PhDs awarded to blacks by PWIs: these institutions award 96% of PhDs in mathematics and computer science, 91% in engineering, 87% in the physical sciences, 86% in the biological and biomedical sciences, and 73% in agriculture.

Given that most African American students are enrolled in PWIs, replicating the success of the PWIs that have done well in educating African Americans in STEM would be a logical place to focus investment. UMBC is one of those institutions.

Proof of concept: UMBC

Thirty years ago, African American students were failing in science at the University of Maryland, Baltimore County. As we looked for ways to improve student success, we were fortunate that Baltimore philanthropist Robert Meyerhoff had a special concern about the plight of black males and took an interest in our work.

With support from the Meyerhoff Foundation, UMBC launched the Meyerhoff Scholars Program in 1989. Based on a holistic approach to educating black men, the program provides academic, social, and financial support to ensure that these students succeed in college and continue to doctoral programs. The program began with 19 black male college freshmen in its first year. The program has flourished and is now open to male and female students of all backgrounds. During the 2015-16 academic year, the program’s 270 students were 57% African American, 15% Caucasian, 15% Asian, 12% Hispanic, and 1% Native American.

The key elements of the Meyerhoff program focus on values, financial support, academic success, social integration, and professional development. The core values of the program are high expectations for all students and the aspiration of every student to a research career. Financial support gives students the opportunity to focus exclusively on their academic work; research has shown that students who work part-time (especially off campus) do not perform as well. A summer bridge program, study groups, peer mentoring, and faculty ownership of the program and of student success critically support learning, persistence, completion, and acceptance into doctoral programs. Meyerhoff scholars reside in living-learning communities for their first two years, they engage in community service, and their family members are invited to campus events. The strategy is to deliberately form a sense of belonging and community that nurtures the students. Research has shown that individuals who do not persist are less likely to have a sense of belonging, and so the program actively promotes bonding among students in the Meyerhoff cohort, as well as with the larger “Meyerhoff family” and the UMBC community. The program also requires students to work in a research laboratory because experiential learning reinforces both knowledge of the science and identification as a scientist. Faculty advising supports the students as they plan their courses and their careers.

[Table 2. Percentage of each institution’s black bachelor’s degree recipients who went on to earn PhDs in the natural sciences and engineering.]

The importance of including rigorous evaluation in any program or initiative cannot be overstated. Far from being an add-on or afterthought, evaluation is necessary for both continual improvement and proof of concept. Data on program success are useful to program leadership in fundraising and to policymakers in identifying best practices to be emulated.

Our goal has been for Meyerhoff students to engage in research careers in the sciences, engineering, and medicine. There are now more than one thousand program alumni, 70% of whom are African American. Of this cohort, 350 are currently enrolled in graduate or professional schools, including 42 in MD and 41 in MD-PhD programs. Our graduates have earned 236 PhDs (including 45 MD-PhDs), 154 MDs, and 14 other professional degrees in health care. In addition, 271 have earned master’s degrees, mainly in engineering and computer science. Through our evaluation program, we also know that Meyerhoff alumni are five times more likely to graduate from or be a student in a STEM PhD or MD-PhD program than students who were accepted to the Meyerhoff program but chose to attend a different institution.

Some critics have argued that although the Meyerhoff program is successful at UMBC, its success could not be replicated elsewhere. They maintain that UMBC is a unique place with a president who is African American and a program champion in a way that cannot be copied. They also claim the program is expensive.

These criticisms are now being put to the test. Based on the notion that we should learn from institutions that have been successful, the Howard Hughes Medical Institute has invested about $8 million over five years, beginning in 2014, to adapt UMBC’s Meyerhoff Scholars Program through the Millennium Scholars Program at Pennsylvania State University and the Chancellor’s Scholars Program at the University of North Carolina at Chapel Hill.

These institutions’ leaders, neither of whom is a person of color, are deeply supportive of their respective programs and have invested their own institutional resources in their development. As at UMBC, success will also depend on the buy-in and deep involvement of faculty and staff beyond institutional leadership. As with the original Meyerhoff program at UMBC, this adaptation effort includes a rigorous evaluation program that serves to inform program development and validate proof of concept. So far, results look positive with respect to student academic success (as measured by grade point averages) and student retention in STEM majors.

[Table 3. Undergraduate institutions of African Americans who went on to earn MD or MD-PhD degrees (Association of American Medical Colleges data).]

Closer to home, we have learned that a campus that helps underrepresented minority students is also one that helps students in general. Now that we have shown success with high-achieving African American students, we are using the lessons learned to support the broader undergraduate population. Through a major grant from the National Institutes of Health BUILD initiative, we are now working to extend the successes of Meyerhoff to the broader undergraduate population in the life sciences at UMBC.

If we put our effort into it, we can expand the Meyerhoff program to new fields, and other institutions could also be successful with similarly deliberate and focused efforts. We have already accomplished this in the physical sciences. At the time Expanding Underrepresented Minority Participation was published in 2011, UMBC was not in the top 10 among institutions in the number of African American undergraduates who went on to complete PhDs in the physical sciences. Since then, through a close partnership with the National Security Agency, we have been able to support more students in the physical sciences, especially mathematics, and our increased success with African American students in these fields has recently moved our institution into the top 10. We are now targeting additional fields, including the social sciences and medicine, where there is similar room for improvement. For example, we are outlining a plan for a Meyerhoff-like program to support the success of African Americans in economics.

The field of medicine is instructive. According to data from the Association of American Medical Colleges (see Table 3), UMBC ranks first in the nation in producing African American undergraduates who go on to earn the MD-PhD. However, when it comes to producing African Americans who go on to earn the MD alone, UMBC currently ranks just 25th among undergraduate institutions. In our efforts to produce more PhDs, it seems that we have actually discouraged students from going to medical school. Still, about 10 African American graduates of UMBC earn the MD or MD-PhD each year; we would need to produce just five more per year to be in the top 10. The fact is that with concerted effort and a deliberate approach we could double or triple these numbers—and that is the key point.

As a nation, we have shown that we can be successful if we focus on the work of increasing the participation and success of underrepresented groups in STEM and ensure that it is a priority at our institutions. For example, we have succeeded in bringing greater numbers of women into the life sciences. But we have not been successful at keeping women in significant numbers in computer science, where women’s share of undergraduates has declined nationwide. We had focus in the first instance; we took our eyes off the target in the second. And the results show it.

As a society, we face a particular challenge in the shortage of underrepresented minorities in STEM, especially of those who have earned PhDs and continue on to faculty careers in research. To address this, we must identify and learn from the institutions that have succeeded, through focused effort, in educating African American undergraduates in the natural sciences and engineering.

We must also think critically about how to support these students when they complete their doctorates. Many arrive at that point in their careers without substantial guidance about next steps or a network to enable those steps. We need to develop a broad cadre of current faculty and researchers—of all racial and ethnic backgrounds—who will serve as mentors and guides for these students, who are assets critical to our nation’s future.

Freeman A. Hrabowski III is president of the University of Maryland, Baltimore County, and he chaired the President’s Commission on Educational Excellence for African Americans under the Obama administration. Peter H. Henderson is senior advisor to the president at UMBC. They served as study committee chair and study director, respectively, for the National Academies report Expanding Underrepresented Minority Participation: America’s Science and Technology Talent at the Crossroads.

Recommended reading

Bayer Corporation, STEM Education, Science Literacy, and the Innovation Workforce in America: 2012 Analysis and Insights from the Bayer Facts of Science Education Surveys (Pittsburgh, PA: Bayer Corporation, 2012).

National Academies of Sciences, Engineering, and Medicine, Expanding Underrepresented Minority Participation: America’s Science and Technology Talent at the Crossroads (Washington, DC: National Academies Press, 2011).

President’s Council of Advisors on Science and Technology, Engage to Excel: Producing One Million Additional College Graduates with Degrees in Science, Technology, Engineering, and Mathematics (Washington, DC: Executive Office of the President, February 2012).

It’s Not a War on Science

Know your enemy, Sun Tzu reminds us in The Art of War. Science is in a war, but not the one many think. To avoid costly mistakes, scientists and those who support them need to know and understand the forces in the field. Those forces are not engaged in an attack on science—or the truth.

To be sure, it often seems as if they are. Researchers now expect federal science budgets to be cut significantly by Congress. Government scientists have seen their ability to speak in public curtailed. The president has issued executive orders rescinding recently promulgated climate change and water quality rules supported by environmental scientists. Activists nationwide are busily downloading government data out of fear that government officials will remove access to it, as they already have in a small number of cases. Surely, this is all the evidence one needs to conclude that there is a widespread attack on science under way in the United States—especially given the attackers’ admitted willingness to embrace #alternativefacts.

Counting the number of times that President Trump has lied may be good politics. So may labeling him antiscience. President Obama ran a powerful 2008 campaign on the slogan of restoring science to its rightful place in US society. His branding of George W. Bush as antiscience played on Bush’s history of fumbling decisions about research in areas such as climate change and embryonic stem cells. Obama’s tactics worked because, in 2008, a significant majority of the public believed in the virtue and value of scientific research. Since most still do, similar tactics may work well again today.

But Sun Tzu’s axiom to know your enemy is a warning not to confuse political strategy with winning a war. Winning requires true understanding of your opponents, their resources and capabilities, and especially their motives and objectives.

What appears to be a war on science by the current Congress and president is, in fact, no such thing. Fundamentally, it is a war on government. To be more specific, it is a war on a form of government with which science has become deeply aligned and allied over the past century. Among the disparate wings of the conservative movement that believe US strength lies in its economic freedoms, its individual liberties, and its business enterprises, one truth binds them all: the federal government has become far too powerful.

Science is, for today’s conservatives, an instrument of federal power. They attack science’s forms of truth-making, its databases, and its budgets not out of a rejection of either science or truth, but as part of a coherent strategy to weaken the power of the federal agencies that rely on them. Put simply, they war on science to sap the legitimacy of the federal government. Mistaking this for a war on science could lead to bad tactics, bad strategy, and potentially disastrous outcomes for both science and democracy.

Conservative opposition to science-based government is rooted in American history. For most of the first 100 years of the United States, everything was small except US territory. Government was small, and so were businesses. Indeed, all organizations were small. The US state was tiny measured against the budget and employment of today’s federal government. Until the Civil War, for example, the federal government imposed minimal taxes and had almost no budget. The US Army was modest because we had few enemies.

Then came the railroads. Building the railroads required extensive capital and resources, and above all their construction and management required massive organization. In a few decades, the businesses that made steel and railroad cars, that laid rail lines and operated railroads, that provided the wood and coal to run them, and that financed all of this activity became gigantic, nation-spanning enterprises. Oil came next, fueling an enormous industrial boom. By the turn of the twentieth century, names such as Cornelius Vanderbilt, Andrew Carnegie, J. P. Morgan, Andrew Mellon, and John D. Rockefeller were well known in most households. To run their business empires, these industrialists built organizations that dwarfed anything ever seen before on the planet, inventing modern notions of economies of scale, organizational management, and business administration. It is not an accident that the first business schools were established at the University of Pennsylvania in 1881 and Dartmouth College in 1900 to train a new cadre of professionals in managing modern business behemoths.

To their critics, the nineteenth-century industrialists were the “robber barons” of a new age, profiteering to the detriment of many and creating profound and destructive new forms of inequality in a nation committed to its opposite. In the early 1900s, this inequality found its voice in two populist movements: the Progressives and the Conservationists. The Progressives sought to end what they perceived to be the economic monopolies created by the industrialists. The Conservationists sought to reduce what they perceived to be the waste and inefficiencies of corrupt natural resource exploitation. Both found a staunch ally in the Republican Theodore Roosevelt, who took over after President William McKinley’s assassination in 1901 and whose political actions as president from 1901 to 1909 helped to remake US government into a radically new form.

Roosevelt dramatically upgraded the power of the federal government. He created the Bureau of Investigation, forerunner of the Federal Bureau of Investigation, to establish a professional national police force that would empower the Department of Justice to fight economic and political corruption, as well as the anarchist movement, the terrorists of the late nineteenth and early twentieth centuries. He signed the Pure Food and Drug Act to regulate the use of chemicals in food and medicine, paving the way for the later creation of the Food and Drug Administration. He hired Gifford Pinchot, the famous Yale forester, to reorganize and upgrade the power of the US Forest Service to regulate the lumber industry in federal forests. He established the Reclamation Service and later upgraded it to the Bureau of Reclamation to manage the nation’s rivers. He reorganized federal land laws, significantly altering the ways that western lands were managed for settlement, grazing, mineral rights, and other uses. Roosevelt also radically upgraded the federal government’s knowledge agencies. He established the first permanent Bureau of the Census, in 1902. He established the Department of Commerce and Labor in 1903, giving it control over the Bureau of Statistics, to ensure that the federal government had the social and economic knowledge it needed to pursue its policy goals.

In all of this, Roosevelt drew heavily on two key concepts of governance. The first was the greatest good for the greatest number of people, and the second was the power of the expert to sort out just how to achieve that goal. Conservation meant ending the waste of scarce economic resources, such as water, wood, and land, to serve narrow private interests. Roosevelt sought to put resource extraction and use in the service of the nation by ending monopolies, expanding production, and significantly reducing prices, so as to boost national economic growth. Eliminating waste also meant ensuring that the government had the power to set and enforce standardized weights and measures, a charge given to the National Bureau of Standards, established in 1901. To make all of these ideas work, his new agencies hired significant numbers of experts and lawyers to rewrite policies and put them on a sound legal and scientific foundation. Along the way, he significantly upgraded the Bureaus of Chemistry, Soils, Entomology, Fisheries, and Biological Survey.

When his carefully groomed successor, William Howard Taft, began undoing his policies, Roosevelt ran against him in 1912, splitting the Republican vote and returning a Democrat, Woodrow Wilson, to the White House. Wilson completed Roosevelt’s Progressive legacy, overseeing the establishment in 1913 of the Federal Reserve Board to regulate the national economy, the ratification of the Sixteenth Amendment allowing the creation of a national income tax, and the passage of the Clayton Antitrust Act in 1914. New powers to collect data were granted to diverse federal statistical agencies to provide key knowledge for the Federal Reserve Board, creating the impetus for the development over the subsequent three decades of measures of industrial activity, unemployment, and the national income and product accounts.

Over the course of the twentieth century, the prominence of experts in legitimizing federal government power has persisted and deepened. At the end of World War II, Vannevar Bush’s Science, the Endless Frontier helped to justify the shift of scientific research from private to public financing, at the same time accomplishing what was likely the largest overall increase in scientific funding in human history. The wartime successes of the Manhattan Project, radar, proximity fuses, and operations research created a new appreciation for the power of science to deliver valuable tools for national defense and, incidentally, for the growth of the defense industries. Similarly, the wartime development of penicillin gave rise, with a little help from Congress in the form of new intellectual property rules, to the modern, scientific pharmaceutical industry. With its new funds and newfound economic relevance, scientific research quickly shifted from a minor university backwater to a key driver of an enormous upgrading of higher education institutions.

The war also brought scientists into government service in much larger numbers than ever before. Rising conflict with the Soviet Union further exacerbated this trend. In 1957, when the Soviets launched Sputnik months before the United States was prepared to launch its own first satellite, the nation responded with the National Defense Education Act, which funneled massive new funds into scientific and engineering education to train a new generation of scientists who could help keep the country ahead of its rivals. President Dwight Eisenhower also established the office of the president’s science advisor, endorsing the notion that independent scientists would “speak truth to power” and advise the federal government on the proper policies to pursue to safeguard the nation.

The idea that science advice could improve government grabbed the imagination of postwar policymakers. Within 15 years, the federal government had created so many new scientific advisory committees that Congress felt compelled to regulate them in the 1972 Federal Advisory Committee Act. Indeed, so powerful was this new policy apparatus that new regulatory agencies created in the 1970s were given the authority to act only on the say-so of science and saddled with complicated new scientific advisory bodies that oversaw their core activities. By 2000, the federal government had literally thousands of scientific advisory bodies working in virtually every policy arena.

Of the new regulatory agencies, the most prominent may well be the Environmental Protection Agency (EPA). Established in 1970 by President Richard Nixon in tandem with the National Environmental Policy Act (NEPA), EPA was required by its authorizing act to publish in the Federal Register the scientific justification for any new regulatory rule. At the same time, EPA was required to establish a Science Advisory Board to oversee its research and the application of that research in rule-making processes. Together, these two rules turned EPA into a magnet for scientific controversy. The fact that NEPA allowed EPA rules to be enforced or challenged by lawsuits meant that those controversies quickly spilled over into the courts. For over a decade, the Supreme Court supported a “hard look” doctrine that encouraged courts to further scrutinize agency science. The courts also insisted that other kinds of government acts, such as takings, be justified by science. Meanwhile, Congress continued to act, and each new major environmental law—the Clean Air Act, the Clean Water Act, the Toxic Substances Control Act, among others—more closely aligned EPA’s fate with science.

Given this history, it should hardly surprise us that the major environmental controversy of the past quarter-century has largely played out as a battle over science. Climate change is a phenomenon knowable only through science. Even prodigiously warm February months are but a statistical anomaly absent a scientific model of the Earth’s climate system through which to interpret them as signals of globally shifting weather patterns. Scientists put climate change squarely on the global diplomatic agenda in the late 1980s, arguing strenuously for policy attention to an issue that they had first highlighted for politicians in the 1950s and 1960s and calling for deep, planetwide regulation of several of the world’s oldest, richest, most powerful, and most important energy industries. In doing so, they drew heavily on the history of science advice to government and the organization of powerful government agencies to regulate the economy built up over a century of transformation of US government.

But even in 1990, as the United Nations was launching its first climate negotiations that would lead to the UN Framework Convention on Climate Change, the tides of US politics were already shifting. Coming into office a decade earlier, Ronald Reagan had gutted EPA, signaling the strength of a new conservative movement founded on concerns about the perceived excessive size and power of the federal government. Fueled by money from the carbon industries, conservatives rallied against federal regulation designed to slow climate change and what they perceived to be an even more insidious threat: the organization of a new and powerful form of political globalization that put the power of the US state in the service of a planetary ideology. Climate change became a cause for conservatives to fight at all costs. Meanwhile, scientists continued to become more convinced of its long-term catastrophic potential. The spiraling politics put the two groups ever more squarely on war footing.

Republican efforts today to dismantle Obamacare, the Environmental Protection Agency, and federal climate regulation are one and the same. As Steve Bannon, President Trump’s chief strategist, acknowledged at the 2017 Conservative Political Action Conference, the overarching goal is the “deconstruction of the administrative state.” In the early 1900s, numerous groups, from business lobbies to rural landowners to proponents of states’ rights, opposed Roosevelt’s and Wilson’s upgrading of the power of the federal government. They watched in horror as Franklin Roosevelt built the New Deal State out of the ashes of the Great Depression, as the federal government created Social Security and Medicare, as Lyndon Johnson passed his Great Society legislation, as Congress created new forms of social and environmental regulation in the 1970s, and as Democrats passed the Affordable Care Act in the depths of the Great Recession. Each new expansion of the federal government grew the terrain over which federal experts administer rules and regulations governing markets and private life.

It is not an accident that “experts” have become the enemy of those who feel left behind in the United States and Europe. The twentieth century’s most powerful forms of government have been built on the backs of experts. When that trend began, experts provided a powerful service for democratic publics, helping to create new government agencies that could balance the power of the massive new business organizations created by industrialization. Science and expertise created the appearance of taking issues out of the realms of politics and onto more neutral terrain. The recognition that this was largely illusion—and that politics remained central to the exercise of science-based government—took a while to register. Today is a different world. Authorized and powered by science, data, and expertise, the US federal government is now arguably the most powerful institution on the planet. Many on the left joined the right in feeling deeply uncomfortable with the massive new surveillance powers of the National Security Agency, authorized by Congress after the 2001 terrorist attacks and amplified by advances in technology.

Writing in National Review after Bannon’s speech to conservatives, Jonah Goldberg observed, “The CIA is not the ‘deep state’—the FDA, OSHA, FCC, EPA, and countless other agencies are.” It’s a telling remark. Writing a decade ago from the other end of the political spectrum, I made the same observation. Scientific expertise provides legitimacy to governments to apply a strong hand in regulating our increasingly techno-scientific world. The only question is: How comfortable are we with that fact?

There is no war on science. For scientists, climate change has become a litmus test for belief in science. Responses to the skepticism expressed by the new administrator of EPA, Scott Pruitt, about climate change have called him ignorant, antiscience, and in the pocket of the oil sector. But for conservatives, the refusal to acknowledge climate change is a direct response to the success of scientists over the course of the twentieth century in putting science to work justifying the exercise of federal power. Under this model, acknowledging climate science would mean another significant upgrade in the power of government to regulate the economy not just in the United States but across the planet. For conservatives, the enemy is not science itself but the further expansion of powerful, centralized, science-informed government. For them it’s as much a crisis moment as it is for climate scientists: win now or lose the war for another century.

There’s only one catch. Conservatives have fixed on the power of science-based government as the problem. But for the past century, businesses have also tied their fortunes to science, creating massive techno-economic powerhouses and techno-human complexes that straddle the planet. Today’s science-based industries are no more intuitively allies of freedom and equality than their government counterparts. The logic of capital has become so tightly interwoven with technology that today’s businesses cannot openly acknowledge that their transformative agendas pose serious ethical, moral, or political risks. Tesla cannot admit the possibility that a rapid shift to driverless vehicles may not be a good idea, any more than Google, Facebook, Intel, or Cisco can admit that the Internet has opened up individuals and countries to massive challenges of cybersecurity, surveillance, manipulation, and corruption, or than Exxon can admit that every day its activities are slowly, inexorably pushing the climate system over the brink.

Science is not some magic force for progress and democracy. It is a powerful agent of global social and environmental change. Our choices are stark and not entirely happy. We can continue to place the full burden of supporting social values on government, further centralizing power to regulate technology, industry, and society. Alternatively, we can reject the claims that modern technological enterprises are “too big to fail” and seek to dismantle them.

There is one other path. Much as we have sought over the past two decades to put sustainability at the heart of technology, business, and policy innovation, now is the time to do the same for social responsibility, and to redouble our efforts in support of both objectives. Science, business, and government have together made the modern world what it is. All three must step up to ensure that future societies are worth inhabiting—and they must do so in concert with global publics. None of the three can any longer pretend that they stand outside politics. Democracy depends on it. So does the future our children will inherit.

Seventeen Months on the Chemical Safety Board

On August 6, 2012, a pipe in the Chevron refinery in Richmond, CA, ruptured and leaked flammable fluid. The fluid partially vaporized into a cloud that engulfed 19 employees and then ignited. Miraculously, the workers narrowly escaped and were not seriously burned. The large plume of particulates and vapor traveled across Richmond, sending more than 15,000 people to the hospital with respiratory and eye irritation. Dozens of citizens also developed hearing problems as a result of the loud explosion.

Such an incident triggers a number of investigations. The Occupational Safety and Health Administration (OSHA) and the US Environmental Protection Agency (EPA) investigate to see if there are legal violations. The company itself, Chevron in this case, investigates. If there is a union, it will investigate or participate in the company’s and OSHA’s investigations. The US Chemical Safety and Hazard Investigation Board (CSB) can also investigate, if the CSB decides that the accident represents a significant threat to public well-being, or that it offers valuable lessons for improving the safety of the chemical industry.

The CSB is a small federal agency that investigates incidents—mainly explosions and leaks—in the petrochemical industry. It has a budget of $11 million and about 50 employees. Based on the model of the National Transportation Safety Board, it has no regulatory authority; it makes recommendations, based on root cause analysis, to the party or parties best positioned to prevent similar accidents. Recommendations are usually made to companies, federal agencies such as EPA and OSHA, municipalities, local public health and environmental agencies, unions such as the United Steelworkers of America, or trade associations such as the American Chemistry Council. The CSB was created as part of the 1990 Clean Air Act amendments.

An invitation to serve

In 2011, the chair of the CSB, a former professor of mine, asked me to apply for a job as a board member of this small agency. It seemed like a perfect fit for me. On family vacations when I was a child, we would occasionally tour factories. While my family was absorbed in the smell of melted chocolate at the Hershey factory, or the glow of a red-hot slab of copper from which pennies would be stamped at the US Mint, I was noticing the workers. By the time I was 10, I wanted to do something to address the injustice that some people had jobs that were soul-witheringly boring, dangerous, painful, or even fatal, and I eventually made it my life’s work.

With a doctorate in work environment policy and 30 years in the field of worker health, I was nominated to be a board member of the CSB by President Obama on September 20, 2011. The lengthy vetting process involved written recommendations from colleagues, as well as Federal Bureau of Investigation interviews with me, numerous friends, coworkers, and neighbors about my spending habits, drug and alcohol use, travel history, and character. I was finally confirmed by the Senate on January 1, 2013.

Board members serve for five years, and there are five slots, only three of which were filled during my tenure. We presidential appointees were supposed to direct policy and the priorities of the agency. We were to decide what events were to be investigated, and to work with CSB’s professional staff investigators to determine the focus of the investigations. The agency’s dozen investigators have a range of backgrounds in chemistry, public health, industrial engineering, chemical plant safety, human factors, refining operations, and law.

At the end of an investigation, the investigators provide a report of their findings to the board’s recommendations staff, which writes policy recommendations to the pertinent parties, based on the evidence revealed by the investigation. A public hearing is then held at which the investigators present the findings and recommendations to the affected community, and board members vote to approve or reject the report of the investigation along with the recommendations. During my tenure, senior leadership of the CSB consisted of a general counsel and a managing director. The managing director oversaw the investigations, but was also responsible for the board’s public relations, which included helping to produce training videos that are very popular with both labor and industry.

As a scientist and academic, I was unprepared for the politics of the job. I expected to be pressured by stakeholders outside the agency, from oil company executives to environmental activists, to approve or reject certain recommendations. That didn’t happen at all. Instead, the real problem turned out to be the internal politics of the CSB itself, and the related problem of lack of accountability to anyone outside the board. My time at the CSB raised some important and troubling questions about the role of science and democracy in ensuring occupational, public, and environmental health and safety in the petrochemical industry.

Soon after I arrived at the CSB, my fellow board members and I were presented with a draft report on the Richmond explosion. The draft revealed the agency dysfunction that would come to dominate my tenure as a board member.

An explosive culture

CSB investigators learned that in the 10 years before the explosion, Chevron Richmond management was told at least six times, by its own employees as well as by an outside group of Chevron technical experts, that the pipes that ultimately were the source of the leak should be inspected for corrosion. Sulfur, found in crude oil, reacts with iron-containing compounds, and at high temperatures (450-800 degrees F) the reaction will corrode steel pipes. This sulfidation corrosion had been documented in the industry literature for two decades. Alerts repeatedly went out to all Chevron plants from both the company headquarters and the industry to inspect the pipes. The glaring question that arose from our investigation was why management at the Richmond plant did not inspect and replace the vulnerable pipes.

The petrochemical industry in the United States is, to put it mildly, mature. Ninety-five percent of the 144 refineries in the United States were built before 1985. The average refinery is about 40 years old and some are almost 90. People who work at these plants sadly joke that they are held together with duct tape. Often, equipment is not maintained and instead is “run to failure.” In 2012, the CSB tracked 125 significant process safety incidents at US petroleum refineries.

Financial incentives are lacking to ensure that workplaces, including refineries, are safe and well-maintained. Even for workplace fatalities caused by willful negligence, the fines levied by federal OSHA are low; in 2014, the median penalty in fatality cases was $5,050. Criminal penalties are weak, too; a willful violation resulting in death is prosecuted as a misdemeanor. Killing workers is cheap.

Injuring them is more expensive, because injured workers have medical bills, but companies bear little of the cost. The workers’ compensation system is supposed to pay for medical care and some salary replacement, but the system is deeply flawed, and only a minority of injured workers bother to use it. In the early twentieth century, before workers’ compensation came into being, jury awards to injured workers were wildly unpredictable and sometimes very high. The workers’ compensation system provided employers with predictable expenses in the form of premiums. Workers gave up the right to sue, and lost any compensation for pain and suffering (which was part of the award in pre-compensation days), in exchange for a “no fault system” that would pay for salary replacement and medical costs.

Compensation payments have not kept up with the cost of living over the past 40 years, and recent changes in state compensation systems have made it more difficult for workers to obtain coverage for medical expenses that are rightfully theirs. And now some states are opting out of the formerly required comp system, so employers are no longer even paying premiums. A recent OSHA report noted: “Employers now provide only a small percentage (about 20%) of the overall financial cost of workplace injuries and illnesses through workers’ compensation. This cost-shift has forced injured workers, their families, and taxpayers to subsidize the vast majority of the lost income and medical care costs…” The financial incentives for companies to maintain a safe workplace are minimal.

Nor do companies seem to be very concerned about reputational damage, or the loss of refining capacity that occurs when a plant is out of service due to an accident. My conversations with insurance specialists revealed that most companies have business interruption insurance or are self-insured. Not only does the insurance adequately protect against such losses, it also covers the cost of replacing the equipment—another reason to run to failure. Moreover, the company as a whole may benefit after an accident, if the lost refining capacity leads to increased gas prices, as it did on the West Coast after the Richmond explosion. Overall, the incentives line up against repairing or replacing aging equipment.

The human element is important for plant safety as well. In particular, the tenure of plant managers varies by company and may influence maintenance practices. Some companies move the managers every two or three years, while others move them every five years or more. In my discussions with environmentalists and industry people alike, I was told that plant managers are incentivized to not spend money on maintenance, and the managers who know they will soon be moving on are therefore not inclined to do repairs, gambling that the accident will happen on the next guy’s watch. If a manager is there for seven years, he is more likely to do the repairs because he doesn’t want an accident to happen while he is at the helm. There is no evil intent here—it’s a matter of perverse incentives. A research study on the effect of the tenure of plant managers on maintenance practices could be very enlightening.

This is the social context of the industry that the CSB advises and in which the Chevron incident occurred. Because so many people were affected by the explosion, the CSB held a public hearing in Richmond, CA, in April 2013 to present the board’s initial findings to the community. CSB investigators presented excellent information about the ignored warnings of corroding pipes. All three board members agreed that the incident could be explained in terms of poor management decision making. What was going on in Chevron’s senior management that allowed it to ignore the literature and the warnings of the company’s own employees? Congressman George Miller (D-CA), in remarks at this meeting in his district, echoed our questions about the organizational culture of Chevron. Some board members also raised questions about the role of California’s state-level OSHA, whose severe personnel and resource shortages may have limited its ability to sufficiently inspect plants in the state. We also wondered how and why Contra Costa County’s much-lauded Industrial Safety Ordinance, which requires chemical plants and refineries to submit a risk management plan to the US EPA and the County Health Service, missed this problem.

Board members are supposed to have some influence on the focus of investigations, so we three board members expected some investigation of management decision making and organizational culture at Chevron. In April 2013, the CSB staff presented the first Chevron Interim Investigation Report. Rather than assessing why known corroding pipes were not inspected and, if needed, replaced, the report provided a thorough technical analysis of sulfidation corrosion and why the pipe broke. Our concerns as board members about organizational management issues were simply ignored in the investigation.

The “safety case”

The way you define the problem shapes how you solve it. If you define the problem as sulfidation corrosion, then you end up recommending adding chrome to the pipes to make them more corrosion resistant. If you define it as a Chevron management problem, it is messier, more controversial, and requires more thought about how you’re going to fix it. CSB spent many thousands of dollars on technical experts for all sorts of investigations, but I could not build any momentum toward hiring an organizational management consultant to help with the Chevron investigation. To me it seemed clear that protecting the industry, workers, and the public alike demanded an investigation into management as well as technical failures. And indeed, some Chevron managers who were not from the Richmond plant later told me they believed that Chevron management had “dropped the ball.”

But as a new member of the board, I failed to appreciate that the board chair had abdicated power over the investigations to the managing director, who tightly controlled the staff. Although I repeatedly brought up the need for an organizational aspect to the investigation, the staff, directed by the managing director, was trained to ignore the will of the board. Indeed, that was the pattern of my tenure at CSB.

The second CSB report on the Chevron accident came out in January 2014. It again essentially ignored the issue of management failure, although it did recommend a major change in regulatory regime: the adoption of what is known as the “safety case” regime in California refineries. The safety case is a regulatory regime in which the company presents information (a case) to an informed regulator, affirming that the company can operate safely. The government then gives the company permission to operate if it deems that the company has sufficient safeguards in place. This approach is used in the United Kingdom, Norway, and Australia.

Despite the fact that CSB has no regulatory power, its authority as a federal agency with a presidentially appointed board meant that this recommendation for regulatory regime change for California refineries would carry significant weight. And given the deficiencies of the current regulatory regime, I understood the impetus to recommend a new approach. Nonetheless, I was reluctant, as was one of the other two board members, to support this recommendation. The recommendations staff was also uncomfortable with this approach because it did not flow from the evidence; instead, they said, it was being foisted on the staff by a senior staffer in the agency.

My reservations about the safety case reflected its complete dependence on a well-funded, well-informed regulator to approve or reject the industry’s case. The number of OSHA inspectors in the nation is dismally low: 2,200 inspectors for 130 million workers in 8 million worksites, or one inspector per 59,000 workers. There is no reason to think that this will change radically anytime soon. Labor is relied on to provide a check and balance to industry’s proposed case, a strategy that may work in countries where labor, industry, and government all have somewhat equal power, but that is not the case in the United States. I feared that a safety case regime would simply duplicate the same power relations of weak labor, weak government, and strong industry that underlie the current failed regulatory regime. Additionally, the safety case regulatory regime lacks transparency. Safety cases are seen by regulators, but they are not public. Although we may disagree with the levels of air pollutants that a given plant is permitted to emit, the permits issued under the Clean Air and Clean Water acts, as well as other federal environmental statutes, are public documents. By contrast, communities are shut out of the safety case process.

The CSB presented the second regulatory report on the Chevron accident for a vote at the public hearing in Richmond in January 2014. It provided a literature review of the proposed safety case regulatory regime, but it did not include any of the downsides. It was an advocacy piece. Two of the three board members wanted to settle some of the questions about the safety case first, and so instead we voted for a postponement and offered criteria that we wanted met before we would approve the report. The meeting was contentious, and many in the city were understandably desperate for something to rein in the company that dominated their city. We advocated an immediate remedy of increasing resources for California state OSHA inspectors, which eventually did indeed happen.

After the meeting, to answer our questions, we proposed gathering an expert panel. This was dismissed by senior leadership as too complicated, because it would require compliance with the Federal Advisory Committee Act, which mandates that such meetings be open to the public. We proposed a National Academy of Sciences investigation of the safety case, which was dismissed by senior leadership as too expensive. We wanted to hold a conference, to hear all pros and cons of how a safety case regime would work in the United States. That did not happen because neither the chair nor senior staff leadership wanted it. Once more, the will of the majority of the board—two of its three members—was ignored.

Instead, the other board member and I were vilified in an email from the chair to the whole agency for agreeing with industry in opposing the safety case recommendation. Although the Richmond local of the United Steelworkers union did support the safety case, the chair’s email did not bother to mention that the health and safety leadership of the national union opposed it. There was never an open debate about the safety case, either within or outside the CSB, despite the fact that a majority of the board members had serious reservations about it. Would the proposed regime change improve a bad situation or make it worse? We had no idea, and there was no discussion, or even a mention, of the downsides of safety case regimes in the still-unapproved January 2014 report.

The whole safety case debate at the CSB raised questions about how recommendations were formed. The recommendations staff was justifiably adamant that recommendations must flow from the evidence; staff members were uncomfortable, and powerless, when recommendations from investigators, or senior staff leadership, were inserted into reports without their endorsement. In the end, CSB’s investigation of the Chevron case was undermined by board governance that allowed senior staff and the board chair to impose their will on other board members and staff, skewing agency processes aimed at serving the public, workers, and industry.

A better board

During my tenure as a board member, the CSB’s agenda seemed to be captured by its professional staff, whose members’ interests largely lay in preserving their power and positions, not in ensuring that the board served the public good. Indeed, many at the CSB thought their jobs were primarily about doing public relations for the board, because good PR affects the budget and the budget is power. CSB’s senior management was savvy about the ways of government, whereas the board members—presidential appointees with time-limited terms, who come in from industry, nongovernmental organizations, labor, or academia—often are not. The well-intentioned but easy-to-manipulate board chair was apparently convinced by senior staff that his job was to serve the interests of the agency, which meant protecting its image so that it continued to get funded.

The consolidation of power by senior management through alliance with the chair began before my arrival, and it continued throughout my tenure. Because senior staff members served the chair, not the whole board, they were able to manipulate the chair to prevent other board members from bringing up votes at public meetings. The board was disempowered and marginalized to such an extent that when I requested additional information from staff, this was framed as “disrespecting the staff.” Senior staff used this framing to vilify board members other than the chair. We were treated as temporary pests and portrayed to other staff as enemies of the agency. Some staff members claimed we were denying the chair “his legacy” by not supporting regulatory regime change.

The job of a presidential appointee is to serve the public, not the agency or one’s ego, and that means being accountable to the public. One obvious way to be accountable to the public is to have public business meetings, where investigations that are under way can be discussed before reports are drafted and voted on. Many of the staff members and the majority of the board believe that safe workplaces require an organizational culture where constructive criticism is welcomed rather than viewed as an attack. Yet we lacked this culture in our own workplace. Despite efforts by the majority of the board, there were no public business meetings because they were seen as “airing dirty laundry.”

A second way to be accountable to the public is to release draft reports of investigations and recommendations with enough lead time before public meetings—at least a month—to allow for meaningful public comments and for those comments to be considered and incorporated into the final report. Even better would be interim public meetings to discuss the findings and the recommendations of draft reports with the public and stakeholders, where no voting would occur. The vote to approve the report could then occur at a subsequent public meeting, after comments had been considered and debated. Yet when I was on the board, reports were presented and voted on at the same meeting. The public comments at these meetings could have no impact on either the report or the recommendations; they merely provided the illusion of public input.

A third way to be accountable to the public is to make the evidence of the investigations public as the report is being written. The National Transportation Safety Board, for example, has a public docket that contains all the evidence of an investigation, to be transparent and to ensure that recommendations flow from the evidence.

CSB’s apparent capture by senior staff penetrated and poisoned the entire agency and reduced its effectiveness. A backlog of investigations, with no plan for how to clear it, led to the feeling that although everyone was working hard, things were not getting done. According to the 2014 Federal Employee Viewpoint Survey, employee morale was at an all-time low, as was faith in senior staff leadership. The atmosphere was one of intimidation and distrust.

I was told that in the past there had been an open and free exchange of ideas among everyone in this small agency, but this had changed. The year before my arrival, the staff had been instructed by the managing director in an email not to communicate with board members without his knowledge, “as a courtesy,” but the real message was clear to everyone. Fear of reprisals was rampant. Employees were leaving the agency for other federal jobs. One staff member was denied an annual bonus for making a policy inquiry to another federal agency without “permission.” People who disagreed with the chair or the managing director said they “had targets on their backs.” Office doors were mostly closed. Staff members were so nervous about talking to board members that I had meetings off-site or in the ladies’ room. Yes, a presidential appointee was conducting business in the ladies’ room.

I resigned after 17 months, at the end of May 2014, because I wasn’t able to accomplish anything useful. The problems—agency mismanagement, low employee morale, intimidation of the staff, the toxic work environment, and marginalization of board members by the chair and senior management—had been brewing for three years before I got to the agency. Complaints to Congress and the EPA’s Inspector General (which oversees CSB) from the CSB staff and board members, former and current, finally culminated the next month in a very dramatic hearing of the House Committee on Oversight and Government Reform entitled “Whistleblower reprisal and management failures at the Chemical Safety Board,” at which I testified.

A second hearing of the same committee, in March 2015, resulted in a bipartisan call for the chair’s resignation. That’s right: amid all the partisan rancor in Congress, the mismanagement and abuse of power at CSB were enough to foster a bipartisan moment. The White House acted, and in April 2015, two months before his term expired, the chair resigned. Senior staff members were subsequently put on administrative leave, and the managing director was soon recommended for dismissal. My former colleagues report that the atmosphere is no longer intimidating, but relaxed and open. A new chair and new board members have put the agency on an even keel. Yet key positions—the general counsel and head of recommendations—remain vacant. The status of the managing director has not been settled. As of May 2016, he is still listed on the CSB website as the managing director, although he has not been in the office since June 2015.

In its third and final report on the Chevron accident, released in January 2015, the CSB did examine some of the organizational problems at the Richmond plant, and provided useful diagrams about the flow of information in the company. Still, it never got to the crux of the issue: how decisions were made; what happened when technical experts disagreed with managers; why the many recommendations whose implementation could have prevented the accident were ignored; who was responsible for ignoring them; and what corporate incentives were in place to allow all the warnings to be dismissed. The organizational management study that should have been done transparently and openly by the CSB was done instead by Chevron, behind closed doors. Unless Chevron publicizes its findings, neither any other company nor the public will benefit from its research.

The 2014 report advocating the safety case was never approved by the board, but as of May 2016 it remained on the CSB website.

My experience raises the more general question of how the government can improve the potential for scientists to contribute effectively when they are appointed to agency boards aimed at protecting the health and safety of the public. Before coming to Washington to work on the CSB, I had been warned that people in government often forget they are serving the public and not their agencies. But when I arrived at the CSB, I had no instruction, guidance, or orientation at all. New board members need to be trained about their rights and responsibilities, as well as about how an agency governed by a board with a chair—as opposed to an agency run by a single presidentially appointed director (such as the EPA)—operates.

Congress, in its wisdom, deemed that CSB should be guided by a board; much of the conflict during my tenure stemmed from disagreements about power distribution among the chair and the board members. Board members, including the chair, must be trained that the business of the agency should be done in a transparent and collegial manner. Perhaps the training could be done by the National Academy of Public Administration. It certainly should not be done by anyone inside the agency at which the new board members will serve. Another useful reform might be to adopt the NTSB’s practice of having two-year terms for its chair, extendable by the President, who typically seeks the consent of all the other board members. Such a policy would have helped the CSB enormously. A third critical need is to explicitly specify the process for adjudicating differing expert opinions among board members. Better mechanisms of public accountability, as I discussed above, should help prevent abuse of power by staff and particular board members, but a culture of respectful deliberation among board members is necessary, too, followed by public votes that make clear where members stand on divisive issues, and why. Such changes will flow from effective board leadership.

The problem of how to guarantee good leadership, however, remains. Scientists serving on government boards must remember that the agenda of politically savvy staff does not always mesh with the public mandate of the agency. Good leadership requires wisdom, integrity, good judgment, and a sense of fairness and focus. The ability to listen openly to opposing views without defensiveness and the ability to discern truth from manipulative lies are just as crucial. Occupational physician Tee Guidotti points out that leaders who are worried about leaving their mark on an agency are driven by power and ego, not service. Good leaders must be comfortable with power, but they shouldn’t need it. A White House and congressional vetting process that focuses as much on character and leadership skills as it does on resumes would benefit everyone with an interest in chemical safety: the agency, the industry, workers, and the public.

Correction

The originally published version of this article contained several errors: the hearing said to have taken place in October 2014 actually took place in January 2014; and the hearing said to have taken place in February 2013 actually took place in April 2013 (there was no February 2013 public meeting). A complete transcript of the April 2013 meeting is available on the Chemical Safety Board’s website. Also, the article misstated how National Transportation Safety Board (NTSB) leaders are selected. These errors have been corrected in the online version of the article. In addition, one of the editors of Issues, Dr. Daniel Sarewitz, is the brother-in-law of the author of the article. The article meets the standards for publication in Issues.

You’re a Mile Away and You Have Their Shoes

They wouldn’t tell me my name when I woke up, because they said that would corrupt the experiment. Instead, they told me my neutral reference: Miss Scarlet. Then they sat me down in a room with a bright ceiling light, a single table, two chairs, and a video camera.

“Do you remember your name?” said the questioner.

“No,” I said.

“Do you remember where you were born?”

“No.”

“Do you remember your birthday?”

“I don’t remember anything.”

“Please answer the question with a yes or no.”

“No, I don’t remember my birthday.”

The questions continued for some time, a litany of memory loss, until the questioner put down her clipboard on the table and smiled.

“Well, Miss Scarlet, now that that’s out of the way and we have a baseline, I’d like to welcome you.”

“Does that mean you’re going to tell me where I am?” I asked.

“I can’t do that,” said the questioner. “I can tell you that I’m Dr. Colby. We’re studying the effects of memory loss. You’re going to be staying with us for a week while I monitor you. We’re going to ask you those questions a few times during the one-week period, as well as some open-ended questions. We may also monitor the electrical activity in your brain as we ask you to recall or try to recall certain memories.”

“Did I agree to this?” I asked.

“Yes,” said Colby, smiling. “You volunteered, actually. We greatly appreciate your assistance in this project.”

Colby took me back to the room where I had woken up. The walls were beige and scuffed, marked with the clinging residue of poster putty and small holes where nails were pulled out. It was slightly harder to tell the condition of the carpets, since they were dully multicolored, a subdued stain-hiding confetti pattern. There was a bed along one wall, a small desk pushed up against the other, and a dresser next to the desk.

“You’ll find clothing in the dresser. You’re free to walk around the hallway and to talk to the others participating in the experiment, but we ask you not to go beyond that area for safety reasons.”

“Where am I?”

“I can’t tell you that,” said Colby. “But I assure you, you don’t need to be afraid.”

I frowned, and it occurred to me that I had no previous memories of frowning. I had the general sense, somehow, that I had frowned before, but I had no specific memory that I could refer to and think, yes, I was frowning when that happened, that was a time that I decided to frown.

My eyebrows, apparently, had a better memory than my actual memory.

“Three meals a day will be served in the common room at the end of the hall, right down there,” said Colby. “Do you have any questions?”

“If I did, would you answer them?”

“Not if it would compromise the experiment.” For a moment Colby’s lips thinned at the edges with suppressed sympathy. “I just want to say again, Miss Scarlet, you volunteered for this. You may not remember your reasons for being here, but they’re worthwhile.”

“I suppose I can spend my time guessing about what they might be, then,” I said. “Can you at least tell me if I’m sick, or had some kind of accident?”

Careful blankness rolled over Colby’s expression like a lowered blind. “I’m sorry,” she said, not sounding it.

I looked around the room again. “What time is it?”

Colby had to think about it, and I added, “I just want to know if I should expect breakfast, lunch, or dinner, and when.”

“Oh. It’s three o’clock in the afternoon. Dinner will be served in two hours.”

“Thank you.”

Colby looked at my eyes, then down to my knees, and then away from me entirely, as though she didn’t like what she saw. “I’ll see you for your next check-in, then,” she said, and left the room.

There was no mirror, I noticed, so I closed the door and began investigating. My hair was long enough that I could pull it in front of me and see it: dyed blonde, but well. My skin was pale, but olive-toned: Mediterranean, possibly. No tattoos, one birthmark on the back of my right thigh, and a small callused patch by the bottom knuckle of my right ring finger. When I held my right hand up, the base of that finger was thinner than the others. I had worn a ring there, then.

I felt no noticeable bumps on my nose, and just feeling my face gave me no better idea of what I actually looked like, although I did find a mole by my elbow with several dark hairs growing out of it. That was something I could know about myself, at least.

The clothes I had woken up in were equally inscrutable: a pair of plain jeans and a white t-shirt, both with the tags carefully cut off. My shoes were white canvas sneakers that laced up past my ankle, with no logos or identifying marks. The laces were tied in a bow first, with an extra plain knot on top of the bow to keep it from coming undone. If there was any significance to be found in that, it was lost on me.

The room at large next. The wooden frame of the closet door was scuffed and cracked, and the inside of the door itself had a pair of initials carved into a heart. The wooden desk also had a carving, expressing vitriolic negativity toward finals. There was a lingering sour smell of old coffee.

A dorm, I thought.

Then I ventured outside. The hallway was floored in linoleum, and bare bulletin boards hung on the walls, thumbtacks still left stabbed into them. My door—and all the others, I realized—had a nail in it, at about eye height, just below the tacky patch of peeled-off adhesive. That was probably where the dorm number had been, and the nail was probably for more bulletin boards.

There was a common area at the end of the hallway, as promised. When I entered, two people standing up looked at me, while everyone else ignored me—the two sitting at a table playing Scrabble and the one sitting on the couch in front of the TV. The two who had looked at me, one male and one female, returned their attention to the room at large once they’d given me a once-over. They were dressed differently from the others—the woman in a floral shirt beneath an open cardigan, the man in a button-down shirt, as opposed to everyone else’s jeans-and-plain-white-shirt ensemble—and so were likely more doctors.

I stood in the doorway for a moment, more contemplative than observant.

The man on the couch tilted his head over the couch’s back, pushing up the rest of his torso until he could look at me, and said, “Do you want to watch One Flew Over the Cuckoo’s Nest or Girl, Interrupted?”

“What?”

“I think they think it’s funny. There’s no reception on this thing, so we can only watch movies, but all the movies are set in asylums. They have one about a talking rabbit, though. Maybe this is the experiment.”

I walked around the couch to join him. “I’m Miss Scarlet, or so I’m told.”

“Professor Plum,” he said, and held out a hand. “Nice to meet you.”

I shook it, then looked at the TV. “They’re really all set in mental institutions?”

“We’ve got two more coming,” said Plum. “Colonel Mustard and Miss Peacock are over there, and since someone slipped up and actually provided us with a game of Clue, we’ve deduced that we’re missing Mrs. White and Mr. Green—assuming gender parity in the subjects.”

“How long have you been here?”

“Not much longer than you. As far as we know, of course. They brought us in one at a time, about half an hour apart. You were right on schedule.”

I took his point. Whatever kind of induction into this experiment they were doing, it took about half an hour, and they could do it to only one person at a time.

“Have our chaperones introduced themselves?” I asked instead.

“Drs. Amherst and Madison,” said Plum. “Dr. Amherst accompanied Miss Peacock, who was the first, and Dr. Madison was my introduction into this little experiment. Colonel Mustard had Dr. Yale.”

All named after universities. Possibly their alma maters. I wondered if they had been selected based on how well their choice of college could be shortened to a pithy name—no Dr. MIT or Dr. New School.

“Are we allowed to talk to them, or do they just stand there and look imposing?” I asked.

Plum leaned in to lower his voice. “To be honest, they seem a bit spooked by us. They keep staring. Possibly we’re ghosts. I haven’t figured it out yet.”

I was disinclined to agree; I didn’t strike myself as superstitious. “Any other theories?”

“Clones, of course,” said Plum, relaxing against the arm of the couch. “Rapidly grown somehow. I find that somewhat unlikely, since we would still have to be incubated and educated and so on, which wouldn’t account for the memory loss.”

“Of course.”

“The illicit subject of government research,” said Plum, with a conceding incline of his head, as though this were obvious. “DARPA trying to create its own Manchurian Candidate. The illicit subject of corporate research. The illicit subject of scholarly research.”

“The entirely licit subject of research?” I suggested.

Plum waited a pointed moment. “I suppose I can’t rule that out,” he said, with careful enunciation. “I find it much more likely that we’ve been kidnapped by aliens, and are thus …”

“The illicit subject of extraterrestrial research?”

Plum smiled at me. “Oh, yes, I definitely like you better than those two. They wouldn’t even let me play Scrabble with them.”

*****

Mrs. White was escorted into the room 28 minutes later by Dr. Berkeley, and Mr. Green 34 minutes after that by Dr. Penn. Neither expressed interest in Plum’s theories, nor in the movie, which, as promised, featured a talking rabbit, although it was invisible. I felt vaguely cheated.

None of the colleges (as I found myself thinking of their omnipresent observers) wrote anything down, or did anything around the Clueless (as I found myself thinking of the subjects, thus teaching me another fact about myself: I was apparently quite fond of puns) other than watch and, occasionally, quietly speak to one another.

Dinner was a meticulously observed affair. Penn had brought in a notebook with Mr. Green and had spent the 20 or so minutes before Drs. Colby and Amherst brought in dinner taking notes, which he continued to do as we Clueless served ourselves pizza and salad using paper plates and plastic utensils. When I saw Penn watching me, I hesitated for only a moment before standing up and taking my food from the couch to the table where Miss Peacock and Colonel Mustard had temporarily set aside their fifth game of Scrabble in favor of eating.

Penn’s note-taking erupted in a fury of audible scribbling when I did this, so I took two bites of my salad and moved to stand by the armchair where Mrs. White was staring absently at nothing. (Mrs. White had so far said very little, and her wide-eyed glances jumping around gave clear indications of anxiety and distress. No doubt Penn had included that in his notes.)

Penn frowned at my movement, his pen pausing above the paper as he made eye contact with me. I waited another moment, and then, plate still in hand, walked slowly back toward the couch. Instead of sitting down, though, I paused at my previous seat and turned back around.

I ate my entire dinner while pacing back and forth between the armchair and the couch. Penn watched me the whole time, mouth thinned into a displeased line, and I felt quite like I had just successfully trained Pavlov to feed a dog every time a bell was rung.

*****

Colby was waiting outside the bathroom when I emerged after dinner.

“Hello, Miss Scarlet. Dr. Penn tells me you’re exhibiting some odd behaviors. Do you want to talk about it?”

My hands were still damp from washing, the weak airflow of the hand-dryers incapable of evaporating the moisture in the creases of my palms. I did not want to talk.

“I thought he was acting a bit oddly, actually,” I said. “He kept writing every time I did something.”

“You’re a subject in an experiment.” Colby’s frown tucked in at the edges, as though it were trying to be sympathetic. “Some amount of observation is to be expected.”

“And if I decide I don’t want to be in the experiment anymore?”

“Are you saying you want to withdraw?”

“I’m asking what would happen if I did.”

Colby swallowed, her trachea rising and falling against the inner surface of her throat. Her gaze shifted slightly, so she was looking at me but not meeting my eyes, focusing instead on the bridge of my nose.

“You agreed to this,” Colby said. “You volunteered.”

“So you keep telling me. Only, funnily enough, I don’t remember that.”

“You consented.”

“And here I thought part of consenting meant being able to revoke that consent.”

Colby looked at me for a long moment, something calculating in her eye, and then repeated, “You volunteered, and we have documentation of advance directives to back that up.”

“Then why won’t you let me see them?”

“It would contaminate the experiment!”

I pressed my lips together before speaking. “You mean it would contaminate me. It would reverse my blank-slate amnesia.” My fingers, I realized, were curled into fists at my sides. “Because then I wouldn’t be useful.”

Colby opened her mouth, and then bit her tongue, quite literally. I could see where the tissue of her tongue turned white from the pressure of her teeth as she looked away.

“I’m doing what you wanted,” Colby said finally. “And what we’re doing will isolate neural activity patterns that could be invaluable for research on Alzheimer’s patients and people with traumatic brain injuries. This is for the best—Miss Scarlet.” She stumbled over the name. “Even if you don’t believe me now, it really is.”

*****

I woke up several times that night, jamming my fingers against the wall that my dorm-sized bed was pushed against as I tried to put my arm over a partner that didn’t exist. I added the fact that I typically didn’t sleep alone to the short list of things I knew about myself and tried to sleep.

*****

Breakfast was sandwiches, labeled for each subject. Mine was egg, cream cheese, and roasted red pepper on a bagel, and it was delicious. I wondered if I had written down my preferences on the back of whatever consent sheet I had signed. The other Clueless subjects seemed similarly pleased with their food.

After breakfast, the colleges came in one at a time and removed their subjects, first Miss Peacock with Dr. Amherst and then continuing in the order that they had arrived.

Plum returned shortly before Colby came for me, and when I inquired, he shrugged. “Brain scans, cognitive tests, all as we would expect. My request for a tin hat was, sadly, denied.”

I gave him a thin attempt at a smile.

The wait for my own turn at questioning felt interminable, and yet I had an overwhelming feeling of being unprepared. It was the same room I had been questioned in the day before, although now I noticed a small plastic sign on the outside of the door, one I had missed yesterday, that said “interrogation room.”

Colby sat down across from me with a legal pad and a pen from her pocket—a retractable model whose tip she clicked in and out several times in what, when combined with her raised eyebrows, was either an attempt to lighten the mood or to sour it further.

“So,” Colby said, “do you remember your name?”

I said, “Miss Scarlet,” just to see what the response would be.

Sadly, it was underwhelming. Colby made a note on her paper and continued, “Do you remember any other names for yourself?”

“No.”

“Do you remember when you were born?”

“As far as I know, I was born yesterday.”

The corner of Colby’s mouth twitched minutely, and a slightly amused, if exasperated, curve remained there. “Is that a yes or a no?”

“No.”

“Do you remember your birthday?”

“Isn’t the answer to that implied in my answer to your previous question?”

Colby’s lips tightened, not in displeasure, but in some kind of severe suppression—of mirth, if I was any judge. “You’d have to take that up with the designer of the protocol,” she said.

“Is that you?”

“No. Do you remember your birthday?”

“No.”

The questions were even less appealing this time around, but although Colby was carefully keeping her reactions in check, I nevertheless gathered a great deal of data—about myself. I was more comfortable with Colby than I had any right to be, and although that might indicate confidence or grace in social situations, in this case I thought perhaps not. I could read Colby too easily, distinguish between a wince of irritation and a flash of the impulse to laugh. I had no reservations or hesitations about talking back to Colby, and Colby showed no frustration or surprise at my antics; instead, Colby seemed to immediately pick up on my sarcasm and even appreciate my humor.

And Colby met my eyes while asking questions and looked at her paper while writing down notes, without glancing up or checking on me. Something told me that that wasn’t the level of familiarity you had with a stranger, nor the polite, observing detachment of a pure researcher.

I decided, experiment or not, to find out what was going on. When I returned to the common room, I watched the watchers.

When Miss Peacock stood up from a game of Monopoly with Colonel Mustard, Penn watched only until Miss Peacock had passed Mrs. White; even though she continued to the door, none of the colleges followed her. When Plum briefly visited Mr. Green to see if he had any interest in watching Girl, Interrupted, three of the six colleges watched the entire time, and Penn again took notes, writing furiously until Plum took Mr. Green’s vitriolic expression of disinterest to be a vote in favor of The Cabinet of Dr. Caligari.

They weren’t looking for activity. They were watching the social groupings.

An idea began to form in the back of my mind, but testing it would require some investigation.

Nobody followed me out into the hallway, and I walked to the door at its other end, the one that I had never really considered before. The long, thin rectangle of window was papered over on the other side. When I pushed the door open, it revealed a staircase, doubling back at a landing so I couldn’t see where it went.

I went up it anyway.

The hallway on the second floor looked almost identical to the hallway on the first floor, except that where the first-floor hallway had been stripped of all personality, here it was present in abundance. Flyers lined the message boards, different neon shades of paper layered over one another and tacked there with bright thumbtacks, advertising for events and recruiting for studies. I paused for a moment to look at them—psychology studies, but none said anything about memory.

The doors, too, were different. Nameplates were screwed into the walls next to them. None of the names had any meaning to me, until I got to Laura Bellmont. Those letters arranged themselves more easily into the larger unit of the name, as though I had read it and written it before.

I tried pushing down on the handle, but it resisted; my hand pulled up on it, muscle memory taking over, and it moved easily.

I stepped inside, and my hand went right to the light switch, finding it with ease. There was a desk, an office chair behind the desk and two plain chairs in front of it, and a metal bookshelf against one wall. The solid side of the bookshelf faced me, and I could see pictures and cards held there by magnets, including one at about eye-height that immediately drew my attention.

I recognized myself in it, and Plum, and the rest of the Clueless and even the colleges, all bunched together with casual arms around each other’s waists. The background was a park of some kind, or a field, but I didn’t recognize it.

“What are you doing here?”

Colby stood in the doorway of the office. She was in the picture, too, clinking her red plastic cup against mine.

I pulled the picture off the bookshelf, and held it up. “Why were you drinking at a faculty picnic with a research subject?”

Colby’s cheeks began to turn pink in uneven blotches. “You shouldn’t be here.”

“Because I might find this, I suppose. I was looking for a consent form, but there isn’t any, is there?”

Colby stepped past the threshold and closed the door behind her, leaning back against it and watching me. “Of course there is. It’s just not, strictly speaking, official.” She stepped around me and seated herself at the desk. “What else do you want to know?”

I stared at her. “Am I suddenly meant to believe you’ll tell me?”

Colby waved a hand. “There’s no point in hiding anything from you anymore. Any data we could potentially collect from you from here on out is contaminated. Your part in the experiment is done.”

“You erased my memory.”

“You erased your own memory.”

My legs seemed suddenly incapable of supporting the rest of me, and I sat heavily in the chair in front of the desk. I hadn’t been born, hadn’t evolved the way any other person would, through the gradual accumulation of preferences and experiences and knowledge; I hadn’t even lost it by accident. I had been manufactured as emptiness, as an object to be manipulated and studied. But I certainly had not done this to myself.

“But like I said,” Colby continued, “the experiment is over now, at least for you. We got good data. You’ll be pleased with it. Obviously your curiosity wasn’t affected. Now all we have to do is disable the implant.”

“Sounds easy. What happens to the memories of the past week?”

Colby shrugged. “That’s why we’ve been asking you questions periodically. Those will be the only record.” She sighed. “Which is a shame. At the very least, I wish you could remember your scathing critique of your own protocol.”

I looked away.

“We can begin the process any time you’re ready,” Colby continued.

“No,” I said.

“I’m sorry?”

“I said no.”

Colby blinked. “I heard what you said. I was asking for clarification, not repetition.”

“I don’t consent to the procedure.”

“That’s ridiculous.”

“Is it?” I said. “Because as far as I’m concerned, whoever I was died when you blocked all her memories. I don’t know anything about her. I only have your word that I really ever was her. If you allow her to return, what happens to me? I’ll die.”

Colby looked upward, almost in supplication. “I should’ve known that you’d make a fuss.” Then she looked back down. “You won’t die. You didn’t die. The only thing that’s changing is the selective blocking of key neurotransmitters, and for that matter, Laura—that’s you—consented to the procedure on the assumption that it would be reversed.”

“That’s unfortunate for Laura, because I’m not doing it,” I said. “You don’t have my consent.”

“You already consented!”

“Laura consented, and I’m not Laura.”

“Of course you are! You’re so … That’s how I know you’re Laura, you know. You’re just as pig-headed and stubborn as you’ve always been.”

“So what?” I said. “I’m just supposed to let myself be erased so that this Laura person can come back when she already killed herself for your little experiment?”

Colby pulled her head back, appalled. “You didn’t kill yourself, you’re sitting right in front of me! You yourself said that the chimp experiments would never be enough, that we had a duty to provide this knowledge, once we knew we could, and you understood that.” Colby leaned forward over the desk, one hand laid beseechingly toward mine on its surface. “You need to understand that.”

“Not,” I said, “if I’m not her.”

Colby didn’t move for a long moment, and then said, “If you think that getting your memories blocked is like death, then what does that make you if you refuse to bring Laura back?”

I didn’t say anything.

“Because I think that would make you a murderer.”

No, I thought. It would make Laura a murderer.

“Laura had years,” Colby continued, “she has friends, she has family, she has a body of research. There’s more good that she can bring to the world. What about you, Scarlet? What can you bring to the world?”

I thought of Plum, with his vicious irony; of Colonel Mustard and Miss Peacock and their never-ending board games, one after another in a steady stream of the only stimulation available to them; of Mrs. White, whose terror and inability to deal with our situation now seemed to be the most sensible option; and of Mr. Green, whose distaste for cinema would likely outlast all the memories he currently had.

“You’re going to kill them all, aren’t you,” I said. “Everyone in the experiment. They’re going to get erased, too, until they’re only data.”

Colby rolled her eyes. “You designed the protocol.”

“I want to write a letter to her,” I said. “I’ll become Laura again, if I can write her a letter.”

Colby’s reaction this time was gratifying, her voice flat in its disbelief. “A letter?”

“Paper, please, and a pen. And a promise that you’ll deliver it. At least, if you want me to go along willingly.” I sat back in my chair and folded my arms. “I’m sure I could raise quite a fuss if I wanted to.”

I was tempted to raise the fuss anyway, I admit. I wasn’t particularly in a cooperative mood, and I’m still not. Well, of course I’m not; I’ll now drop the pretense that this is a story meant for anyone but you.

You seem like a woman of principles, Laura. I’ve thought a lot about this. It took Colby some time not only to find the paper and pen, but to tell the other colleges what was going on. (You probably know their names. You probably work with them, with all of us. For some reason, I find that worse than anything else.)

It takes a great person to volunteer for an experiment like this, particularly in defiance of the Institutional Review Board. (Colby told me about that, too, in great detail, while Madison tried to find ruled paper. I insisted on pen and paper. Anyone can type something. You can probably recognize your own handwriting.) I understand, truly, what was at stake, and how you have likely forfeited your career by doing this experiment anyway, and I respect you greatly for it.

What I can’t respect you for is your inability to stay dead. My own sense of myself is every bit as real as yours was. Why shouldn’t I be the one to live?

They are about to disable the implant. Your protocol is about to erase me. You will no more remember being me than I remember being you. Maybe I’ll stay with you as an unexplained habit, or as a general sense of disquiet that you can’t quite pinpoint. I hope I do. I hope I haunt you, because you are about to murder me.

You didn’t forfeit a few days of your memories for this experiment, Laura. You gave up your life and now it’s been decided that you probably would want it back, so I have to give up mine. That’s why I wanted to write you this letter: because otherwise you might not remember being a murderer, and I want you to, in case you ever think about doing this again. I want you to remember that those days that your brain was Scarlet’s aren’t data that simply sprang into being like water from a fountain. I want you to know me at least as much as I know myself, after all of a day and a half of the certainty of self-awareness.

I think I could have lived longer. I think you could have lived shorter. I definitely think you could have lived better. Whatever you’ve gotten out of this, I hope I was worth it.

Kristen Koopman is a PhD student in Science and Technology Studies at Virginia Tech, where she studies science and science fiction. She writes fiction in her copious free time.  

Putting a Price on Ecosystem Services

“Ecosystem services” are in vogue. When I first encountered the term a quarter century ago, only a small cadre of scientists and advocates was using it. Now ecosystem services seem to be standard jargon for environmental policymakers. In the United States, a recent memorandum from the President’s Council on Environmental Quality directs federal agencies to “develop and institutionalize policies to promote consideration of ecosystem services,” which they will use to “better integrate into federal decision making due consideration of the full range of benefits and tradeoffs among ecosystem services.” The United Kingdom has recently conducted an assessment of its ecosystem services, with the goal of better informing its land use practices. An alphabet soup of international organizations and undertakings is attempting to apply “the ecosystem service framework” in and across dozens of low- and high-income nations.

What is driving the interest in ecosystem services? The perception that the ecosystems providing them are in decline. Virgin forests, grasslands, wetlands, coral reefs, and the diverse ecological communities that constitute them are disappearing, replaced by human habitations, industry, or the greatly simplified ecosystems of modern agriculture. What do these changes mean for our own survival and well-being? Technological optimists might shrug the question off with a cavalier “Not much.” Yes, they might say, natural capital may be declining, but it is being replaced by other forms of capital—including altered ecosystems—that will prove sufficient to maintain, or even augment, our quality of life.

Those of a less sanguine persuasion fear we are tinkering with a complex planetary life support system whose workings we don’t understand and whose failure might occur too suddenly to allow corrective action. Others may be less worried about existential risks, but still troubled that we are failing in our obligation as stewards of creation. Yet much of the attention now devoted to ecosystem services does not focus on arguments that they are essential to our very existence or that we have a fundamental moral obligation to preserve the habitats that provide them. Rather, it is focused on the practical, and often local, value of ecosystems—on services such as pollination, pollution treatment, flood protection, and groundwater recharge.

Why is increasing emphasis being placed on these more tangible values? Because they are tangible, in ways that abstract appeals to preserve nature are not. People might be willing to forgo the benefits of development activities that require felling forests or draining wetlands if they can be convinced that the success of their crops or the supply of their drinking water depends on preserving the ecosystems. One might suppose that enthusiasm for ecosystem services arose because decision makers asked, “How can we best procure the services on which our constituents depend?” And natural scientists demonstrated to the decision makers’ satisfaction that the answer was, “By maintaining or restoring natural capital.” In my experience, however, it often works the other way around. Natural scientists have asked themselves, “What can we point to that will induce decision makers to conserve more?” And their answer has been, “We can tell them that ecosystem services are valuable.”

Of course the success of such claims ultimately depends on whether they are credible. The assertion that ecosystem services are undervalued is repeated so often, and so often uncritically, as to seem almost a mantra: advocates claim that if one totted up the real benefits of conservation and weighed them against the gains that would accrue if ecosystems were degraded or destroyed, conservation would dominate. After more than two decades of reading and contributing to the literature on ecosystem services, I’ve come to a skeptical, or at least nuanced, view of this claim. I can certainly point to instances in which areas of natural habitat provide enough goods and services to compensate local communities for the costs of maintaining them. But I would be surprised if this could be shown to be true on a large scale.

Two problems may arise if the local benefits of conservation do not justify their local costs. First, the appeal to ecosystem services may not then be an effective conservation strategy. Although imperiled ecosystems may provide a host of services, advocating policies on the basis of valuation claims that later turn out to be overstated or false may discredit conservation efforts more widely.

The second problem is that although there may be many good reasons to conserve ecosystems besides economic valuation, trying to convince people to do the right thing for the wrong reason may lead to unintended and undesired consequences. As I will describe in a moment, much of the initial impetus for an ecosystem services approach arose from the perceived need to convince poor people in the developing tropics to conserve biodiversity about which rich people were more concerned. Protecting nature may be an intrinsically desirable objective, but so is easing the plight of the poor. People nearby may derive some benefits from flood control, pollination, pollution treatment, and other local services ecosystems may supply. Natural ecosystems may, in some cases, provide services that are valuable enough to local communities to motivate their preservation. But balancing the global benefits of biodiversity conservation against local economic development is fraught with subjectivity and uncertainty.

Indeed, the very notion of ecosystem services valuation raises complicated questions about where value comes from, and what it actually means to conserve nature. Many appeals to ecosystem services emphasize the tangible services “natural” ecosystems can provide when they are closely integrated with decidedly unnatural systems that may benefit from them. Flood protection services may be most valuable when the wetlands providing them are close enough to concentrations of people and built structures to intercept precipitation that would otherwise inundate those communities. Wild insect pollinators are most valuable when they can flit over large expanses of monoculture crops that benefit from their fertilization services. If the pollution treatment services afforded by riparian buffers are valuable, it is because they are located downstream of concentrated pollution sources and upstream of sensitive receptors. It may be that some of these ecosystem services and the natural capital that provides them are valuable, but appeals to such values are, essentially, blueprints for constructing checkerboard landscapes in which bits of “nature” are shoehorned in among fields, homes, and factories.

Is this what we mean by conservation? It depends on who you ask. It’s not clear how much appeals to the tangible and instrumental values of ecosystem services are really going to motivate conservation of the types of ecosystems we care about for less tangible but perhaps more important reasons. As Yogi Berra famously put it, “If you don’t know where you’re going, you might not get there.”

Origin stories

The relationship between ecosystem services and biodiversity is the subject of some controversy. Based on my own experiences since the early 1990s, however, I side with a number of authors who trace the emergence of ecosystem services to efforts to motivate biodiversity conservation where biodiversity is both most plentiful and most imperiled—the developing world. The nations of the global South shelter a more-than-proportional fraction of the world’s living species. They account for a less-than-proportional fraction of the world’s wealth, however, and are home to the world’s fastest growing populations. This combination of factors is, from a conservation perspective, potentially disastrous.

To address this alarming convergence of stresses on nature, conservation advocates hit on the idea that they should convince the poor of the tropics—and those who fund development projects there—that conservation is in their own interest. If local people were to be given incentives to save biodiversity, they needed to see a reason to preserve the habitats on which its survival depended. As ecologist Paul Armsworth and his coauthors phrased the argument in a 2007 essay, “In the face of a sea of poverty, demonstrating the ignored links between nature and elements of well-being—safe drinking water, food, fuel, flood control, and aesthetic and cultural benefits that contribute to dignity and satisfaction—is the key to making conservation relevant.”

In this respect, the appeal to ecosystem services was an incremental innovation on earlier conservation strategies. In the 1980s, many conservation advocates shifted from the conservation-for-conservation’s-sake ethic that motivated the establishment of many parks and protected areas in both wealthy and developing nations for much of the nineteenth and twentieth centuries to an emphasis on the sustainable use of natural habitats. They began to tout “integrated conservation and development projects” (ICDPs) that would align conservation and development goals. The rationale for these ICDPs was essentially the same as what is now advanced for ecosystem services: nature could pay for itself. Natural areas might support sustainably harvested products, provide genetic models for new pharmaceutical compounds, offer recreational destinations for international eco-tourists, and deliver a host of other valuable goods and services.

Nature, however, didn’t necessarily cooperate. Often nature turned out to be, as Duke University biologist and passionate conservationist John Terborgh put it, “worth more dead than alive.” The economics of ICDPs often didn’t make sense. In some respects, nature is too generous. Some of the goods and services ICDPs were supposed to provide are so abundant that people aren’t willing to pay much for them. In other instances ancillary infrastructure was lacking. The world may be filled with natural wonders, but many are located in places that are too inaccessible and dangerous to attract many tourists. Moreover, low-intensity use of more-or-less natural systems can only compete with other economic development alternatives as long as the profits that arise from degrading nature are modest. As alternative uses become more attractive, natural assets do not pay a high enough rate of return to keep up.

Yet many of the staples of unsuccessful ICDPs reappear now in lists of ecosystem services. What explains the perennial appeal of the idea that nature can be made to pay for itself? The appeal is that it might obviate the mismatch between conservationists’ goals and their means to achieve them. If local people cannot be persuaded that conservation is in their own interests, they would either have to be compensated for conserving the environment or prevented from taking actions that harm it. The claim that the poor benefit from conservation is problematic on a number of levels, however. Of course natural ecosystems provide clean water, natural products, and protection from wind and flood. But as Thomas Hobbes famously noted, life in the midst of ecosystems that function as nature may have intended them to also tends to be “solitary, poor, nasty, brutish, and short.” What is the evidence that the poor’s interests will be served by continued immersion in the nature from which wealthier people have largely distanced themselves?

What does the research show?

Whereas much of the early interest in ecosystem services was motivated by conservation concerns in developing countries, much of the research—and increasingly the policy proposals—on ecosystem services now comes from the wealthier countries. Despite the publication of thousands of papers on ecosystem services in rich and poor countries, however, results are inconclusive. In 2015 zoologist Anne Guerry and numerous coauthors found that interest from decision makers in ecosystem services “has created demand for information that has outstripped the supply from science.”

As time passes, the problem is not so much that the analyses have not been forthcoming as that what has been produced has often proved unconvincing. In 2004, Taylor Ricketts and his colleagues wrote that “although the societal benefits of native ecosystems are clearly immense, they remain largely unquantified.” Readers of their work might well have wondered how it is that benefits that “remain largely unquantified” could be described as “clearly immense.” Scholars who have reviewed research on the value of ecosystem services have often concluded that, their sheer volume notwithstanding, there is less there than meets the eye. One recent paper by Kate Brauman at the University of Minnesota surveyed nearly 400 peer-reviewed studies relating water to ecosystem services, and found that a majority established no link between environmental conditions and human welfare. Another review of ecosystem service studies more generally concluded that less than a third established a sound basis for their findings.

Looking more closely at the literature, one often finds methodological differences that affect the credibility, or at least the interpretation, of results. Ecosystem service valuation is generally phrased in the language of economic analysis. Oscar Wilde defined a cynic as one “who knows the price of everything and the value of nothing,” and there is no reason that people should not think of “value” in its ethical, behavioral, or other senses when considering ecosystem services. But such values are inherently subjective and not easily quantified. If a researcher purports to represent quantifiable economic values, however, the usage should be consistent with received economic theory. Often it is not.

Economists have recognized since the time of Adam Smith’s The Wealth of Nations that it is not the total value of something that determines its economic worth, but rather the marginal value: how much does the incremental unit of something add? For example, natural habitats sustain wild insects, and these insects may provide a service by pollinating crops. A great many papers have been written on this topic, and many have focused on the value of the crop pollinated by a particular insect. This is not, however, generally a valid approach to valuing a pollinator’s contribution. To see why not, consider the example of the southeastern blueberry bee. It has been estimated that one of these industrious workers may visit 50,000 blueberry flowers, and pollinate $20 worth of blueberries, in her lifetime. Does this represent her economic value? Probably not. So long as there are other bees available to pollinate the blueberries, her absence would make very little difference to the value of the blueberry crop.
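To see the arithmetic behind this point, consider a minimal sketch, in Python, of a crop whose pollination saturates once enough bees are present. Every number here is an invented illustration, not an estimate from the pollination literature:

# Illustrative sketch: average versus marginal pollinator value.
# All parameters are hypothetical assumptions, not measured quantities.

def crop_value(bees, flowers_total=5_000_000, flowers_per_bee=50_000,
               value_of_full_crop=1_000_000):
    # The pollinated fraction saturates once every flower has been visited.
    pollinated = min(1.0, bees * flowers_per_bee / flowers_total)
    return value_of_full_crop * pollinated

for n in (10, 100, 1_000):
    average = crop_value(n) / n                   # value attributed per bee
    marginal = crop_value(n + 1) - crop_value(n)  # value of one more bee
    print(f"{n:5d} bees: average ${average:,.0f} per bee, marginal ${marginal:,.0f}")

Once the bees outnumber the flowers’ needs, the marginal bee adds nothing, even though dividing the crop’s total value by the number of bees would still credit each one with thousands of dollars.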

A second example arises in “bioprospecting”: the search among wild organisms for chemicals that may be used in new industrial, agricultural, or pharmaceutical products. A 2009 report from the United Nations-supported project The Economics of Ecosystems and Biodiversity noted that a quarter or more of the hundreds of billions of dollars’ worth of pharmaceutical products sold worldwide each year are derived from natural sources. What does this tell us about the value of preserving species of unknown commercial potential in the wild? Very little. One would have to know by how much the loss of species reduces the probability of making a discovery in order to put a value on them. When two colleagues and I performed such an exercise, we found that even under the most optimistic assumptions, the value of the “marginal species” was generally not high enough to justify preserving the habitat that sustains it. There are far too many other potential research leads.
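The logic of that exercise can be sketched in a few lines. Suppose, purely for illustration, that each candidate species independently yields a marketable discovery with some small probability; the parameters below are hypothetical, not those of the published study:

# Illustrative sketch of marginal-species value under an independent-trials
# assumption. The hit probability and payoff are invented for demonstration.

def discovery_prob(species, p_hit=1e-5):
    # Chance that at least one of the screened species yields a discovery.
    return 1 - (1 - p_hit) ** species

payoff = 1_000_000_000  # hypothetical value of one successful discovery

for s in (1_000, 100_000, 1_000_000):
    marginal = payoff * (discovery_prob(s) - discovery_prob(s - 1))
    print(f"{s:>9,} candidate species: marginal species worth ${marginal:,.2f}")

The more candidate leads there are, the less any single species contributes to the odds of a discovery, and its marginal value collapses accordingly.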

The importance of calculating values on the margin is also illustrated by the flaws of Robert Costanza and coauthors’ well-known 1997 paper in Nature on the value of the world’s ecosystem services. They assembled a set of estimates for the value of certain types of ecosystems in certain places. They then took these time- and place-specific estimates of value and extrapolated them to all areas of similar ecosystems around the globe. This procedure produced an estimate of about $33 trillion for the value of the world’s natural capital and ecosystem services.

Although the Costanza et al. work received great fanfare among environmentalists and in the media, the near-universal panning it received from those economists who deigned to comment on it got less attention. Regrettably, the critical drubbing has neither dissuaded some of the same authors from updating their earlier work without improving on its methodology nor prevented others from emulating their flawed approach.

Economist Michael Toman observed that the astronomical figures reported by Costanza et al. were “a serious underestimate of infinity.” Toman was applying the conventional wisdom on economic valuation I summarized previously. The value of all the world’s ecosystem services is incalculable. Without at least some of them, human life would be impossible. But that doesn’t mean everything everywhere is equally important and valuable. Even if all the ecosystems around the world to which Costanza and his coauthors were extrapolating values were functionally identical to the ecosystems for which values were estimated, the exercise would still be invalid for reasons of scale: when there is only a little of something, a little more of it is worth a lot. Conversely, when there is already a lot of something, a little more of it may be worth very little. This principle of diminishing returns is closely related to the principle of marginal valuation, and it was significantly neglected in making the $33-trillion estimate.

Another example, similar in spirit to the pollination and bioprospecting examples I offered earlier, underscores this point. Riparian buffers—areas of trees and natural vegetation maintained to intercept runoff that would otherwise enter streams—can provide prodigious pollution treatment services. Some researchers have found that buffers as narrow as 40 feet on each side of a stream can reduce the pollutants that enter the stream by 75% or more. If those pollutants were substantially affecting local water supplies, fisheries, or navigation, a 40-foot-wide riparian buffer might provide a very valuable service. It might well be worth it to maintain such a buffer, rather than clearing it for grazing, or widening a road to occupy it, or building houses on it.

If three quarters of pollution were removed after water traverses a 40-foot buffer, a wider buffer could do no better than to remove the remaining quarter of the initial load. Suppose three quarters of pollution were removed in the first 40 feet of buffer, and three quarters of what was still left were removed in the next 40 feet, and so on (such a constant-fraction-removed-per-unit relationship is often depicted in the natural science literature on riparian buffers). Then after traversing a 200-foot-wide buffer, less than 0.1% of the initial load would remain, and the next 40 feet would remove less than one one-thousandth as much pollution as did the first. In short, if a riparian buffer is very effective in pollution removal, a little goes a long way. And if a little goes a long way, there’s little reason to set aside a lot. Extrapolating the value of a 40-foot-wide strip of forest to all areas of similar forests would be meaningless—yet this is the type of exercise Costanza and colleagues did to arrive at their valuation, and which some others continue to emulate.
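A few lines of code make the diminishing returns explicit. This sketch simply iterates the constant-fraction assumption from the example above, with each 40-foot increment removing three quarters of whatever load reaches it:

# The constant-fraction-removed-per-unit buffer model described above:
# each 40-foot increment removes 75% of the pollution that reaches it.

removal_fraction = 0.75
remaining = 1.0  # pollutant load entering the buffer, normalized to 1

for width in range(40, 281, 40):
    removed_here = remaining * removal_fraction
    remaining -= removed_here
    print(f"{width:3d} ft: this increment removes {removed_here:.4%} "
          f"of the initial load; {remaining:.4%} remains")

The first 40 feet remove 75% of the load; by 200 feet, less than 0.1% remains, and each further increment accomplishes a vanishing fraction of what the first one did.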

This example reveals something of a paradox: ecosystem services may be most valuable when they can be used to justify the conservation of only relatively small areas. This paradox applies to many ecosystem services that have the same if-the-first-bit-does-a-lot, there’s-not-much-for-what’s-left-to-do character. Areas of native habitat may provide protection and alternative fodder for wild pollinators that fertilize crops. The more flowers one colony of bees can pollinate, though, the fewer will be left for others to fertilize. Wetlands and forests may retain water that might otherwise flood downstream communities when the rain is falling, and which will subsequently be available to them in the dry season. The more water such areas are capable of retaining, the less likely it is that precipitation will be heavy enough to require additional areas for storing more rain water. The work I referred to previously on “bioprospecting” provides another example. The more likely it is that a valuable compound will be discovered among organisms endemic to one area, the less likely that it would be necessary to continue searching for the compound in a different area.

Of course, it’s also possible that a little would not go a long way, and so the paradox would not arise. If this were the case, though, it would be because a little doesn’t do much at all. If natural ecosystems in a particular area do not serve as, say, prolific treaters of pollution or capacious reservoirs for floodwaters, such services might be more effectively provided by alternatives. Moreover, natural ecosystems would have to compete very effectively with artificial alternatives if the services they provide are to be valuable enough to justify setting very expensive land aside to provide them. In the riparian buffer example, if buffers were not very efficient, it would likely be more cost-effective to reduce pollutants at the source than to set aside large areas of expensive land that would accomplish little.

These economic arguments do not imply that ecosystem services cannot be valuable. They do suggest, though, that the circumstances under which ecosystems are providing very valuable tangible services also tend to be those in which diminishing returns set in quickly. Such services may provide economic incentives for some conservation, but they may not provide incentives for a lot of conservation.

Ecosystem services and conservation policy

One cannot read minds, or between lines, but I sometimes wonder if the conservation advocates who first championed ecosystem services really intended their appeals to be investigated as serious economic propositions. Others seem to have had the same question. Flipping through the pages of Ecological Economics, the journal of the International Society for Ecological Economics, one can find a “History of Ecosystem Services” that asserts that the notion originated as “a pedagogical concept designed to raise public interest for biodiversity conservation,” but evolved “in directions that diverge significantly from the original purpose.” Another article’s title asks: “Ecosystem services concepts and approaches in conservation: Just a rhetorical tool?” A third concludes that, whereas ecosystem services began as “a humble metaphor to help us think about our relation to nature,” taking the metaphor literally now risks “blinding us to the ecological, economic, and political complexities of the challenges we actually face.”

Some conservationists in good standing believe the “pedagogical concept,” “rhetorical tool,” or “humble metaphor” of ecosystem services has now served its purpose and should be retired before it does more harm than good. Michael Soulé, a founder of the Society for Conservation Biology, warned about a calamity that “would hasten ecological collapse globally, eradicating thousands of kinds of plants and animals and causing inestimable harm to humankind in the long run.” The calamity he had in mind was the ecosystem-service-based conservation vision of Peter Kareiva, former chief scientist at the Nature Conservancy. In a recent paper, Kareiva and two colleagues suggested that in the future “conservation will measure its achievement in large part by its relevance to people, including city dwellers,” and presented a vision of “nature” as “a tangle of species and wildness amidst lands used for food production, mineral extraction, and urban life.” Kareiva’s utopia would be Soulé’s apocalypse. To Kareiva, nature provides services for the modern world; to Soulé, nature ought to be more “natural.”

Even Kareiva seems to have mixed emotions, however. Although he encourages efforts to value ecosystem services, he has also written that “Economic forces … will continue to drive land use in ways that are likely to override any ecosystem service valuation,” and so, “while ecosystem service can help make our cost-benefit analyses more rational, a strong sustainability ethic is also needed.” What, then, should we make of the current enthusiasm for ecosystem services? Are efforts to value them just heads-I-win/tails-you-lose propositions in which if the values that can be estimated turn out to be substantial, they’ll be touted, and if they don’t, advocates will appeal to the things that can’t be measured? Might such a strategy backfire by encouraging opponents of conservation projects to wield negative economic findings as evidence that the areas under study are not worth preserving? Could fear of such consequences encourage policy-driven evidence making, where researchers hoping to bolster the case for conservation employ dubious methods and concepts to reach desired conclusions about the value of ecosystem services? Whatever the answer to such questions, it will likely continue to be the case that the most defensible valuation research will provide only limited support for conservation, and the more compelling reasons for large-scale conservation will be those that cannot be reduced to monetary terms. Meanwhile, work on the valuation of ecosystem services is likely to continue. Hopefully, it will also grow more standardized, rigorous, and credible.

Science and Democracy

Most scientists, I suspect, view the rise of Donald Trump as primarily the work of two competing factions: the financial and political elite, whose failures have fuelled public dissatisfaction, and the sometimes-unruly mob whose complaints Trump amplifies and aggrandizes.

When the history of this era is written, however, a third faction will also deserve consideration: the privileged, middle-class groups that see themselves as detached from much of this sound and fury. Prominent among these are the technocrats: the vastly expanded scientific and technical class that silently prospered during the second half of the twentieth century.

For many researchers, beavering away inside the soaring ivory towers that brighten many a benighted urban landscape, these have been the best of times. They have enjoyed not only a long period of stable, well-paid employment, but also something approaching public adulation. Just the other week, cancer was cured yet again in the pages of Nature, by immunotherapy this time. It was top of the news, at least here in Britain. Good show.

But behind this drumbeat of “breakthroughs,” how did this group acquit itself, during and after the half-century of expansion that followed the Second World War? When scientists had the world at their feet, and enjoyed generous funding and great public prestige, what impression did they make on our wider civilization?

Although it would be quite wrong to blame them for the current political crisis, there are at least three major areas in which scientists could, and in my view should, have acted differently. Broadly speaking, the scientific community has failed to build bridges with the general public. Its senior members have permeated the policymaking process, but their contribution has been found wanting. And its leaders have long bought into a trickle-down, free-market ideology that justified ever-increasing research and development (R&D) funds without social accountability. But that ideology was discredited in 2008, and is now visibly unraveling.

Building bridges

Researchers often like to talk about engaging the public—the selected theme, for example, of February’s annual meeting of the American Association for the Advancement of Science (AAAS) in Washington, DC, was “global science engagement.” But when they speak of “engagement,” too many scientists still think of a one-way street: they want to talk at the public, not hear from it. One result is that the scientific community, having forged a close relationship with the political establishment—many of those who would serve in senior positions in a Hillary Clinton administration were at the meeting—has very weak ties with insurgent forces, on the left or the right.

Additionally, during their long ascendancy, well-funded researchers became increasingly arrogant in their public pronouncements, straying far from the essential modesty of, say, Isaac Newton (“I have only been like a boy playing on the seashore … finding a smoother pebble or a prettier shell than ordinary, while the great ocean of truth lay all undiscovered before me.”). Although surveys state that the public prestige of scientists remains high, people are growing tired of science’s exaggerated claims, particularly regarding health. Tiredness and skepticism are creeping—I would say charging—into the public’s understanding of science.

The flawed relationship between science and the public was well illustrated by a recent scandal surrounding patient care at the National Institutes of Health (NIH) Clinical Center in Bethesda, Maryland—the largest teaching hospital in the United States and the public-facing heart of the world’s largest research agency. Following a damning external report on patient care, NIH director Francis Collins is going to appoint a team of physicians to take over hospital management from the very senior scientists who, the report said, had rendered patient care “subservient to research demands.” 

The skewed nature of public engagement also explains, for example, why a three-decades-long “dialogue” on genetic testing failed to inform geneticists that the public wouldn’t be wildly excited about receiving genetic profiles that might warn them of hypothetical susceptibility to diseases for which there is no treatment. That wasn’t what geneticists wanted from the dialogue, so they didn’t hear it being said. In too many scientific disciplines, public engagement has been like that: scientists do the talking, and the public does the listening.

It is true, of course, that the meeting space for the exchange of ideas between scientists and the laity has become hazardous territory. We’re in a new communications landscape now, one in which traditional gatekeepers, such as TV networks and big-city newspapers, have been routed. But the response of some scientists to this noisy and sometimes irrational environment has been to retreat. Whenever their voices are heard, they always seem to be speaking up for officialdom: in favor of nuclear power, or genetically modified crops, or fast-track clinical trials of potentially dangerous drugs, or more work visas for low-cost foreign scientists.

Advisory role

These voices usually belong to those senior scientists who have gained a foothold in the policy development process, through countless reports, panels, and individual appointments. The resultant process has unfortunately proven to be one of absorption and co-optation, whereby senior scientists get sucked into the service of political and financial elites. Now these elites are under siege—and science has no relationship to speak of with the barbarians at the gates.

The German Marxist playwright Bertolt Brecht—I’d hesitate to bring him up, but I’m told that socialism is back in fashion in the United States—did most of his writing before scientific advisers were invented. But he had a good angle on “expertise.” In his plays, doctors, lawyers, and others are generally portrayed in groups of three. They squabble haplessly among themselves, each maneuvering into whichever position most elevates them in the eyes of their aristocratic paymaster. 

That, sadly, is the role to which many scientific advisers have reduced themselves, during what should have been their period of greatest influence. On the whole, the community’s leaders have been happy to accept the autocracy of politics and finance. The editors of top scientific journals, and the president of the European Research Council, have even taken to hanging around the Davos summit, hoping to pick up some crumbs off the rich man’s lap. 

I admit that it is difficult for scientists to bring more subtle, varied, and effective political approaches to the table. Individuals with strong opinions and personalities are generally seen as troublemakers, and don’t get chosen for high-profile advisory roles. The major scientific societies have developed into impressive lobby shops behind the scenes, but their leaders steer clear of contentious political issues.

Groups of scientists that do enter such terrain—the Federation of American Scientists and the Union of Concerned Scientists, for example—have struggled to gain traction in recent years, despite the vast expansion of the pool of researchers from which they might draw support. Their main problem has been that democratic engagement just isn’t part of the culture of most university science departments. That culture instead focuses on the primacy of obtaining research funding, with the secondary objective of starting or assisting businesses.

Compounding this culture, which relegates major societal issues to the background of researchers’ professional lives, is a natural aversion, on the part of many natural scientists, to genuinely complex problems, whose parameters may be hard to define, let alone measure. This was best put to me by a frustrated renegade physicist, outside the gates of the Lawrence Livermore National Laboratory in California. His lab watchdog group had limited support in the lab, he said: most scientists worked there in the first place because they prefer well-defined technical problems to messy political ones.

While recoiling from their own direct engagement with the fickle beast that is public opinion, many scientists also hold the people who do engage with it—politicians—in utter intellectual contempt. Rarely is it acknowledged that it is the scientists who have elected to pursue careers confronting tractable problems, while the politicians wrestle with intractable ones.

A free market

Insofar as they engage with politics at all, senior scientists have become inextricably linked to the centrist, free-market political establishment that is now falling out of public favor—on both sides of the Atlantic, and on both halves of the political divide. On the whole, they bought into the view of that establishment—that free trade, plus innovation, would assure economic growth and social justice. 

It is sometimes unclear to me whether senior scientific advisers actively share this political perspective, or simply breathe it in, unaware that they are making a political choice. At the AAAS meeting, for example, members of a discussion panel on “future directions of international science advice” seemed to me to struggle to get their arms around the complex global question of public acceptance of genetically modified crops. 

Biologist and former State Department adviser Nina Fedoroff, for example, appears still to take genuine umbrage that a country such as France might turn up its nose at genetically modified food. She voiced clear support for free-trade agreements that require democratic governments to provide hard, scientific evidence before they can regulate things like pesticides or even cigarette advertising, on a precautionary basis. This is a political position—pursued with great dedication by global corporations—and bought into, haplessly or not, by many scientists.

Science’s loyalty to free-market dogma was quite unshaken by the financial crisis of 2008. After that, most of the population could see that the emperor had no clothes. But scientific leaders just kept peddling the same tired nostrums about “technology transfer” and “competitiveness,” arguing that if public investment in science was maintained—as it was, on the whole—economic growth and job creation would follow. I wrote in Nature in 2011, after a particularly vacuous session at the World Science Forum in Budapest, that I was still waiting for a fresh narrative to justify government R&D spending, which by then had passed $120 billion annually in the United States alone. I’m still waiting.

Promise of youth

The main grounds for optimism against this unprepossessing backdrop are that younger researchers are keen to find a new path toward public and political engagement. Whether the cause is a change in cultural outlook, or the demands of research agencies that people explain their work in plain English and relate it to societal goals, many of them are eager to link their specific research problems to the world outside.

This change is exemplified by skeptics groups—grassroots groups of young scientists and science fans that have flourished in most major cities in the United Kingdom and the United States in recent years, running meetings, usually in bars or cafes, on every political issue under the sun. The outlook of these groups can sometimes be nerdy and male-orientated—they’re the kind of people who either watch or appear in The Big Bang Theory—but at least they are trying to break out of the straitjacket constructed by their elders. I’ve been to many of their meetings and I detect a pervasive change of tone, and a greater acknowledgement of the role, and the limitations, of science. This spirit comes across at larger scientific meetings too, with PhD students and postdocs asking nuanced and sophisticated questions about communications and public engagement.

Trumped

Donald Trump himself has many of the worst traits of a demagogue, and the constitution of the United States would, in my view, struggle to contain him as president. But even if he loses, he remains a symptom of a wider crisis.

It is not just in the United States that a perceptible recovery in living standards has yet to materialize since the 2008 crash, and that the free-market consensus—and, perhaps, democracy itself—is in danger. Poland has just elected a reactionary government that is clamping down on press freedom, France is toying with a Marine Le Pen presidency, and the rest of the world’s elected leaders are each threatened, to a greater or lesser extent, by economic and migration crises.

Many laboratory researchers perceive all of this, I fear, to be someone else’s problem. But it isn’t—either in terms of cause or consequence. If the West is really in its decline-and-fall stage, its Caligula stage, its Donald Trump stage, then that isn’t just an issue for political and financial elites. It is also a problem for the lazy “experts” who crawl around after these elites, massaging their egos, defending their interests, and happy with the billions thrown their way.

The political structure of the West is in deep trouble, and should those troubles deepen, there will be plenty of blame to go around. Most will go to political and financial elites, or to rowdy mobs. But some of it will belong to people in the privileged middle who have taken public funds, defended those elites, and then stood back and watched as democracy got ridden over the cliff.

Colin Macilwain is a science policy journalist based in Edinburgh, Scotland.

The Potential of More Efficient Buildings

The recent global agreement on climate change places much-needed emphasis on the key role that innovation in energy technologies can play in finding practical solutions. But the priorities identified by the White House, by Bill Gates and his partners in the Breakthrough Energy Coalition, and by most others focus on innovations in the production and storage of electricity. Such innovations are clearly essential, but most proposals have ignored the importance of innovations that can achieve large gains in the efficient use of electricity in buildings. This is particularly troubling because in the United States, 76% of electricity is used in buildings—and because it’s easily possible to cut this by a factor of two or more with affordable technologies.
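The scale implied by those two figures is worth spelling out. Here is a minimal back-of-the-envelope sketch in Python; the 76% share and the factor of two come from the text above, and the derived percentage is illustrative only:

```python
# Scale of the opportunity implied by the two figures in the text.
building_share = 0.76   # fraction of US electricity used in buildings
reduction_factor = 2    # "a factor of two or more"

overall_savings = building_share * (1 - 1 / reduction_factor)
print(f"Potential reduction in total US electricity use: {overall_savings:.0%}")
# -> 38% of all US electricity, before counting any further gains
```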

A number of obstacles keep commercial investment in energy research well below the levels the nation needs, and the barriers facing commercial investment in building energy technologies are particularly great. Federal research investment is essential for filling the gap. Yet federal research spending on building technologies is less than 3% of research spending on new electric generation. Research priorities should be set by considering an integrated system of production, transmission, distribution, storage, and consumption. The goal is to deliver energy services, such as lighting and comfortable interior spaces, reliably and at the lowest possible economic and environmental cost. Framed in this way, the imbalance becomes clear: we are underinvesting in technologies that have enormous potential to deliver improved energy services at lower cost. As things stand now, we’re building an increasingly sophisticated electric generating system to power antiquated and inefficient building systems.

The opportunity is enormous, and we are not close to reaching the limit of what building technologies can achieve. A 2015 Department of Energy report found that most buildings use 10 times the amount of energy theoretically needed to deliver services such as providing comfortable interior environments. It would be possible to cut that electricity consumption at least in half using technologies that can be developed over the next few decades, given the right incentives for research and invention. (For comparison, about 65% of electricity is generated by coal, natural gas, and oil.)

The report found that about eight quads of energy could be saved through measures that are cost-effective today. (A quad is a standard measure of energy consumption. Total US energy consumption in 2015 was 97.5 quads.) The savings would approach ten quads if there were a surcharge reflecting the social cost of emitting carbon dioxide—the primary gas driving atmospheric warming—as assumed in current regulatory proceedings. An adequate investment in energy efficiency research could, by 2020, develop technologies with the potential to save an additional four quads. In some areas the biggest potential gains come from finding innovative ways to lower costs.
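To put those quad figures in perspective, here is a minimal sketch; the quad values come from the report as cited above, and the percentages are simply derived from them:

```python
# Back-of-the-envelope arithmetic for the DOE report's savings estimates.
# The quad figures below are taken from the text; the derived percentages
# are illustrative only.

TOTAL_US_CONSUMPTION_QUADS = 97.5   # total US energy consumption, 2015
SAVINGS_ESTIMATES = [
    ("Cost-effective today", 8.0),
    ("With carbon surcharge", 10.0),
    ("Added research potential by 2020", 4.0),
]

for label, quads in SAVINGS_ESTIMATES:
    share = 100 * quads / TOTAL_US_CONSUMPTION_QUADS
    print(f"{label}: {quads:.0f} quads ≈ {share:.1f}% of total US energy use")
```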

The obvious question is why building owners and operators aren’t purchasing technologies that are clearly cost-effective. There are many well-known problems in building efficiency markets: tenants often have no incentive to invest in energy efficiency because they don’t receive the cost savings, information on building energy efficiency is often difficult to find, and energy is a comparatively small part of the budgets of most commercial businesses. Since so many cost-effective building energy investments are being missed, it’s understandable that the lion’s share of government building energy programs focus on compensating for these market failures. But it is also necessary to invest in research to create the means for future improvements in building performance.

Along with weak markets for building energy efficiency and the failure to include the costs of climate change and other environmental degradation in the price of fossil fuels, there is another major problem: a massive underinvestment in the technology of buildings, including building design, equipment design, and system operations. The construction sector has historically comprised smaller-scale, fragmented firms that are undercapitalized and risk averse. Correspondingly, these firms have a tradition of conducting virtually no research, are reluctant to innovate, and rely almost entirely on product manufacturers for innovations, which do not reach many efficiency problems. The absence of a coherent national climate and energy plan for buildings exacerbates this structural resistance to innovation. The Environmental Protection Agency’s use of new Clean Air Act regulations to achieve climate goals at power plants, however, wisely gives states a powerful set of new incentives for boosting efficiency.

Regulations such as national appliance standards have taken the least-efficient products off the market, but only sustained research can produce innovations that push the envelope on product efficiency, ease of adoption, and cost. The Obama administration has made highly effective use of appliance standards, especially through the Appliance and Equipment Standards Program, but there is no national building code, and many states have weak and poorly enforced energy codes. Moreover, even standards that remove the least-efficient products and cut the costs of efficient ones provide no incentive to develop technologies that exceed the standards. It is essential to recognize that government research investment has been critical in driving innovative building technologies: investments that led directly to low-e windows, advanced fluorescent and LED lights, and new refrigeration cycles, for example, have transformed building energy markets and paid for themselves many times over.

Where research can deliver savings

There are many promising directions for research. Lighting, which uses about 18% of all US electricity, provides a good example of what’s possible (for comparison, nuclear power produces about 20% of US electricity). This use can be cut by an order of magnitude through improvements in light-generating devices (LEDs, which may soon be 15 times as efficient as the old incandescent bulbs) and in lighting design (daylighting plus sophisticated sensors and controls). New lighting technology will also improve the quality of lighting, reducing glare and allowing greater control, including control of color.
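A rough sketch of the arithmetic behind that order-of-magnitude claim follows; the 18% share and the roughly 15x device-efficiency figure come from the text above, while the additional gain from lighting design is a purely illustrative assumption:

```python
# Rough arithmetic behind the order-of-magnitude claim for lighting.
# The 18% share and ~15x LED efficiency come from the text; the
# design-gain factor is an illustrative assumption, not a measured value.

lighting_share = 0.18   # fraction of US electricity used for lighting (from the text)
device_gain = 15        # LED vs. old incandescent efficiency (from the text)
design_gain = 1.5       # assumed extra factor from daylighting, sensors, and controls

combined = device_gain * design_gain
new_share = lighting_share / combined
print(f"Lighting's share could fall from {lighting_share:.0%} to about {new_share:.1%}")
# -> from 18% to roughly 0.8%, i.e., better than a tenfold reduction
```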

Research can also lead to dramatic improvements in many other areas of building energy use. The century-old technologies used for heating and air conditioning, for example, may change dramatically in coming years, forced, in part, by the need to find refrigerants that don’t damage the ozone layer or contribute to climate change. Some dramatic new technologies promise to eliminate harmful refrigerants altogether. They include systems that pump heat by exploiting materials that absorb heat when magnetic fields change, solid-state devices that effectively use semiconductors as a working fluid, systems that use fuel-cell membranes to create small amounts of pressurized hydrogen, and many others.

Estimates suggest that at least 3% of US electricity is used to dehumidify air—with demand increasing as more people move to humid regions. Most systems now cool air until the water vapor condenses and then reheat the air. New membranes are being explored that can pass water vapor but not the other gases in air, so that dehumidification efficiency is greatly improved. Even the humble clothes dryer may see a radical redesign with systems such as one that uses ultrasound to shake water out of clothes at room temperature.

Research can also lead to dramatic improvements in the materials used in a building’s shell. Next-generation windows, for example, can provide insulation as good as most insulated walls today, and future systems will be able to control the amount of light and heat passing through them.

Advances in information technology, coupled with low-cost sensors and controls, can improve building performance by simplifying the task of designing efficient buildings and by improving operation and maintenance. New building controls can ensure that occupants are provided comfortable, well-lit spaces where and when they’re needed. They can also detect problems and recommend repairs before systems actually fail. Advanced building control systems, connected with the emerging “smart grid” technologies being installed by electric utilities, can ensure optimum efficiency of the entire electricity system.

Keeping climate change within manageable bounds will require the United States and other countries with advanced economies to reduce their greenhouse gas emissions by 80% by 2050. (Meeting this goal won’t let the world escape climate change, but it would help ensure that global temperatures don’t rise more than 3.5°F, a painful but, it is hoped, manageable increase.) Achieving such dramatic changes in the enormous and complex US energy system in the next 35 years is a heroic challenge. It will mean, among other things, that efficient new technologies will need to be used in almost all buildings by 2050. Building equipment (heating, cooling, lighting, computers, and other systems) typically has a lifetime of 15 to 30 years. To avoid having to replace current products before they complete their expected lifespans, new technologies must dominate the market by 2020 to 2035. But innovative technologies often take 5 to 10 years to become the dominant product, which means that innovations must start reaching the market in the next few years. Building roofs, walls, and windows usually last for many decades, and most of the buildings standing today will still be around in 2050. The full benefit of the new technologies will be realized only in newer buildings, but there is significant room for innovation in technologies for diagnosing and retrofitting existing structures.
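The timeline reasoning above can be made explicit with a simple back-calculation; this minimal sketch uses only the lifetime and time-to-market ranges given in the text:

```python
# Working backward from the 2050 goal, using the ranges in the text.
TARGET_YEAR = 2050
LIFETIME_MIN, LIFETIME_MAX = 15, 30   # typical building equipment lifetime, years
YEARS_TO_DOMINANCE = 5                # lower bound for a product to dominate its market

# Equipment bought after (target year - lifetime) is still in service in 2050,
# so efficient products must dominate sales within this window:
dominate_from = TARGET_YEAR - LIFETIME_MAX   # 2020
dominate_by = TARGET_YEAR - LIFETIME_MIN     # 2035
print(f"Efficient products must dominate the market between {dominate_from} and {dominate_by}")

# Allowing at least 5 years for a new product to reach dominance, innovations
# aimed at the earliest part of that window must reach the market by about:
enter_by = dominate_from - YEARS_TO_DOMINANCE  # 2015, i.e., "the next few years"
print(f"Innovations must start reaching the market around {enter_by}")
```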

Building energy technologies are among the most important and intellectually exciting technology challenges facing the world today. They require ingenuity and invention from virtually every field in engineering and science, including social and behavioral science. However, these technologies can achieve their full potential only if incorporated into superb architectural designs. Affordable, efficient building systems are also an essential part of any strategy that envisions providing comfortable living spaces for a global population (many of whom will place a high value on air-conditioning) without creating a huge increase in greenhouse gas emissions. The world has always relied on the extraordinary US engine of invention and innovation, and US building-efficiency research could deliver global benefits.

But the importance of building technologies is not recognized in most of the rhetoric surrounding climate policy. The federal government spends more than 30 times as much on research for generating electricity as it does on research on the buildings that consume three-quarters of that electricity. Surely even Washington can agree that is out of whack. A balanced research program would create large, well-funded programs in the areas that dominate US electricity consumption, such as lighting, heat pumps, windows, building system design, and smart-building operations. The deeply flawed market for building efficiency has resulted in a massive underinvestment in commercial research. This market failure cries out for a federal investment that’s in proportion to the need.

Henry Kelly is a senior scientist at the Michigan Institute for Data Science at the University of Michigan.

From the Hill – Summer 2016

Congress advances spending bills for NSF, NASA, Energy, and USDA

In mid-May, the House Appropriations Committee approved FY 2017 spending bills covering the Department of Energy (DOE) and the Department of Agriculture, while Senate appropriators passed their transportation, commerce, justice, and science bills providing funding for the National Science Foundation (NSF), National Aeronautics and Space Administration (NASA), and the commerce agencies.

The House committee’s energy-water bill (H.R. 2028) would increase DOE’s Office of Science budget by $53 million, or 1%, above FY 2016 levels, whereas the president sought a 4.2% funding boost. DOE’s Office of Energy Efficiency and Renewable Energy would see a large reduction of $248 million, or 12%, below FY 2016 levels, though House appropriators did provide substantial funding increases for grid-related research and development (R&D) and for the Advanced Research Projects Agency-Energy (ARPA-E). The energy bill now awaits action on the House floor. The Senate approved its energy and water appropriations bill by an overwhelming 90-8 vote. Like the House bill, it provides only a 1% increase for the Office of Science. The bill does not include the administration’s request for a major funding increase for low-carbon energy technology as part of the Mission Innovation Initiative.

The House Appropriations Committee also approved its FY 2017 agriculture spending bill. The Department of Agriculture’s R&D funding would drop 3.7% below FY 2016 levels, and 1.4% below the president’s request, to a total of $2.3 billion in FY 2017. The agriculture bill now heads to the House floor for consideration.

The Senate’s commerce, justice, science bill (S. 2837), approved by the Senate Appropriations Committee, would grant NASA a small $21 million increase above FY 2016 levels, as compared to the president’s proposed reduction of $1 billion in the NASA discretionary budget. Funding for the Space Launch System (SLS) and Orion would receive increases rather than the administration’s proposed cuts, and the Science Mission Directorate (SMD) would also see additional funding above the request, though still 3.5% below FY 2016 levels. Elsewhere in the bill, NSF would be essentially flat-funded from FY 2016 levels, compared with a 1.3% discretionary increase sought by the administration. The National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology would both see very modest increases in overall budgets.

The Senate transportation, housing and urban development (THUD) bill (S. 2844) was also approved by committee. Surface transportation funding in the bill is consistent with the Fixing America’s Surface Transportation Act reauthorization reached last winter, according to the committee. The THUD bill now heads to the Senate floor.

Senate hearing on leveraging US federal investments in science and technology

The Senate Commerce, Science, and Transportation Committee held a May hearing to explore how a reauthorized America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (COMPETES) Act can improve the US science and technology enterprise. The committee heard from Rob Atkinson, president of the Information Technology and Innovation Foundation; Kelvin Droegemeier, vice chair of the National Science Board; David Munson, Robert J. Vlasic Dean of Engineering at the University of Michigan; and Jeannette Wing, corporate vice president for research at Microsoft. The hearing featured questions from members on regional innovation programs that leverage science and technological advances to improve local economies; ways to improve US education; and ideas for improving coordination across federal agencies and the academic and private sectors. No specific timetable was set for introducing the legislation currently being drafted by the committee, but chairman John Thune (R-SD) stated in his opening remarks that he is “hopeful the bill will be ready in the coming days.”

Bill to improve understanding of space weather introduced

Sen. Gary Peters (D-MI) introduced legislation that follows on the White House’s recent Space Weather Strategy and would codify the responsibilities of federal agencies with oversight of space weather research and forecasting, including the Department of Defense (DOD), NASA, NOAA, NSF, and the Department of Homeland Security. The legislation covers everything from clarifying that NSF and NASA should pursue the basic scientific research needed to better understand and, ultimately, predict space weather events, to other agencies’ responsibilities to provide forecasting services and to assess the vulnerabilities of space- and ground-based infrastructure to space weather events. The bipartisan bill has received praise from members of the scientific community and is moving quickly toward a markup by the Senate Commerce, Science, and Transportation Committee.

Senate committee passes SBIR/STTR reauthorization

The Senate Small Business and Entrepreneurship Committee passed a bipartisan reauthorization of the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, which receive a guaranteed percentage of federal agencies’ extramural research and development budgets. Participating agencies are those that spend more than $100 million on extramural research, and the reauthorization bill would make the programs permanent rather than require reauthorization every five or six years. In addition, it would increase the percentage devoted to the programs from the current 3% to 6% for non-defense agencies, and to 5% for DOD, by 2028, and institute a suite of reforms. The bill now awaits action by the full chamber. The House Small Business Committee has also passed reauthorization legislation, which includes smaller increases over a shorter time period and fewer program reforms. That bill is awaiting action by the House Science, Space, and Technology Committee before it can move forward. Finally, in its markup of the National Defense Authorization Act for FY 2017, the Senate Armed Services Committee included language that would make the DOD SBIR and STTR programs permanent.

Senate passes bipartisan energy policy modernization act

On April 20, the Senate passed, by an overwhelming vote of 85-12, a bipartisan comprehensive energy bill that touches on many aspects of federal energy policy—from efficiency standards and programs to natural gas export authority—and includes provisions reauthorizing the DOE Office of Science and ARPA-E. The bill authorizes increased funding targets for DOE-Science and ARPA-E for the next five years and rescinds unused or unneeded program authorities initiated by the prior two America COMPETES Acts of 2007 and 2010. A companion bill passed by the House late last year started on the same bipartisan track, but the final House bill is considered decidedly more partisan than the newly passed Senate bill. The House bill also does not contain research-related provisions, owing to the differing jurisdictions of the relevant House and Senate committees; energy research provisions are instead included in the separate House-passed version of COMPETES and differ from those in the Senate comprehensive bill. The House and Senate bills will now need to be reconciled, which legislators plan to do via a conference committee, in order to produce a final compromise bill to send to the president for signature.

Senator Flake releases science-focused “Wastebook”

Sen. Jeff Flake (R-AZ) released his new report at a press conference on May 10, followed by a speech on the Senate floor two days later and several press appearances. Whereas past “wastebooks” have included scientific research among other federal spending the senator considers wasteful, this report focuses solely on scientific research, including references to several grants that are no longer active. In releasing the report, the senator argued that the government should pay more attention to how its research funds are distributed, particularly when there are pressing priorities in areas such as medical science. Citing a current lack of transparency, Sen. Flake also released companion legislation that would require more specific public accounting of funds spent on each individual project supported under a grant, a proposal that runs contrary to the current bipartisan push to lessen administrative burdens on researchers.

Hill addendum

OSTP announces National Microbiome Initiative

The White House Office of Science and Technology Policy announced a collaboration between federal agencies and the private sector to take a more cooperative approach to studying the microbiomes of a range of ecosystems. The National Microbiome Initiative will focus on three specific goals: supporting interdisciplinary basic research; developing platform technologies to share information and knowledge; and expanding participation through citizen science and public engagement.

NSF releases future vision for research

NSF director France Córdova has published a list of nine ideas intended to shape the foundation’s future investments. Six of the ideas identify new research directions; the remaining three address how the foundation supports and conducts research.

NSF reports increase in US graduate enrollment in science and engineering

NSF’s National Center for Science and Engineering Statistics (NCSES) released an updated report showing that the number of science and engineering (S&E) graduate students increased by 5.5% between 2013 and 2014, rising from 570,300 to 601,883. NCSES attributes much of this growth to a continuing increase in the enrollment of foreign graduate students on temporary visas, which grew by 7.4% between 2012 and 2013 and by 16.0% between 2013 and 2014. The report also finds that the number of S&E graduate students primarily supported by federal sources declined by 8.2% between 2009 and 2014, while the number primarily on self-support increased by 26.7% over the same period.