Forum – Summer 2017

Climate engineering

The articles in the Spring 2017 Issues by David W. Keith, “Toward a Responsible Solar Geoengineering Research Program,” and Jane C. S. Long, “Coordinated Action against Climate Change: A New World Symphony,” provide informative views into several points of current debate on climate engineering research, its management, and governance. The authors agree on several key points. They agree that it is urgent to expand research on climate engineering interventions; that research is needed on both carbon-cycle and solar options; that research must address both scientific and engineering questions; that the agenda should be driven by societal need rather than investigator curiosity; and that research should target interventions that are plausible candidates for actual use, not idealized scenarios. They also agree that research must vigorously pursue two competing aims: to identify and develop interventions that are as effective and safe as possible, and to aggressively scrutinize these to identify potential weaknesses or risks.

Their main disagreement concerns how to organize research on the two types of climate engineering: carbon-cycle and solar methods. Long argues that they should be combined, because the two approaches must be evaluated, compared, and decided jointly, together with mitigation and adaptation, to craft an effective strategic climate response. Keith argues that they should be separated, because of large differences in the bodies of scientific knowledge and technology on which they draw; the nature and distribution (over space and time) of their potential benefits, costs, and risks; and the challenges they pose for policy and governance.

A first step toward clarifying this disagreement is to note that the authors emphasize different elements of policy-making processes. Keith is mainly concerned with designing research programs. His programs are not purely scientific in their motivation and focus, in that they aim to develop and test technologies that can contribute to solving a societal problem. But they are well enough separated from policy decisions, and from the comparative assessment of capabilities, risks, and tradeoffs needed to inform decisions, that their management and funding are best optimized for each type of climate engineering separately. Long is mainly concerned with assessment and decision making. She argues that effective climate policy making must strategically consider and compare all response types, and that assessments, scenarios, and research programs must therefore also be strategically integrated if they are to usefully inform policy decisions. The authors thus agree on the need for integration of carbon-cycle and solar methods in assessment, scenarios, and policy making, but diverge on what this implies for the design, funding, and management of research programs: separate programs for carbon-cycle and solar methods, or combined?

This question turns on whether achieving successful integration in assessment, scenarios, and policy making requires integration in research program management and funding. In my view, such dependency could arise in three ways. First, integrated research would be favored if a coherent and defensible research program mission cannot be defined at the level of one response type, but only at some higher level of aggregation: as Long points out, “make solar geoengineering work” is not a suitable mission statement for a research program. Second, integration would be favored if effective assessment requires strong control over research management decisions, including allocation of resources between carbon-cycle and solar interventions. Finally, integration would be favored if research governance needs are driven less by differences in the opportunity and risk profile of different responses, and more by aggregate public or political views of climate engineering that do not clearly distinguish the two types: in this case, integration might be required as a matter of political risk management.

Edward A. Parson

Dan and Rae Emmett Professor of Environmental Law

Faculty Co-Director, Emmett Center on Climate Change and the Environment

UCLA School of Law

Jane Long makes several important points. Among them is that geoengineering research should not have as its mission the deployment of geoengineering concepts. She cogently argues that “The goal for climate intervention research must be to understand the potential efficacy, advisability, and practicality of various concepts in the context of mitigation and adaptation.” David Keith makes a similar point and provides two guiding principles: that research on solar radiation management should be part of a broader climate research portfolio on mitigation and adaptation action, and that research should be linked to governance and policy work.

We generally think of solar radiation management research in terms of small tests that can define particular parameters, such as the atmospheric residence time, transport, and fate of aerosol scattering particles. As both Long and Keith observe, these tests require thoughtful governance arrangements that may be difficult to establish at present.

Twenty-six years ago there was a large-scale natural experiment in solar radiation management: the eruption of Mount Pinatubo in the Philippines, which injected roughly 17 million tons of sulfur dioxide into the middle and lower stratosphere. Sulfate aerosols spread across the Pacific Ocean in a few weeks and around the globe within a year. Spectacular sunsets over the next two years were one indication of the stratospheric residence time of the aerosols. The event produced observed cooling in the Northern Hemisphere of 0.5 to 0.6 degrees Celsius, corresponding to a reduction in radiative forcing of perhaps 3 watts per square meter. Globally averaged cooling of approximately 0.3 degrees was observed.
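As a rough back-of-the-envelope sketch, the forcing and cooling figures are mutually consistent under the standard linearized relation between a forcing change and the transient temperature response, assuming an illustrative effective sensitivity of roughly 0.1 to 0.2 degrees Celsius per watt per square meter for a short-lived aerosol pulse (well below the equilibrium sensitivity, because the forcing decays within a few years):

\[
\Delta T \;\approx\; \lambda_{\mathrm{eff}}\,\Delta F \;\approx\; \bigl(0.1\ \text{to}\ 0.2\ \mathrm{K\,W^{-1}\,m^{2}}\bigr)\times\bigl(-3\ \mathrm{W\,m^{-2}}\bigr) \;\approx\; -0.3\ \text{to}\ -0.6\ \mathrm{K},
\]

a range that spans the globally averaged and Northern Hemisphere cooling cited above.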

Such natural experiments in stratospheric aerosol injection are infrequent. The eruption of Krakatau in 1883 produced a forcing of a little over 3 watts per square meter. There were three eruptions between Krakatau and Pinatubo that produced forcings of 1.5 to 2 watts per square meter and five additional ones of 0.5 to 1 watt per square meter. The average frequency was once every dozen years, although there was a long quiet period from about 1920 to 1963.

It seems both worthwhile and feasible to develop a program to learn from the next such eruption. Much was learned about scientific models from Pinatubo, but as the 2015 National Research Council report Climate Intervention: Reflecting Sunlight to Cool Earth stated, “More work is needed in characterizing these processes in nature (through measurements), and in modeling (through better model treatments and a careful comparison with observed features of aerosols and their precursor gases) before scientists can produce truly accurate models of stratospheric aerosols.” Understanding the chemical reactions, mixing, and particle formation after such an event can help characterize not only solar radiation management but also aerosol-forcing effects on climate. Global observations can help understand the consequences of solar radiation management on precipitation, plant productivity, and carbon uptake, among other effects.

The Climate Intervention report had a short section describing observational requirements for making better use of volcanoes as natural experiments. It points out that “our ability to monitor stratosphere aerosols has deteriorated since [Pinatubo], with the loss of the SAGE II and III satellite-borne instruments.” The report suggests both satellite systems and a deployable rapid-response observational task force (which would have other atmospheric science uses to occupy it between eruptions).

The creation of an international program to learn from the next Pinatubo could jump-start both needed instrumentation and perhaps governance arrangements in a low-key way that could build trust and indicate whether governance of deliberate solar radiation management experimentation is feasible along the lines that Long and Keith describe.

Jay Apt

Co-Director, Carnegie Mellon Electricity Industry Center

Professor, Tepper School of Business and Department of Engineering & Public Policy

Carnegie Mellon University

David Keith issued a strong call for geoengineering research, echoing calls that I and others, including Keith, have made previously. I completely agree with him that mitigation (reducing emissions of greenhouse gases that cause global warming) should be society’s first reaction to the threat of human-caused climate change. I also agree that even if mitigation efforts are ramped up soon, they may be insufficient to prevent some dangerous impacts, and society may be tempted to try to directly control the climate by producing a stratospheric aerosol cloud or brightening low marine clouds.

It will be a risk-risk decision that society may consider in the future: is the risk of doing nothing in terms of advertent climate control greater than the risk of attempting to cool the planet? To be able to make an informed decision, we need much more information about those risks, and thus we need a research program.

My only disagreement with Keith centers on his generally favorable view of eventual geoengineering implementation. I think the governance issues will be much more difficult than the examples he gives suggest. Remember, we are talking about governing the climate of the only planet known to support life. Air traffic control and international banking do not have to be perfect; small mistakes, even if very unfortunate for those affected, will not result in a global catastrophe. And how can we agree on how to set the planetary thermostat, with imperfect compensation for those who end up with a worse climate? How will we ever be able to attribute regional climate changes, either bad or good, to geoengineering, when natural variability is so large?

I support geoengineering research because we need to reduce the unknowns. We may discover large risks that we are unwilling to take, and the research may end up fostering enhanced cooperation toward rapid mitigation, along with the realization that there is no safe “Plan B.”

But what about the “unknown unknowns,” as Donald Rumsfeld put it? Will the world ever be willing to take a chance on a complicated technical endeavor to control Earth’s climate, in the hope that there will be no bad surprises? Will we accept whiter skies and not being able to see the Milky Way as easily as now? Will we trust the militaries of the world to not use this new technology as a weapon? Can we live with more ultraviolet radiation reaching the surface due to ozone depletion caused by stratospheric particles?

Doing our best with the limited resources available, we are now trying to see if we can produce some new combination of materials, locations, and timing of injections of particles into the atmosphere that will produce a better climate for most. So far, we have not been successful. But it is early days, and we owe it to the world to do much more such research, while at the same time advocating for rapid reductions of emissions of the greenhouse gases that are causing global warming. It will not cost much, and it will be a wise investment for the governments of the world. We can’t wait.

Alan Robock

Distinguished Professor of Climate Science

Department of Environmental Sciences

Rutgers University

David Keith justifies his call for a large-scale international solar geoengineering field research enterprise in environmental justice terms. He argues that in light of the mounting evidence that emissions reductions alone may be insufficient to limit severe climate risks, beneficiaries of a research program to understand the risks and benefits of potentially deploying solar geoengineering technologies to rapidly cool Earth would include “the world’s most vulnerable people, who lack the resources to move or adapt” to rising sea levels and increasing extreme weather. Thus, the multiple “reasons for reluctance” that Keith acknowledges constrain support for solar geoengineering research must be weighed “against the evidence that solar geoengineering could avert harm to some of the world’s most vulnerable people.”

The problem is that such evidence is not established. The benefits and risks of any solar geoengineering program will be unevenly distributed across the world and nations might have widely divergent preferences for whether, when, how, and toward what ends solar geoengineering technologies should be deployed. Who would decide whether solar geoengineering is deployed to support the climate resilience goals of farmers in the Sahel or Bangladeshis if they conflict with, say, maintaining and expanding ice-free ports in the Russian Arctic?

According to Keith, a “responsible” solar geoengineering research program should “have an engineering core,” using atmospheric experiments to investigate detailed plausible operational scenarios for deployment. It would focus on assessing various researcher-determined measures of risk and effectiveness in achieving desired climate outcomes with results informing governance and policy developments.

This is not sufficient. Recent research suggests that in the absence of broader societal input and consent, even small-scale, low-risk field experiments will trigger concerns over the slippery slope to larger-scale, riskier experiments and deployment. Without meaningful input and support from the climate-vulnerable constituencies it is intended to benefit, a solar geoengineering field research program would lack much-needed legitimacy and risk significant opposition. A responsible research program needs to account for how climate-vulnerable nations and communities themselves might view the value of such a program and ensure that they are fully engaged in co-creating research and governance goals and objectives.

Thus, a responsible solar geoengineering research program should include several core elements. As a prerequisite, clear support for solar geoengineering research should be established from an international coalition of nations. This should include nations particularly vulnerable to climate change as well as high-carbon-emitting nations that are fully committed to ambitious emissions reductions. Research priorities should be explicitly codeveloped in collaboration with technical experts, social scientists, and civil society organizations from climate-vulnerable nations. Finally, an international research governance system must be designed with meaningful input from civil society to address concerns about transparency, liability, and justice.

Peter C. Frumhoff

Director of Science and Policy

Union of Concerned Scientists

Jennie C. Stephens

Dean’s Professor of Sustainability Science and Policy

Northeastern University

David Keith’s article gives rise to an interesting question about the utility of the label “responsible research” in the context of solar geoengineering. One of the central tenets of responsible research is that society, broadly defined, should have a meaningful stake in debating and modulating the direction of scientific research. In the case of research on solar geoengineering, with its inherently global impacts, the development of effective mechanisms for facilitating broad societal discussions about the desirability of this direction for research seems to be hugely important. However, this is not Keith’s focus. The notion of a genuine two-way dialogue around the desirability of research on this topic is absent: society features either as people meekly awaiting the benefits of techno-scientific intervention or as subjects to be enrolled in research projects to improve the effectiveness of the intervention.

Keith’s treatment of the so-called “slippery slope” concern (that research may generate momentum toward deployment through various mechanisms of lock-in) is particularly revealing of his understanding of the proper relationship between science and society, suggesting an expectation that research can ultimately bypass the need for societal debate and discussion. For example, he claims that a slippery slope is not a problem in itself if “research reveals that solar geoengineering works better and with less risk than we think.” But this assumes that research will be able to establish “once and for all” whether benefits outweigh risks. However, this is simply impossible: not only is there much that is likely to be unknowable about such interventions, but there are also many different disciplinary and social perspectives about what would constitute acceptable levels of risk, about the kinds of knowledge that would be necessary to answer such a question, and even about the meaning of risks and benefits themselves. Presuming that scientific research will be able to come up with a single answer and make these disagreements go away is quite simply unrealistic. There will always be multiple, contested answers to the question of whether geoengineering is on balance a good or bad idea—hence, the need for a genuinely responsible approach to research that incorporates a wide range of societal stakeholders in deciding if (not just how) this kind of research should go ahead.

By belittling concerns around the slippery slope as unfounded as long as the science shows us everything is all right, Keith reveals an overblown faith in science and a fairly dismissive attitude to the concerns that other people might bring to this debate. Despite nodding toward a number of other arguments against research, he quickly concludes that these “do not amount to a strong argument,” before promoting his own particular (and questionable) view of the benefits of research. Closing down the space for debate in this way would appear to limit the possibility for a really “responsible” attitude toward any potential research in this area.

Rose Cairns

Research Fellow, Science Policy Research Unit

University of Sussex

Brighton, United Kingdom

By using the adjective “responsible” in the title of his article, David Keith points to a dilemma: responsibility goes forward and backward. In the case of solar geoengineering, there’s the forward-looking “move by humanity to take deliberate responsibility for managing the climate,” as Keith puts it, which can be viewed most generously as the caretaking or stewardship responsibility for creating conditions in which life can flourish. But there’s also the backward-looking taking of responsibility for past actions that created the situation, the “cleaning up our mess” part, which mingles with accountability and liability. Forward-looking responsibility is entangled with agency; backward-looking responsibility is entangled with causality and blame.

Keith points toward five reasons why people are reluctant to support a solar geoengineering research program: uncertainty, slippery slope, messing with nature, governability, and moral hazard. But there’s also a sixth: the notion that solar geoengineering represents an avoidance of responsibility. As one of the people interviewed as part of my studies of perceptions of solar geoengineering put it, “It’s like transferring the responsibility from myself to somebody else in tackling climate change.” There’s a transference of agency here, as well.

Who can take that backward-looking responsibility? Scientists and researchers can’t do much about this on their own, and the intense debate about “loss and damage” in the climate regime underscores the difficulty. There’s no real social process for responsibility-taking on the scale of global climate change. The best that we have is the Common but Differentiated Responsibilities and Respective Capabilities principle included within the United Nations Framework Convention on Climate Change to acknowledge the different capabilities and differing responsibilities of individual countries in addressing climate change. Fossil-fuel companies, the states that subsidized them, and the citizens of rich nations who burned the carbon and benefited from it all deserve some share of responsibility. But instead of putting a price on carbon, the US government subsidizes it—irresponsibly.

The dilemma is that a research program itself can’t be fully responsible as an independent, self-organized entity. The context is what makes it so. Right now, the context is one of extreme irresponsibility. Research based in the United States will be “responsible” only if the state and corporations are making attempts to curb the harm, recognize past harms, change everything. So what’s a researcher to do? Best guess: listen, be responsive, align with researchers around the world, and support them in taking their research in the directions they want it to go. Recognize and name whenever possible the irresponsibilities and asymmetries, rather than speaking of a common humanity that’s created the mess and now has the responsibility of repair. Prospects of actually governing this technology, like the prospects for governance of climate change, may depend upon such recognition. It’s beyond the common purview of science to take responsibility for more than forward-looking science or its outcomes, but these are extraordinary times.

Holly Jean Buck

Department of Development Sociology

Cornell University

David Keith provides a useful provocation for thinking about the intersections of science and society in the context of solar geoengineering research. What does responsibility mean, and for whom? Keith’s notion of responsibility seems to entail more “transparent” research on solar geoengineering to enable responsible decision making. To this end, he lays out some key issues (though certainly not all) raised by the prospect of solar geoengineering research, and he suggests that they are amenable to resolution through the provision of more science. However, a different account of the relationship between science and politics opens up a set of questions that he doesn’t address. The question of the “responsibility” of a decision—or a research program—is not just a matter of scientific facts, but of values, interests, and context. This raises important questions about the relationship between science and policy, the potential distributional implications of innovation, the role of ignorance and uncertainty, and the importance of public engagement.

Keith argues that an international research program on solar geoengineering—one that is linked to, but distinct from, research on carbon dioxide removal approaches (see Jane Long’s counterpoint to this claim for separation)—is urgently needed for societies to effectively manage climate risks, especially for “the world’s most vulnerable people.” But this argument demands further scrutiny. Keith seems to argue that by virtue of his expertise he knows what matters to vulnerable people, and that solar geoengineering research will benefit them. Scientists frequently make these kinds of claims, but as the British researcher Jack Stilgoe has pointed out, the history of technology suggests that many sociotechnical systems tend to exacerbate the gap between rich and poor, rather than close it. If we want to treat this as an empirical question, we might, at the very least, develop mechanisms to ask people who are indeed vulnerable if they want solar geoengineering research to move forward on their behalf.

Keith also argues that uncertainty alone is not a sufficient reason to oppose research, because “the central purpose of research is to reduce uncertainty.” However, this view of uncertainty may miss the mark in at least two ways: it misunderstands opposition to research, and it seriously overestimates the ability of science to resolve controversies about technology and risk.

With regard to the first point, for some opponents of research, ignorance is not only an option, but the right option. There are certainly some areas of innovation that, for better or worse, societies have chosen not to pursue (for example, human cloning). An “ignorance is not an option” rationale for research could have the effect of limiting social choice in problematic ways, and it implies a level of inevitability about innovation that is not obvious. Debates over whether or not to move forward with solar geoengineering research will tend to depend on how people perceive the purposes, values, and risks of research, which is not at all a straightforward proposition answerable by more science.

On the second point, as Arizona State University professor and writer Daniel Sarewitz has argued, persistent debates about genetically modified organisms, nuclear power, and chemical toxicity show that science often does little to resolve controversies—and can sometimes make them worse. Uncertainties in these domains often resist scientific reduction, more science does not always tell us how to act wisely, and partial knowledge can create excess confidence that action is worth taking. Promises that more research in complex areas will reduce uncertainties, and that this will compel political or policy action, should be met with healthy skepticism.

Certainly, many of these concerns extend well beyond the emerging domain of solar geoengineering research, including into climate change science and politics more generally. However, this isn’t a reason to sidestep thorny questions at the heart of science policy. Experience suggests that neither Keith nor any other expert has the political privilege of determining what “responsible” approaches to solar geoengineering might be. Democratic deliberation, not expert monopoly, should lead the way in discussions of the future (or not) of research in solar geoengineering.

Jane A. Flegal

Doctoral Candidate

Environmental Science, Policy, and Management

University of California, Berkeley

Why carbon capture is not enough

The world hasn’t been very successful at dramatically reducing carbon dioxide emissions with existing technologies, so what could be wrong with a proposal to reframe climate change in order to make carbon capture a more feasible solution? In fact, investing in a broad suite of technologies to mitigate climate change is critical. But the reframing of the problem proposed by Klaus S. Lackner and Christophe Jospe in “Climate Change is a Waste Management Problem” (Issues, Spring 2017) highlights a serious misunderstanding of the reasons why stopping climate change has been so difficult.

Their main argument is that framing carbon emissions as a waste management problem akin to trash or sewage disposal, rather than as a typical pollution problem, will cut some Gordian knot. But it’s precisely because carbon dioxide is not like a typical waste problem that people have not been more motivated to find solutions.

With a waste problem such as garbage or sewage, the impact on your personal well-being and health is immediate and very tangible. If your home has trash and raw sewage piling up, you will be affected by the sight and smell very quickly, as well as face an increased risk of getting sick. Carbon dioxide, by contrast, is something we exhale 24 hours a day; it cannot be seen or felt, and in reality it doesn’t have any immediate effect on public health or personal well-being. Even the longer-term effects of climate change are ones that most people won’t viscerally feel. For example, recent polling reported in the Yale Climate Opinion Maps found that roughly 60% of people in the United States were concerned about global warming, but only 40% thought it would harm them personally. Moreover, we don’t know how many of those 40% would be willing to pay to prevent harm, with economic surveys suggesting that most US residents aren’t willing to pay the full social cost of carbon.

More important, paying to reduce your own personal carbon emissions doesn’t actually prevent you from bearing the effects of climate change, since it’s a global problem. If you want to pay to protect yourself from direct effects, you might buy homeowners insurance or move out of areas prone to natural disasters. But unlike with sewage treatment, you cannot pay for local climate mitigation that will clearly benefit you.

While Lackner and Jospe give some rough estimates of the cost for carbon air-capture technology and make optimistic promises that the cost will come down, they give no estimate of the cost or feasibility of storing the carbon. They do note that all storage technologies besides geologic storage are too expensive or impractical. Yet large-scale geologic storage has begun to be used at only a few sites and only in the past two to three years. Will the carbon stay underground? Is the technology safe? Is it affordable? Will the public trust it? We have no idea.

The authors repeatedly insist that a major benefit of the waste framework is that it “does not require top-down coordination and management.” But in most developed countries, all other disposal systems, such as for trash and sewage, are run entirely by the state and are affordable only because they are mandatory.

They also state that “Nobody can buy a house today without a sanctioned method for sewage handling, and household garbage must be properly disposed of.” This ignores the fact that 60% of the global population lacks access to flush toilets or proper sewage disposal. Even though the immediate benefits of sewage systems are clear, they are still unaffordable to a majority of the world’s population.

All solutions to climate change have their shortcomings. Most important, air capture of carbon dioxide doesn’t solve all of the other detrimental effects of energy production on public health and the environment, such as land use change and air and water pollution. Air capture may eventually be an affordable way to remove carbon dioxide (and maybe other greenhouse gases) from the atmosphere, but it does nothing to keep heavy metals, nitrogen and sulfur oxides, or coal ash produced during the energy-production cycle from entering air and water supplies. Developing and expanding clean energy sources such as nuclear and renewables, improving energy efficiency, and driving electric vehicles do reduce these other environmental impacts, which are in many ways much more tangible and immediate concerns to the public.

For carbon capture to work, it will need a better business model than relying on wealthy elites to voluntarily pay for their waste streams. The authors hint that there may be ways to make money from using the carbon, and that seems like a more feasible commercialization pathway for carbon air-capture technology.

We will almost certainly need carbon capture and storage as part of the solution to deep decarbonization. But as long as we’re reframing climate change, we should do so in a way that actually makes the solutions more feasible, not less.

Jessica Lovering

Director of Energy

Alex Trembath

Communications Director

The Breakthrough Institute

Oakland, California

Making big science decisions

In “Notes from a Revolution: Lessons from the Human Genome Project” (Issues, Spring 2017), David J. Galas, Aristides Patrinos, and Charles DeLisi highlight a chronic flaw in US science policy making that results in missed opportunities, inefficiencies, and in some cases wasted federal resources. The flaw is that the government has no reliable mechanism to plan and execute large scientific projects when they involve several federal agencies.

Individual executive departments and agencies have been extraordinarily successful over many decades in planning and executing large projects. I’ll mention only three. The Department of Energy (and its predecessor agencies) has built world-class instruments to study atomic nuclei and elementary particles as well as light sources for use by many fields of biological and physical sciences. The National Aeronautics and Space Administration has built and launched hundreds of instruments to study the solar system and the broader cosmos. The National Science Foundation has deployed a variety of ground-based and orbiting instruments to probe far into distant space (including the Laser Interferometer Gravitational-Wave Observatory that in 2015 made the first direct observation of gravitational waves created by colliding black holes) and launched innovative research ships to study the oceans from pole to pole and at the greatest depths.

That said, the authors are correct in calling attention to the “need for a rigorous but flexible process to evaluate large-scale transformative proposals” that significantly affect several fields and federal agencies, for all the reasons the authors give. Inside the federal government, this is a job for the White House Office of Science and Technology Policy (OSTP) and its director, who also serves as the president’s science advisor. However, it is a small agency with no authority over budget matters. Its role is strictly advisory. The National Science and Technology Council (NSTC), chaired by the president, and its coordinating committees provide an important mechanism for interagency planning. But OSTP officials and NSTC members—cabinet secretaries and heads of research agencies—move on at the end of an administration, or even sooner. What is needed is a mechanism outside the federal government that has continuity and credibility and can engage the research communities—universities, national laboratories (federal and private), and industrial labs—in assessments of needs, evaluation of options, and strategic planning for federal agencies and other partners, domestic and international. One possible model for better planning and coordination of research activities is described in an earlier Issues article by Gary E. Marchant and Wendell Wallach, “Coordinating Technology Governance” (Summer 2015).

The authors of the present article suggest that the National Academies of Sciences, Engineering, and Medicine could take this on. Their decadal reports—for example, in astronomy and astrophysics—are influential in setting priorities for whole research fields. Even though the charter of the National Academy of Sciences, under which all the academies operate, and the Executive Order creating the National Research Council restrict the activities of the Academies, they could play a coordinating role, collaborating with several science, engineering, and medical research nongovernment organizations to establish an entity of some kind to take on this difficult job. Many of the challenges to the US research enterprise, including support of high-risk transformational research and innovative university-industry-government partnerships, have been described in several reports of the American Academy of Arts and Sciences. Perhaps a study by the National Academies that focuses on new mechanisms for long-range strategic planning of large interagency activities (including facilities and programs) in cooperation with nonfederal partners could flesh out the possibilities.

Neal Lane

Senior Fellow in Science and Technology Policy

Baker Institute for Public Policy

Rice University

Former presidential science advisor and director of the National Science Foundation

Measuring research benefits

With “Are Moonshots Giant Leaps of Faith?” (Issues, Spring 2017), Walter D. Valdivia has joined the distinguished ranks of science and technology policy analysts who have written eloquent explanations of why ex post evaluation of research and development (R&D) investments is so difficult, if not impossible, at any but the highest levels of aggregation. He poses an interesting question: whether abnormally large increases in government-funded R&D program budgets, which he calls, somewhat infelicitously, “moonshots,” yield proportionately large benefits. He then details many of the reasons we are not generally able to analyze the benefits of more routine R&D budgets, never mind those that receive large injections of new money in a short time.

Though one might quibble with one or two of his claims, the overall thrust of his article is right on point. Quite naturally, citizens, politicians, and all manner of experts would like to be able to quantify the benefits that result from our huge public (and private) investments in R&D. There are good reasons for asking this question about the aggregate R&D budget as well as about various parts of it, right down to the level of the individual research project and the individual researcher.

Unfortunately, as Valdivia nicely demonstrates, we can’t provide a straightforward and fully satisfying answer to the benefits question at any level. At best, we can examine various surrogates, indicators, partial measures, and indirect hints to try to get some empirical purchase on the answer. In keeping with Valdivia’s final claim, at the end of the day there is still no substitute for informed expert judgment, with all its biases and aided by the available, if inadequate, measures, to tell us both what we got from past R&D investments and what we might get from future ones.

Christopher T. Hill

Professor of Public Policy and Technology, Emeritus

Schar School of Policy and Government

George Mason University

Walter Valdivia provides a good summary of the literature on the effects of science on society at three levels: (technological) innovation, knowledge, and research organization. The views he presents have been well known for decades. He cites the difficulty of measuring the links between research and economic growth, the limitations of publication and citation counts, and the limited administrative capacity for making enlightened choices in promising fields.

Valdivia’s recommendations, however, do not cover the full scope of his criticisms. His discussion is essentially concerned with technological impacts, but it does not address the full array of impacts, particularly those less quantifiable, such as cultural impacts. Neither does he discuss the negative impacts of the application of science. He does suggest that the “full array of means by which knowledge production meets people’s needs” should be considered, but that is all. Valdivia calls for investments in administrative capacity, in general-purpose technologies, for specific goals, and he calls for agencies to pool their political capital for greater effect.

I think it is time to articulate the issue of science and society in totally new terms. A new paradigm and, above all, a new discourse are needed. First, we must admit that social scientists have never managed to produce the evidence necessary to demonstrate a link between science and society (although we all believe intuitively that there is such a link). Second, we (scientists and their representatives) still defend science publicly based on a decades-old discourse. Yet we have never convinced policy makers with a discourse on social and economic impacts, because “science and technology funding is more likely to be increased in response to threats of being overtaken by others (Sputnik, Japan, Germany, now China) than it is to respond to the promise of general welfare or eventual social goods,” as Caroline Wagner said on the National Science Foundation’s Science of Science Policy Listserv.

I have no ready answers as to what this new discourse should be, although training of students certainly should be a central part of it, and knowledge as a concept should be less abstract than it is now. One thing I am sure of is that the scholarly analyses and the public discourses of scientists have to make a tabula rasa of everything we have long assumed. Everyone proclaims that the linear model, in which all innovation begins with basic scientific research, is dead, but in fact it is still alive and kicking. The issue is not whether the model (and its many variants under different names, such as the chain-linked model) is right or wrong, but that it is not the appropriate “marketing” tool to sell science to the public. Today, innovation has taken the place of research as a cultural value responsible for growth and welfare, and research gets little hearing in the discourse of progress. For better or worse, scientists have to take this into account.

Benoit Godin

Professor

National Institute of Scientific Research

Montreal, Quebec, Canada

Bats and human health

In “Give Bats a Break” (Issues, Spring 2017), Merlin D. Tuttle argues that limited scientific evidence supports the degree to which the media sensationalize the role of bats as hosts of significant human viral pathogens other than rabies-causing lyssaviruses. He is correct in assessing the total annual number of human deaths due to bat-borne viruses as low. And like him, I am appalled by the bad reputation that bats have received over the past decade based on limited or misinterpreted scientific data, leading to measures to destroy entire bat populations for no reason. Tuttle emphasizes his frustration with the unanswered question: why are there so few outbreaks of highly lethal diseases caused by coronaviruses or filoviruses every year given the abundance and geographic distribution of their presumed bat hosts? Indeed, my favorite phrase in his article is: “small samples have been mined for spurious correlations in support of powerful pre-existing biases [in regard to bats], while researchers ignored evidence that pointed in the opposite direction.”

However, Tuttle takes the pendulum and swings it too far into that opposite direction. He correctly cites my speculation that arthropods or fungi could be the hosts of Ebola virus. This statement, however, does not mean that I am certain that bats have to be excluded from the Ebola virus host search. Although no evidence unambiguously supports bats as harboring Ebola virus, scientific data suggest that bats may be exposed to this virus on a relatively regular basis. Thus, an arthropod or fungus on a bat may be the Ebola virus host—and to examine such a hypothesis, bats would have to be sampled.

Tuttle also minimizes the fact that Marburg and Ravn viruses, very close relatives of Ebola virus and equally if not more lethal to humans, have been isolated repeatedly from Egyptian rousettes, or Egyptian fruit bats, sampled in caves associated with human deaths. In experimental settings, these bats can be subclinically infected with Marburg viruses, and the infected bats shed the viruses orally and in their excreta for sustained periods. Further, under experimental conditions, these bats have been shown to transmit the viruses to other bats. Thus, though it’s possible that Egyptian rousettes may not be the major host of Marburg viruses, the bats certainly are a host of all known Marburg viruses and therefore their role in disease transmission ought to be studied.

Tuttle is right about MERS coronavirus being harbored in dromedary camels rather than in bats, as was hypothesized when the virus was discovered. However, he omits the accumulated scientific evidence that this virus is nested deep in a branch of bat-borne coronaviruses on the coronaviral phylogenetic tree. The question is not only from where a human contracts a virus, but also how this virus emerged. The current scientific evidence strongly points to a bat-dromedary camel transmission event in the past—and this hypothesis then brings forth the question: under which circumstances do bat viruses evolve to become human health threats? Consequently, the phrase used to introduce the article, “Searches for new viruses in bats are unlikely to contribute substantially to human health,” should not have been used.

Ultimately, the correct path lies somewhere in the middle: scientific exploration of the bat virome and the role of bats in human disease ought to be performed in the least disruptive and destructive manner possible. The incredibly important role of bats in mosquito control and plant pollination ought to be taught more effectively than in the past, and scientific sensationalism of any kind ought to be stamped out. Still, a single introduction of Ebola virus into the human population in 2013 ultimately led to more than 11,000 human deaths. Thus, if bats were involved in this unlikely, typically rare, and yet very impactful event, shouldn’t we keep an eye on them?

Jens H. Kuhn

Virology Lead (Contractor)

National Institute of Allergy and Infectious Diseases Integrated Research Facility at Fort Detrick

Frederick, Maryland

For more than 20 years, a new war on bats has been waged. In its current form, this war is being waged primarily by scientists, but it has been picked up by decision makers and sometimes even the public, leading to a series of misunderstandings, myths, unsupported statements, and partial truths that have been interwoven to present a picture of bats as the most dangerous, filthy, pathogen-harboring organisms on earth. Few voices are rising in defense of bats, and Merlin Tuttle, speaking through his article, provides one of the most prominent, presenting real evidence against the case.

I concur with his arguments one by one. The alarmist tone employed every time a “new” emerging disease is reported makes it sound as if civilization is about to end—but that is very far from the truth. On the basis of conjectures and misinterpretations of nonexistent evidence, bats are blamed time and again, from Ebola to SARS to MERS. Knowingly and intentionally attaching the adjective “deadly” to a virus raises the alarm even more. And once the alarm is raised, health officials and other government leaders start paying attention, and obviously more money is thrown at the “deadly problem.”

Furthermore, the emerging infectious diseases community is knowingly and intentionally promoting this false, unfair, destructive reputation of bats. Viruses and bacteria themselves are unfairly treated. The overwhelming majority of viruses and bacteria are beneficial, and the very balance of life on earth depends on their presence and interactions with other living things. I can draw on a number of lines of research to support this case. For example, it has been learned that one milliliter of seawater contains as many as 10 million viral particles, yet no one is saying we should dry up the ocean. Similarly, one kilogram of marine sediment contains one million different viral sequences, and no one is fighting to keep humans away from the sea. Finally, the human navel has been found to contain at least 2,368 bacterial phylotypes. If we employed the same rhetoric and flawed reasoning that Tuttle points out, the consequences would be devastating for the ocean, for our lifestyles, and for our belly buttons.

So it is time to set the record straight and let bats be what they are: some of the most beneficial organisms on the planet for human and natural interests equally.

Rodrigo A. Medellin

Institute of Ecology

National Autonomous University of Mexico

Mexico City

During the past decade, concern about the role of bats in spreading diseases has increased dramatically due to the recent SARS and Ebola outbreaks. I will not repeat the multiple facts that Merlin Tuttle has already provided to counter claims that are unsupported by robust empirical evidence, claims that raise concern for the future of bats. Unfounded fear can result in excessive demands for wildlife disease management, with detrimental results such as weakened legal protection for animals and unnecessary animal deaths.

Human societies have been transforming the landscape of the planet so intensely that we are now living through what we call “Global Change,” which includes the massive destruction and fragmentation of natural habitats, the elimination of numerous species, and a decline in many ecosystem services on which we rely. This new situation poses numerous challenges, including some threats to human health. And this is the point where bats become part of the story. Unfortunately, as Tuttle mentioned, they are continually identified as the main virus reservoirs and described as an extraordinary threat to human health even though the evidence of their role is often open to question.

Research on this topic should be sensitive to the fact that human-bat relationships are extremely complex, involving factors ranging from the importance of ecosystem services to the myths, legends, and fears surrounding bats. This affects not only what research is performed but how it is communicated to the public.

Although further research to assess the real disease risk is advisable, greater attention must also be paid to science communication to avoid misinformed risk perception that could undermine long-term conservation efforts. Whereas fear is easy to create and difficult to eliminate, it requires time and persistence to inculcate love and respect for nature. Thus, in any publication, scientific or not, it is not enough to superficially mention some of the ecosystem services bats provide. Benefits need to be given enough attention to provide a comprehensive picture of the human-bat relationship.

We should never forget the lasting consequences of our messages and how journalists and the public will interpret our words. In a world experiencing the rise of social media as the most powerful tool for science communication, it is time for scientists to make an extra effort to consider the social implications of our discoveries. We can no longer ignore the public response.

Adrià López Baucells

PhD student in bat ecology and conservation

University of Lisbon

Portugal

Boundaries for biosecurity

In “Biosecurity Governance for the Real World” (Issues, Fall 2016), Sam Weiss Evans offers three plausible ways to correct poor assumptions that frame so-called “dual-use research of concern.” I want to focus on one of these ways: that security itself should not be considered in isolation from the broad range of values that motivate the quest for knowledge.

Much of dual-use research of concern touches on biodefense research: research to prevent a naturally occurring or intentionally caused disease pandemic. Indeed, much of the appeal of the 2011 avian influenza studies that Evans discusses reduces to claims about the value of this research in saving lives that may be taken in the future by influenza. In saying this, advocates of such research point out, I think correctly, that security is best taken as a broad appeal to protecting value, such as the value of human life, against loss.

This suggestion is a heresy for biosecurity and biodefense. By heresy, I mean an idea that runs contrary to established doctrine. That isn’t intended as a critique of Evans—indeed, the intent is quite the contrary. The idea stands as an invitation to consider the political philosophy of science and to view security in the context of a range of other values.

The heresy emerges because the unspoken calculation that endures behind dual-use research of concern assumes that it is, on balance, worth pursuing. To echo the National Academies’ 2003 report Biotechnology Research in an Age of Terrorism, often referred to as the “Fink Report,” modern virology has given us great benefits. But as Regina Brown and I argued in “The social value of candidate HIV cures: actualism versus possibilism,” published in 2016 in the Journal of Medical Ethics, these benefits are at best incompletely realized and often poorly distributed. A large portion of the world’s poor lacks access to modern biotechnology, and the future does not promise a positive change in this disparity. Even in the United States, the significance of different threats to human health and well-being—to the security of human health against loss—is stratified between the research haves and have-nots in ways that don’t reflect the average person’s lived experience. We live in a world where Americans lose as many life years annually to suicide or migraines as they do to HIV/AIDS, yet as my research has found, these diseases differ in one key institutional driver—funding—by more than a hundredfold.

None of this is to suggest that we should abandon influenza research, which would surely cost many lives by delaying the development of vaccines and therapeutics against a deadly infectious disease. There is more to pursuing knowledge, moreover, than saving lives. But the upshot of Evans’s analysis is that we always restrict life-saving science: the unspoken calculation is always whose life we save with research.

The most recent deliberations on dual-use research of concern, conducted by the National Science Advisory Board for Biosecurity, made headway into this heresy by claiming that there are some types of research that are, in principle, not worth pursuing because the potential risks do not justify the benefits. Left undiscussed was whether the institution of science is adequately structured to promote human security. Evans calls attention to this heresy in biosecurity debates, and I sincerely hope people engage this matter thoughtfully.

Nicholas Evans

Department of Philosophy

University of Massachusetts, Lowell


Correction

The article “Seventeen Months on the Chemical Safety Board” by Beth Rosenberg in the Summer 2016 edition of Issues contained several errors. The public hearing said to have taken place in October 2014 actually took place in January 2014; and the public hearing in Richmond, California, said to have taken place in February 2013 actually took place in April 2013 (there was no February 2013 public meeting). A complete transcript of the April 2013 meeting is available on the Chemical Safety Board’s website (http://www.csb.gov/assets/1/19/0503CSB-Meeting.pdf). Also, the article misstates how National Transportation Safety Board (NTSB) leaders are selected. The president appoints members to five-year terms and chooses a chair and vice-chair to serve for two-year terms. Tradition at NTSB is that the president seeks the consent of the other board members when deciding whether to extend the terms of the chair and vice-chair. These errors have been corrected in the online version of the article. In addition, one of the editors of Issues, Daniel Sarewitz, is the brother-in-law of the author of the article. The article meets the standards for publication in Issues.
