Are cops on the science beat?
In “The Science Police” (Issues, Summer 2017), Keith Kloor alleges that self-appointed sheriffs in the scientific community are censoring or preventing research showing that the risks from climate change are low or manageable. His complaint draws support from scientific articles that, he claims, suggested that “a main avenue of climate research (natural variability) should be ignored” and that discouraged climate scientists from investigating a recent phenomenon often identified as a “pause” or “slowdown” in the rate of global warming.
We authored those articles, and we stated the exact opposite. Contrary to Kloor’s fabricated claim, we encouraged research on natural climate variability, including the recent alleged slowdown in warming.
The idea that warming has paused or stopped originated with contrarian opinion articles in the media—rather than in the scientific literature—but it was picked up by researchers and assumed the status of a significant scientific phenomenon. To date, more than 225 articles have been published on the issue.
A number of articles, including ours, subjected the recent slowing in the warming rate to thorough statistical analysis and concluded that the data do not justify the notion of a pause or hiatus in global warming. Warming clearly slowed at some point early in the twenty-first century, just as it accelerated at other times, such as during the past four to five years, but it never stopped or paused in a statistically meaningful sense. Thus we argued that the terms “pause” and “hiatus” were misleading.
That said, it would be impossible to draw any conclusions about a pause or its absence without research on the nature and causes of global temperature fluctuations. That is why one of our articles cited by Kloor contained an entire section titled “the merits of research on the pause.” This section noted that “The body of work on fluctuations in warming rate has clearly contributed to our understanding of decadal variations in climate.” We went on to specify some achievements of that research.
In another publication to which Kloor refers, we stated unambiguously that “Our conclusion does not imply that research aimed at addressing the causes underlying short-term fluctuations in the warming trend is invalid or unnecessary. On the contrary, it is a legitimate and fruitful area of research.”
We were, however, concerned about the way the slowdown in warming gained a foothold in the scientific literature under the label “pause” or “hiatus,” without much statistical support. We argued that this might have arisen as a consequence of the “seepage” of climate denial into the scientific community. That is, although scientists are trained in dealing with uncertainty, there are several psychological reasons why they might nonetheless be susceptible to contrarian argumentation, even when rebutting those arguments. For example, the constant mention of a “pause” in political and public discourse may lead scientists to adopt the term even if its meaning is ill-defined or inappropriate, and even if the notion has little statistical support.
Far from discouraging scientists from pursuing any particular line of research, our work provided pointers to assist scientists in avoiding rhetorical traps set by politicians and political operatives in the future.
As we have shown elsewhere, contrarian discourse about climate science is incoherent and misleading, and suffused with rhetorical tools aimed at disparaging climate science and climate scientists. Kloor’s fanciful specter of a “science police” was partly based on claims about our work that were reversals of what we actually said. The science police concocted in this way is thus another rhetorical tool to discredit those who defend the boundary between science and pseudoscience or politics. The science police label facilitates the kind of seepage we recently observed relating to the “pause.” Why a respected journal such as Issues in Science and Technology chose to publish such misrepresentation without fact-checking is a topic for further discussion.
Keith Kloor raises important concerns, but he is not able to arrive at a clear conclusion about what, if anything, ought to be done about them. Though he presents a handful of alarming anecdotes, he cannot say whether these represent the exception or the rule, and it makes a difference whether scientific discourse mostly works, with a few glaring exceptions, or is pervasively broken.
Real life does not distinguish as clearly as Kloor attempts to do between scientific and ideological considerations. The article by Roger Pielke in FiveThirtyEight, which Kloor discusses at length, certainly attracted ideological responses, but it was also widely criticized on scientific grounds. Pielke and his critics, such as William Nordhaus, have published arguments for and against in peer-reviewed journals. For an editor or reviewer who sincerely believes that someone’s methods are deeply flawed and his or her conclusions factually incorrect, declining to print that work is an act not of censorship but of responsible peer review or journalism. To do otherwise risks contributing to the phenomenon Maxwell Boykoff and Jules Boykoff call “Balance as Bias.”
However, drawing a bright line between responsible policing for accuracy and irresponsible policing for ideological purity is often impossible. In The Fifth Branch, Sheila Jasanoff distinguishes “research science” (narrowly disciplinary, with strong consensus on methods) from “regulatory science” (intrinsically interdisciplinary, with experts holding diverse views about the soundness of methods as well as strong political views). In regulatory science, such as research on climate change, what some see as purely scientific judgment that certain work is shoddy may look to others like politicized censorship.
In 1980, Alan Manne and Richard Richels found that expert engineers’ political views about nuclear energy strongly influenced their scientific judgments about apparently unrelated factual questions.
In Science, Truth, and Democracy, Philip Kitcher considers whether some scientific questions, such as hereditary differences in intelligence, ought not to be pursued because of the potential for even solid empirical results to be misused politically. Kitcher argues that certain research ought not to be done, if it is likely to cause more harm—through political abuse of its results—than good. However, he also recognizes that censorship would likely cause even more harm than the research. He concludes that the question whether to undertake a potentially politically dangerous line of research should rest with the conscience of the researcher and not with external censors, however well-intentioned.
It is important to keep outright falsehoods out of journalism and the scientific literature. Creationism and fear-mongering about vaccine safety do not deserve equal time with biological and medical science. But in matters of regulatory science, where there is no clear consensus on methods and where factual judgments cannot be strictly separated from political ones, the literature on science in policy offers strong support for keeping discourse open and free, even though it may become heated. Yet it also calls on individual scientists to consider how the results of their research, and their public statements about it, are likely to be used.
Imagine you’re a postdoctoral researcher in a nonempirical discipline, and you draft a paper whose conclusions run contrary to the dominant normative beliefs of most of your field’s senior scholars. Despite your confidence in the article’s rigor, you may be apprehensive about submitting it, lest it become mired in hostile peer review or, even worse, earn you a reputation harmful to your career. The safe course would be to remain within the accepted range of discourse and not submit the draft.
The evaluation—actual or merely feared—of scholarship based in part on its congruence with prevalent assumptions and politics has led to accusations that some nonempirical disciplines are vulnerable to groupthink and to cycles of fashionable leading theories. This is troubling, both because of the increasing political homogeneity of these fields’ members and because work in these subjects can improve our understanding of society.
Researchers in empirical fields may believe that their research is not so vulnerable. Of course, empirical research is generated and assessed by humans with prejudices and desires, both conscious and unconscious, and has never been fully immune to biases and recalcitrant dominant paradigms. Yet political debates can and sometimes do infringe on scientific processes in ways that are more systematic and widespread, and carry greater risks, than the occasional biases of individual researchers and reviewers.
Keith Kloor draws attention to such a phenomenon in the natural sciences. He describes how the vigilant monitoring of science for its political implications is strongest in “highly charged issues,” especially climate change. One way in which such policing of the climate change discourse is most evident is the expanding application of the “climate change denier” label. In recent years, the smear has been leveled at a growing cohort: those who are appropriately skeptical of some conclusions within climate science; those who emphasize the high expected costs of dramatically abating greenhouse gas emissions; and those who note that nuclear power is an essential, reliable, scalable, zero-carbon energy source.
If Kloor is right—and I believe he is—about this Manichean, with-us-or-against-us approach, then papers in line with peers’ political views will be gently reviewed, whereas those outside them will be more rigorously scrutinized. The obvious consequence will be lower-quality scientific output. In the case of climate change, this will likely also result in suboptimal decision making and policies.
The less obvious risks are political. Those who have been labeled deniers may find a more conducive audience for their conclusions among those who fully reject anthropogenic climate change, strengthening the latter constituency. Those who oppose policies to prevent climate change may take advantage of climate scientists’ internecine fractures. Those who approach climate change as novices may be discouraged by the dogmatism.
Given the stakes to humans and the environment, I believe that scientists are obligated to develop lines of inquiry, conduct research, submit articles, and conduct peer review as free as possible from ideological boundaries.
In “Publish and Perish” (Issues, Summer 2017), Richard Harris has performed a valuable service by exploring some of the problems currently afflicting science. He identifies academic pressures to publish in high-impact journals as an important driver of the so-called reproducibility crisis, a particular concern in the life sciences. We agree with this assessment and add that these issues reflect problems deep in the culture of science.
Today, a junior scientist who has published in a high-impact journal a paper whose conclusions are wrong (provided that the paper is not retracted) is more likely to have a promising career than one who has published a more rigorous study in a lower-impact specialty journal. The problem lies in the economy of contemporary science, whose rewards are out of sync with its norms. The norms include the 3Rs: rigor, reproducibility, and responsibility. The current reward system, however, places greater value on publishing venue, impact, and flashy claims. The dissonance between norms and rewards creates vulnerabilities in the scientific literature.
In recent years we have documented that impact is not equivalent to scientific importance. As Harris observes, some of the highest-impact journals have had to retract published papers as a result of research misconduct. When grant-review and academic-promotion committees pay more attention to the publication venue of a scientific finding than to the content and rigor of the research, they sustain the incentives for shoddy, sloppy, and fraudulent work. This flawed system is further encouraged and maintained by top laboratories that publish in high-impact journals and benefit from the existing reward system while creating a “tragedy of the commons” that forces all scientists to participate in an economy where few can succeed. The perverse incentives created by this process threaten the integrity of science. A culture change is required to align the traditional values of science with its reward system.
Nevertheless, the problems of science should be viewed in perspective. Although we agree that reforms are needed and have suggested many steps that can make science more rigorous and reproducible, we would emphasize that science still progresses even though some individual studies may be unsound. The overwhelming majority of scientists go to work each day determined to do their best. Science has improved our understanding of virtually every aspect of the natural world. Technology continues its inexorable advance. This is because, given sufficient resources, the scientific community can test preliminary discoveries and affirm or refute them, building upon the ones that turn out to be robust. The ultimate success of the scientific method is sometimes lost in the hand-wringing about poor reproducibility. Scientists test each new brick as it is received, throwing out the defective ones and building upon the solid ones. The ever-growing edifice of science is therefore sturdy and continually reinforced by countless confirmations. Although there is no question that science can be made to work better, let us not forget that science still works.
The case that Richard Harris presents in his article and in his damaging book suffers from three significant problems.
First, the wrong question is being asked when statistics are cited about how many results are not ultimately supportable. It’s like asking how many businesses fail versus the number that succeed—far more fail, of course. Does that mean people shouldn’t start new businesses? Does that mean there must be better ways to start businesses? Do we expect a foolproof, completely replicable method of starting a business? Of course not. Science is a process riddled with failure; failure is not just a step along the way to eventual success but a critical part of the process. Science would come to a dead stop if we insisted on making it more efficient. Messy is what it is, and messy is what makes it successful. That’s because it’s about what we don’t know, remember.
Second, those results that turn out to be “wrong” are wrong only in the sense that they can’t be replicated. This is a superficial view of scientific results. Scientific data are deemed scientific because they are in principle replicable—that is, they do not require any special powers or secret sauces to work. Do they have to be replicated? Absolutely not. And most scientific results are not replicated: that would be a tremendous waste of time and resources. Many times the results become uninteresting before anyone gets around to replicating them. Or they are superseded by better results in the same area. Often they lead to a more interesting question and the older data are left behind. Often they are more or less correct, but now there are better ways of making measurements. Or the idea was absolutely right, just that the experiment was not the correct one (there is a famous example of this in the exoplanet field). Just counting up scientific results that turned out to be “wrong” is superficial and uninformative.
The third offense, and by far the worst, is the conflation of fraud with failure. For one thing, this is logically wrong: the two belong to different categories, one intentional and criminal, the other unintentional and often the result of attempting something difficult. Conflating them leads to unwarranted mistrust of science. Fraud occurs in science at a very low rate and, when discovered, is punished as the criminal activity that it is, through imprisonment, fines, debarment, and the like. This has absolutely nothing to do with results that don’t hold up. They are not produced deceitfully, nor are they intended to confuse or misinform. Rather, they are interim reports that welcome revision, following the normal process of science. Portraying scientists as no more trustworthy than the tobacco executives who lied to Congress encourages the purveyors of pseudoscience.
The crisis in science, if there is one, is the lack of support and the resources the nation is willing to devote to training and research. All the other “perversities” Harris claims emanate from that one source. This can be fixed by the administrative people he lists at the end of his article—and unfortunately not by any scientist, leading or otherwise. So why is he casting scientists as the perpetrators of bad science?
Power of partnerships
In “It’s the Partnership, Stupid” (Issues, Summer 2017), Ben Shneiderman and James Hendler advocate for a new research model that applies evidence-based inquiry to practical problems with vigor equal to that previously reserved for pursuits of basic science. The Center for Information Technology Research in the Interest of Society (CITRIS) and the Banatao Institute at the University of California, where I work, take this premise as an essential element of our mission. Research initiatives in sustainable infrastructures, health, robotics, and civic engagement, along with affiliated laboratories and a start-up accelerator, have given rise not only to successful commercialization of research but also to effective partnerships with industry, government agencies, and the nonprofit sector.
Iterative and incremental development, in which ideas are tested and refined through give-and-take with stakeholders throughout the process, results in better outcomes for end users and greater impact for the inventor or team than working in isolation. Whereas conventional attitudes might relegate interaction with partners to resolving tedious details of implementation, the proposed model can present real technological and intellectual challenges that advance the science as well as the solution. Some examples from CITRIS investigators include:
- Applications of sensor technology to improve energy efficiency in buildings while preserving privacy of individual occupants.
- Innovations bringing together experts in robotics and machine learning with farmers to develop tools for precision agriculture.
- Development of noninvasive medical devices to monitor blood sugar levels or fetal oxygenation.
- Online platforms for engaging citizens in feedback, deliberation, and decision making, involving communities in the City of Vallejo and throughout California, as well as in Uganda, Mexico, and the Philippines.
Permeable boundaries between industry and academia have long prevailed in science and engineering fields, often driven by motivated individuals—faculty members serving as consultants, or industrial fellows spending time on campus. Building on these important relationships, institutional leaders can create a more sustainable and productive model by fostering a welcoming environment for collaborations among organizations. Addressing complex problems involving multiple systems and stakeholders will require an interdisciplinary approach, beyond the scope of an individual researcher or single lab.
How can universities encourage such partnerships, or at least reduce some of the friction that currently impedes their adoption? Shneiderman and Hendler provide useful guidelines for partnerships in their “pillars of collaboration,” and they hail signs of culture change among universities in their policies for tenure and promotion and among funding agencies in their calls for proposals. Universities could go further in three ways: recognize faculty for evidence of work that results in products, policies, or processes beyond (or in addition to) academic publications; simplify the framework in research administration, often a confusing thicket of internal regulations, for working with off-campus organizations; and support and develop career paths for research facilitators, specialized project managers who straddle the worlds of academic research, industrial partnerships, and community engagement. As we face increasingly complex global challenges, these steps can maximize the positive impact of collaborative research through the power of partnerships.
The notions of societal engagement presented by Ben Shneiderman and James Hendler resonate deeply with public research universities, especially those with land-grant heritage. Collaboration among academia, government, and industry is richly woven into our histories as we conduct research and cultivate a next-generation science, technology, engineering, and mathematics workforce to advance the national interest. Members of the Association of Public and Land-grant Universities are particularly invested in working with local, state, regional, national, and international organizations to address societal needs. We celebrate the more than 50 public universities that have completed a rigorous self-evaluation and benchmarking exercise to earn the association’s designation as an Innovation and Economic Prosperity University, demonstrating their commitment to economic engagement.
Though there are certainly deficiencies in the linear model that runs from basic research to applied research to development, we caution that inquiry-driven fundamental research still plays a vital role in building a foundational knowledge base. Surely, Shneiderman and Hendler would agree that a good portion of research needs to follow a theory-driven approach without knowing in advance its potential practical impact. Such research is by no means in conflict with service to society; in fact, many of the most pioneering innovations can trace their roots to fundamental research that was unconstrained by short-term commercialization aims.
Still, the authors offer an important reminder that universities must redouble their societal engagement through research that addresses the challenges of our time. We agree on the need to accelerate the advance of fundamental knowledge and its application to solving real-world problems. Thus, we are delighted to be early participants in the Highly Integrative Basic and Responsive (HIBAR) Research Alliance and intend to promote this contemporary concept of broadening participation among stakeholders to produce more accessible research. HIBAR builds on the strong foundation of earlier work by the National Academies, the American Academy of Arts and Sciences, and others, and calls for the adoption of transdisciplinary, convergence, and Grand Challenge research approaches. This collective effort aims, crucially, to promote partnerships and to advance academic research with increased societal relevance.
Our association is pleased to work alongside partner organizations to further develop HIBAR. This reaffirms our commitment to confronting societal challenges by conducting research focused on real-world problem solving and engaging a diverse range of stakeholders. We believe this emerging and evolving effort will prove key to addressing the most vexing issues facing society and will have a lasting impact.
Ben Shneiderman and James Hendler provide an excellent description of the power of researcher/practitioner partnerships. They describe how partners should agree up front on goals, budgets, schedules, and decision making, as well as on how to share data, intellectual property, and credit.
They also describe a big problem: academic culture often discourages real-world partnerships. They trace this back to 1945, when presidential adviser Vannevar Bush argued that universities best serve the needs of society by disconnecting from those needs. Bush recommended a one-way sequential process whereby ideas begin within purely curiosity-driven research and gradually acquire usefulness while passing through private-sector laboratories to emerge as better drugs, smarter phones, and so on.
But this model agrees poorly with the history of science, so Shneiderman and Hendler question the academic culture that arose from it. I share their view, with an added nuance: Bush was not all wrong. His isolation doctrine does protect some important academic freedoms. However, it weakens others. Consider researchers who have the ability and desire to help solve key societal problems. An isolationist culture restricts their academic freedom to do so. In effect, it says, “If your research is useful, you do not really belong in a university.” This problem hurts us all. Can it be fixed?
Like Shneiderman and Hendler, I am optimistic, although I fear they may have underestimated the tenacity of current academic culture. True, there are encouraging signs, but most previous culture improvement efforts have failed, despite promising indicators. We need more than promise: we need a plausible, evidence-based plan for achieving the needed changes.
Fortunately, well-established principles for improving culture have been developed in recent years. Change efforts should be collaborative, first developing shared goals that are clearly defined and measurable. They must then surpass three critical thresholds that are often greatly underestimated: enough skillful effort must be applied; it must be sustained for long enough; and, once the new normal is achieved, enough people must prefer it.
With this in mind, the Association of Public and Land-grant Universities is building on its legacy of public service by hosting discussions among academic and societal leaders on this topic. By consensus, they clearly defined and named this research mode “Highly Integrative Basic and Responsive” (HIBAR) research, and various partners in the discussions have now formed the HIBAR Research Alliance to further progress. (The previous letter provides additional information about the alliance.) Research partners in the program combine excellence in basic research with societal problem solving through four essential intersections: together, they seek new academic knowledge and solutions to important problems; link academic research methods with practical creative thinking; include both academic experts and nonacademic leaders; and deliver benefits to society faster than basic research alone, though on longer horizons than business time frames.
Alliance members are working to develop promising change strategies. These will target processes for research training, faculty grant allocation, and career advancement. Today, too often individual achievements in one discipline are valued over positive societal impact, creativity, teamwork, and diversity. We can and must improve this. As we succeed, everyone will benefit.
When good intentions backfire
Policy analysts and commentators are fond of pointing out when good intentions can backfire—often for good reason. “The Energy Rebound Battle,” by Ted Nordhaus (Issues, Summer 2017), offers a case in point. While much of the world grapples with finding ways to reduce emissions from the burning of fossil fuels, policies that seek to promote energy efficiency play a central role. But the rebound effect signals a warning. Goods and services become cheaper when they require less energy, and this can stimulate greater demand, supply, or both. As a result, the energy and emissions savings may be smaller than expected, and in extreme circumstances energy use may even rise.
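The arithmetic behind this warning can be sketched in a few lines. The function name and numbers below are purely illustrative assumptions, not drawn from the letter or from any empirical estimate:

```python
# Illustrative rebound-effect arithmetic (hypothetical numbers).
# An efficiency measure promises an "engineering" saving; cheaper energy
# services then stimulate extra demand that offsets part of that saving.

def net_savings(engineering_saving, rebound):
    """Realized fractional energy saving after the demand response.

    engineering_saving: predicted saving as a fraction of baseline use
    rebound: share of the predicted saving offset by increased use
    """
    return engineering_saving * (1.0 - rebound)

# A 25% predicted saving with a 30% rebound leaves roughly a 17.5% realized saving.
print(net_savings(0.25, 0.3))
# A rebound above 100% is the extreme "backfire" case: net energy use rises.
print(net_savings(0.25, 1.2))
```

A rebound between 0 and 1 merely shrinks the savings; a rebound greater than 1 reproduces Jevons’s seemingly paradoxical case, in which efficiency gains increase total energy use.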
That the rebound effect exists is not controversial, but there is a wide range of estimates of its magnitude. One reading of the literature is that the estimates are smaller when measured more carefully and larger when more speculative. But it is precisely the more speculative, macroeconomic settings where the potential consequences of the rebound effect are likely to be most important. Whether people leave LED lights on longer than incandescents may be less consequential than how energy efficiency shapes the overall means of production in an economy. The English economist William Stanley Jevons raised these questions back in 1865. Today, research is still needed to get empirical traction on his seemingly paradoxical result.
But there is arguably a more immediate challenge to our understanding of energy efficiency policy. A growing number of studies find wide gaps between the predicted energy savings that come from engineering models and the realized energy savings that arise after adopting new technologies. The rebound effect can explain some of the difference, but the magnitudes are large enough to raise important questions about whether many current efficiency forecasts are overly optimistic. Gaining a better understanding about the potential for such systematic bias is of first-order importance, as it takes place upstream of any subsequent rebound effects.
Finally, it is worth noting the underlying reason why the rebound effect is a potential concern when the objective is to reduce emissions. Promoting energy efficiency is an indirect way to reduce emissions. Policies that instead limit emissions directly or put a price on them (for example, a carbon tax) are not susceptible to rebound effects; in these cases, cost-effective compliance creates an incentive for greater energy efficiency without perverse secondary effects. In reality, politics may explain the focus on energy efficiency as a matter of expediency, but the long-term goal should be a more direct linkage between our policies and our objectives.
Returning from the brink
As Sheila Jasanoff suggests in “Back from the Brink: Truth and Trust in the Public Sphere” (Issues, Summer 2017), we are unlikely to successfully resolve the current crisis in the politics of truth through simple appeals to trust in the authority of science. Not only does the historical record she cites show that controversies about policy-relevant science are a recurrent feature of politics in the United States, but there is also no reason to expect such debates to ever disappear.
Virtually every area of policy making today involves technical expertise, and if one includes the social and behavioral sciences, it is difficult to think of exceptions. Moreover, science controversies rarely concern the most solid and well established “core” of scientific knowledge. Instead, these disputes typically take place near the frontiers of research, where new knowledge and emerging technologies remain under construction and evidence is often incomplete and provisional.
When uncertain science meets controversial policy choices and conflicting values, a simple distinction between facts and values tends to break down. Indeed, setting the evidentiary threshold needed to justify treating a scientific claim as a policy-relevant fact becomes a value-laden decision.
In such a context, appeal to the authority of contested “facts” is a weak form of argument, easily dismissed as grounded in bias. Reaffirming our commitment to democratic values, inclusiveness, principles of open decision making, and basic norms of civil public debate offers a more promising strategy for advancing the goal of producing Jasanoff’s “serviceable truths.” This is especially true if this commitment is coupled to a concerted effort to hold accountable those who violate those norms or enable others to do so.
Eyes on AI
In “Should Artificial Intelligence Be Regulated?” (Issues, Summer 2017), Amitai Etzioni and Oren Etzioni focus on three issues in the public eye: existential risks, lethal autonomous weapons, and the decimation of jobs. But their discussion creates the false impression that artificial intelligence (AI) will require very little regulation or governance. When one considers that AI will alter nearly every facet of contemporary life, the ethical and legal challenges it poses are myriad. The authors are correct that the futuristic fear of existential risks does not justify overall regulation of development. This, however, does not obviate the need for monitoring scientific discovery and determining which innovations should be deployed. There are broad issues as to which present-day and future AI systems can be deployed safely, whether the decisions they make are transparent, and whether their impact can be effectively controlled. Current learning systems are black boxes, whose output can be biased, whose reasoning cannot be explained, and whose impact cannot always be controlled.
Though supporting a “pause” on the development of lethal autonomous weapons, the authors sound out of touch with the ongoing debate. They fail to mention international humanitarian law. Furthermore, their examples of “human-in-the-loop” and “human-on-the-loop” systems—Israel’s Iron Dome and South Korea’s sentries posted near the demilitarized zone bordering North Korea—are existing systems that have a defensive posture. Proposals to ban lethal autonomous weapons do not focus on defensive systems. By using these examples, however, the authors create the illusion that the debate is primarily about banning fully autonomous weapons. The central debate is about what kind of “meaningful human control” should be required to delegate the killing of humans to machines, even machines “in” or “on” the loop of human decision making. To make matters worse, they suggest that a ban would interfere with the use of machines for “clearing mines and IEDs, dragging wounded soldiers out of the line of fire and civilians from burning buildings.” No one has argued against such activities. The paramount issue is whether lethal autonomous weapons might violate international humanitarian law, initiate new conflicts, or escalate existing hostilities.
The authors are strong on the anticipated decimation of many forms of work by AI. But to date, political leaders have not argued that this requires regulating AI or relinquishing research in it. Technological unemployment is not an issue of AI governance. It is a political and economic challenge: How should we organize our political economy in light of widespread automation and rapid job loss?
From cybersecurity to algorithmic bias, from transparency to controllability, and from the protection of data rights and human autonomy to privacy, advances in AI will require governance in the form of standards, testing and verification, oversight and regulation, and investment in research to ensure safety. Existing governmental approaches, dependent on laws, regulations, and regulatory authorities, are sadly inadequate for the task. Governance will increasingly rely on industry standards and oversight and on engineering means to mitigate risks and dangers. An enforcement regime to ensure that industry acts responsibly and that critical standards are followed will also be required.
In “Philosopher’s Corner: Genome Fidelity and the American Chestnut” (Issues, Summer 2017), Evelyn Brister presents a well-written and balanced account of the state of affairs regarding efforts to work around the blight plaguing these trees, and as someone in the middle of this work, I have nothing to debate. But I would like to clarify and expand on some points.
Her claim that “Restoring the American chestnut through genetic engineering adds about a dozen foreign genes to the 38,000 or so in its genome” needs some clarification. It is true that we have tested dozens of genes singly, and in combinations of two and three genes, but the first trees we will use in the American Chestnut Research and Restoration Project will have only two added genes. That is a small point; the more important one is that the genetically modified American chestnut we will use first will retain all of its original genes. It should therefore be as fully adapted to its environment as the original, with only blight resistance added. Unlike hybrid breeding, which may introduce genes for unwanted traits, such as short stature or reduced cold hardiness, genetic engineering keeps all of the original genes intact and adds only a couple of new ones.
In our work in the restoration project, we used a gene from wheat that encodes the enzyme oxalate oxidase (OxO) to confer blight resistance in the American chestnut. This enzyme detoxifies the oxalic acid that the troublesome fungus uses to attack the tree. So it basically disarms the pathogen without harming it. But this OxO gene isn’t unique to wheat. Oxalate oxidase enzymes are found in all grains tested to date, as well as in many other plants, such as bananas and strawberries. In fact, the chestnut itself has a gene that is 79% similar to a peanut oxalate oxidase. So, the “genome integrity” that Brister discusses is not a simple concept, and defining it simply by the source of a few added genes is meaningless. It is better defined by how large a phenotypic, or functional, change is being made and how this affects the organism’s place in the environment. With the American chestnut, the change is very small and allows the tree to return to its natural niche in the forest.
Genetic engineering isn’t the answer to all pest and pathogen problems, but in some cases it is the best solution. It is only one tool, but it is a useful tool that shouldn’t be left sitting idle in the toolbox.
Should a genetically modified, blight-resistant American chestnut be reintroduced to eastern North American forests? Evelyn Brister contends that this question cannot be easily answered by an objective, all-knowing science, but is instead rooted in philosophical concerns about genetic purity and naturalness. Her discussion of genome fidelity and comparison of breeding and genetic modification offer valuable nuance to the public discourse on the American chestnut and genetically modified organisms (GMOs) more generally. But in her focus on philosophies of this tree’s genome, Brister seems to downplay concerns about harm to health and environment and social and economic impacts, noting that GM chestnuts are more likely to cause ecological good than harm, and that “the economic imperialism that has followed corporate control of GMO intellectual property” is a “nonissue” because researchers have pledged to make the GM tree publicly available.
In my own research on chestnut restoration, I have found that there are crucial political, economic, and ecological concerns that drive opposition and hesitation to GM chestnuts, and these concerns extend beyond issues of genome fidelity. Some observers worry, for example, that the blight resistance of a GM chestnut may not be sustained over the long term if the blight fungus adapts or if added genes are silenced, rendering chestnut restoration a costly and wasteful undertaking. Others hesitate to champion a project that has received financial and material support from the biotechnology industry, including ArborGen and Monsanto, fearing that the chestnut is being used as a ploy to sell the US public on the value and necessity of GM trees. Relatedly, there is concern that rapid regulatory approval of a GM chestnut will set a precedent for how commercial GM trees are viewed and regulated in the future.
Still other people are primarily concerned with inadvertent ecological effects: How will a genetically novel tree affect existing forest dynamics, food webs, and carbon cycling? How will it affect the spread of invasive pests, such as the gypsy moth, and health risks, such as Lyme disease? There is some initial evidence, for example, that gypsy moths may feed more heavily on a transgenic variety of chestnut, possibly leading to increases in gypsy moth populations, as Keith Post and Dylan Parry noted in an article in Environmental Entomology. Other research has suggested that chestnut restoration—whether through backcross breeding or GM techniques—may alter the geography of Lyme disease and potentially increase risk of transmission.
In short, opposition and hesitation to a GM chestnut are not merely rooted in philosophical concerns about genome fidelity, but are also centered on the broader political, economic, and ecological impacts that the tree may have in the world.
Brister notes in her conclusion that the debate about a GM chestnut “requires that we weigh metaphysical concerns about genetic purity with practical and ethical concerns about forest diversity,” which suggests that opposition is based primarily on metaphysical concerns whereas support is based on practical and ethical concerns. She deems it likely that “maintaining healthy forests will require not only the use of genetic technologies to modify tree species, but also to control the pests that are killing them,” and she further states that “we can’t afford to miss the value of our forests by getting lost in debates about the trees.”
This line of reasoning is tempting, but also silencing: it closes off debate and insinuates that questioning GM trees may be detrimental to the state of forests more broadly. I would encourage everyone to ask: Where does this idea—that genetic modification of tree species and pests is necessary to maintain forest health—come from, and what evidence is there for it? Perhaps more important, what other options and strategies are overlooked, foreclosed on, or disinvested in when we decide that healthy forests require molecular interventions?
Evelyn Brister describes two research programs that aim to restore the American chestnut to US forests by making it blight-resistant. One program has created a hybridized American chestnut by using traditional genetic backcrossing; the other has created a blight-resistant genetically modified (GM) American chestnut. In her article, Brister explores the likely objections to the GM option.
Brister focuses on loss of “natural integrity” as the main concern raised by the GM chestnut. But as she indicates, the idea of natural integrity is problematic. It is not obvious that a hybridized chestnut has more natural integrity than a GM chestnut. And why should natural integrity matter anyway, especially when forest diversity is at issue?
Brister is right to raise these questions, but there’s more at stake than she suggests. She identifies natural integrity with “genetic integrity” or “purity,” interpreted as something like “closeness to the original genome of the American chestnut.” Certainly, some people will be concerned, in both cases, that the genetic composition of the new chestnut trees lacks purity in the sense of genetic closeness to the ancestor chestnut. But worries about naturalness frequently also concern how something came about, not just what it is composed of.
The degree of worry about both types of chestnuts might be related to the degree of intentional human interference involved in producing them. In the case of the GM chestnut, this is especially likely to lie behind concerns about insertion of a wheat gene to enable resistance to blight. It’s not just that the wheat gene is less natural in the sense that it normally resides in a genetically distant plant. It’s also that the wheat gene could not have gotten there without human agency. Likewise, hybridizing an American chestnut with a domesticated Chinese chestnut draws on the long heritage of human agency required for the creation of domesticated trees.
Opening up questions about human agency, though, introduces other broader concerns about “wildness.” Suppose either of these chestnut varieties is planted in “the wild.” Would the forests into which these trees are introduced remain “wild” after we have deliberately released them? And would they require further human interventions once they have been planted, essentially creating a managed woodland?
Unlike Brister, we think that the potential ethical conflict here is not just about genetic purity, but that much broader wildness values are at stake. These trees will have a genetic makeup determined by people, and they will be planted and managed at a time and place, and for a purpose, determined by people.
Of course, perhaps there is no realistic alternative to a human-originating forest. Or even if there were such an alternative, it may be that the value of forest diversity should indeed outweigh not only genetic purity but also other wildness values. But nonetheless, we should not underestimate the importance of protecting the remaining wildness in US forests.