Machine Smart

The subject of intelligent machines that decide that they don’t have much use for us has haunted our species at least since golems first were mentioned in the Talmud. And more recently, the issue of superintelligence has been worked over by science fiction authors from Isaac Asimov to Vernor Vinge and beyond. We’ve thought about this a lot.

Now philosophers have their turn. Oxford University philosopher Nick Bostrom’s book Superintelligence gives the subject a thorough treatment. His conclusion? We better be damn careful what kind of intelligent machines we build.

Bostrom’s erudition bursts from every page. He has a background in physics, computational neuroscience, and mathematical logic, as well as philosophy. He uses all of these disciplines, and more, to advance his argument, which has four main parts.

Part 1: Machine intelligence is feasible. Bostrom reviews the current approaches to computer-based intelligence and divides them roughly into brain emulation and pure artificial intelligence (AI) approaches, with hybrids and mongrels in between.

Brain emulation intelligence works by completely emulating a human brain—down to the level of neurons and dendrites and cortical columns—in such detail that the person instantiated in that brain comes to life in the artificial medium of computer hardware and software.

Pure AI takes a different course, attempting to build in software a pure artifact that acts intelligently but not in any way that traces a heritage to our native wetware (other than the important detail that we designed the artifact in the first place).


Bostrom maintains that both approaches could feasibly lead to AIs, although he believes that the two approaches have different strengths and weaknesses, and may lead to different future scenarios. Because we are presumably just “running mind software” on a different hardware platform, Bostrom believes that brain emulation AIs are more likely to “be like us,” whatever that means, but pure AIs, because all of the design elements are explicit, may be easier for our minds to comprehend and predict. He concludes that brain-emulation AIs are likely to come on the scene sooner, but that either form may arrive by mid-century.

Part 2: Bostrom then argues that once an AI exists, it may (and likely will) rapidly improve its own intelligence. By “rapidly,” he means within seconds or hours or days, not months or years. He believes that there may be no limit to this self-improvement, to the point where an AI develops what Bostrom calls “decisive strategic advantage” and is able to neuter potential alternatives or adversaries and, rather rapidly, consolidate its power as what he calls a “singleton.” Such a singleton would, in effect, control the future of humanity, what Bostrom calls its “cosmic endowment.”

Part 3: There is no special reason to believe that a singleton’s intentions would be benign. Bostrom discusses at length what might be the “final purposes” (his term; we might call them “ultimate goals” or “life purpose”) of such an all-powerful superintelligence, and how we might influence those purposes. This line of inquiry, which occupies most of the book, is a hash of game theory considerations and speculations about the nature of an AI and its capabilities. How might we, for example, prevent a singleton AI from converting the entire observable universe into paperclips if that were its final purpose?

Part 4: In the final chapters, Bostrom discusses what is to be done. How should we act in the face of what he considers the practical certainty that a superintelligence will be developed—if not within decades, perhaps within a century or two—whose motives might not be benign and whose ability to act on its motives might be unstoppable?

He advises us to, in effect, form a League of Extraordinary Humans whose purpose is to systematically and strategically discuss the emergence of a superintelligence. Not to utterly make fun of Bostrom’s approach, we might call this an Iron Rice Bowl (the Chinese term for occupation-for-life) for Philosophers.

What are we to make of Bostrom’s case?

In the first place, it is a serious argument. If we might in the relatively near future invent our cosmic replacement, then we are required, in the name of humanity’s cosmic endowment (which Bostrom calculates to comprise some 10^58 real or virtual future lives), to give the matter some thought. And Bostrom is quite correct that this kind of problem might benefit from long study. But what are our chances of affecting the outcome?

The core problem is that the leap between today’s “intelligent” software and a superintelligence is unknown, and our temptation is to mystify it. Whether we are building brain emulations or pure AIs, we don’t understand what would make them “come to life” as intelligent beings, let alone superintelligent.

“Machine learning” software today uses a statistical model of a subject area to “master” it. Mastering consists of changing the weights of the various elements in the model in response to a set of training instances (situations where human trainers grade the instances: “yes, this is credit card fraud,” “no, this is not a valid English sentence,” etc.). Clear enough, but it just doesn’t seem very much like what our minds do.
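
To make that description concrete, here is a minimal, purely illustrative Python sketch of the loop the paragraph describes: a model whose weights are nudged in response to human-graded training instances. It is a toy perceptron on invented fraud-style data, not the machinery behind any real system.

```python
# Toy illustration of "machine learning" as weight adjustment in response to
# graded training instances. Hypothetical data; not any real fraud detector.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label), label 1 = "yes, this is fraud",
    label 0 = "no, it is not" -- the grades supplied by human trainers."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # The model's current guess: a weighted sum pushed through a threshold.
            score = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if score > 0 else 0
            # Nudge each weight in the direction that reduces the error.
            error = label - guess
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Hypothetical instances: [transaction amount in $1,000s, foreign-merchant flag]
training_data = [([0.02, 0], 0), ([5.0, 1], 1), ([0.4, 0], 0), ([7.5, 1], 1)]
weights, bias = train_perceptron(training_data)
print(weights, bias)  # the learned weights are the "mastered" statistical model
```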

And the path from this kind of “learning” (it is an anthropomorphism even to call it learning) to what “human-intelligent” agents do is completely unclear.

It might require nothing but simple scale. A small “machine learning” system may be subintelligent, and at some size, if we had enough computing power and enough elements in the model and enough training instances and enough support, intelligence might “emerge.”

This has certainly been the mantra of AI for some decades, and it may have been what technophiles hoped for when IBM’s Watson software beat two Jeopardy champions a couple of years back.

Sadly, Watson has not gone on to master, on its own or even with expert human help, any general corpus of knowledge. At a Watson showcase event last year, the demo apps were all mired in the swamp of endless training and re-training that I recall from my AI days in the ’80s. There was no indication that unleashing Watson on different domains and at different scales was going to lead to general intelligence, although one is free to hope.

Another path to general intelligence, as some Husserlians such as Hubert Dreyfus and other more anthropologically inclined researchers think, may involve human feelings, purposes, or drives. If the AI wanted something badly enough (not to be shut off, for example), the argument goes, then it would learn from its “experiences” and get smarter. Combine “desires” like this with natural selection at scale via a genetic-selection or evolutionary approach, and you might gradually enhance the intelligence of primitive agents. With machine speeds, this could happen quickly.
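
As a rough, hypothetical illustration of that evolutionary route, the Python sketch below applies selection and mutation to a population of trivially simple “agents,” with an arbitrary fitness function standing in for a “drive.” It shows only the mechanics of selection at machine speed; nothing in it produces anything like intelligence.

```python
import random

# Toy evolutionary loop: select the agents that best satisfy a stand-in "drive"
# (an arbitrary fitness function), copy and mutate them, repeat. Illustrative only.

TARGET = [0.7, -0.3, 0.9]  # hypothetical "environment" the drive rewards matching

def fitness(genome):
    # Higher is better: negative squared distance from the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=200, mutation_sd=0.05):
    population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]                # selection
        children = [
            [g + random.gauss(0, mutation_sd) for g in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))          # mutation
        ]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```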

The problem with this approach has been coming up with a mechanical definition of “feelings,” “purposes,” or “drives.” We can write some software that is aimed at doing something, but it is missing something of what we associate with a drive: urgency, existential angst, whatever. Maybe we are confusing the qualia of purpose with the essence of it, and maybe a human-infused purpose can launch software on the road to agency. But at some point it has to have “its own” purposes, whatever that means.

A third approach has been to insist that there is something implicit in our brains that is unique, whether we call this uniqueness “embodied-ness” (with Dreyfus) or “bearing human motivational ancestry” (with Bostrom). Is there something implicit in the organization of our brains that renders us intelligent? If so, then emulating a brain should supply it, unless an emulated brain is like a silk flower. As Dreyfus remarked at one point, we don’t think that the software simulation of a thunderstorm should get us wet, do we? Why should the software emulation of a brain embody whatever makes us intelligent?

This “missing link” between AI software today and general intelligence tomorrow wouldn’t be so important if it weren’t at the heart of Bostrom’s argument about how to control emerging AIs. If intelligence emerges from scale or from endogenous machine “drives” or from embodied-ness, how can we hope to put a governor on the motives of machine intelligences? They would toss our flimsy moral strictures aside as easily as adult humans toss away Santa Claus.

But talking about children does give us some suggestions about an approach to making AIs moral. Sigmund Freud believed that children form a superego at an age when they are “impressionable” but not yet adult in their reasoning. A superego, in his theory, is a moral mechanism that functions imperfectly (filled with demons and fascists as well as avatars of light and Christ figures) but is good enough to guide most adults to a reasonable course of moral behavior. Maybe we can fashion a superego for our young AIs and give them enough guidance to allow them to muddle through when they reach adulthood without turning the entire universe into paperclips or destroying us so we don’t ask them tough questions.

That is Bostrom’s great hope, that we can issue a suitable instruction to emerging AIs (something along the lines of “do the best thing we mean for you to do, even if we can’t say it precisely”) that will constrain their range of possibilities when they become fully superintelligent. All of us would benefit.

Climate Redux: Welcome to the Anthropocene

There are few topics as politically and ideologically contentious as anthropogenic climate change and the possibility of responding by deploying geoengineering technologies. Despite, or because of, all the Sturm und Drang, however, the current discourse is both misdirected and unhelpfully superficial. It is misdirected in that it frames climate change as a problem that can be solved, either through policy or technological silver bullets, rather than a condition, inherent to a planet with seven billion people, which must be managed. It is superficial because although it may be the most visible concern right now, the real challenge is not climate change. The topic that truly deserves our attention is the Anthropocene, the new stage of human history characterized by the growing significance of human actions in the overall state of the planet. Focusing exclusively on climate change is equivalent to treating symptoms rather than attacking disease. The point is not that climate change should not be a concern or that its effects do not have to be managed. Rather, the sad irony is that, despite the best intentions of the participants, the climate change and geoengineering discussions have so far been a way to evade knowledge and responsibility, not to extend them.

In 1992, the United Nations (UN) Rio Earth Summit adopted the UN Framework Convention on Climate Change (UNFCCC), with the objective, stated in Article 2, of achieving “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.” The implementing treaty, the Kyoto Protocol, was adopted in 1997 and entered into force in February 2005. Since 1992, except for a brief dip in 2008-2009 when global economic activity took a sharp dive, emissions of the most important anthropogenic greenhouse gas, carbon dioxide (CO2), have steadily increased; anthropogenic emissions of methane, after a period of relative stability from around 1999 to 2006, are also climbing.

The increasingly obvious failure of the UNFCCC process, combined with growing social concern about global climate change, has led to increasing interest in geoengineering, which the UK Royal Society defines as “deliberate large-scale intervention in the Earth’s climate system, in order to moderate global warming.” Geoengineering technologies are further broken down into two categories. CO2 removal (CDR) technologies remove CO2 from the atmosphere through biological or industrial means. Solar radiation management (SRM) technologies reflect some of the incoming energy from the sun back into space before it has a chance to reach the Earth’s surface. Each of these technologies has a unique mix of potential costs and benefits. For example, one result of high ambient levels of CO2 in the atmosphere is increasing acidification of the oceans with potentially deleterious effects on marine life; CDR technologies would mitigate such effects, whereas SRM technologies would not.

No reputable scientist disputes the planetary greenhouse effect. Mars, with little atmosphere and therefore little atmospheric radiation absorption, is cold, whereas Venus, with a substantial atmosphere composed mainly of CO2, has a surface temperature of about 870 degrees Fahrenheit. That water vapor, CO2, and methane all absorb energy at crucial wavelengths to contribute to increased atmospheric energy content is well established. Nonetheless, to observe that this domain is contentious is a gross understatement. The controversy is not really over whether the Earth is subject to a greenhouse effect, because it clearly is; moreover, human activity produces CO2 and methane that are known to increase the greenhouse effect. But there is disagreement over whether such human influences are meaningful given the complex dynamics involved and over how worried we should be about any resulting changes in temperature and other aspects of climate.

Technically, such questions could be approached objectively, with all parties agreeing on a set of factual predicates for subsequent policy debates. Even the most casual observer will, of course, recognize that this is not what has happened. The language used to characterize those perceived as less committed to immediate action to reduce emissions is notably unscientific: Boston Globe columnist Ellen Goodman wrote that “global warming deniers are now on a par with Holocaust deniers.” In 2011, the UK Energy and Climate Change Minister demanded immediate action on climate, saying that “[g]iving in to the forces of low ambition would be an act of climate appeasement,” and that “[t]his is our Munich moment,” referring to the 1938 Munich agreement that ceded Czechoslovakia to Hitler. James Hansen, a U.S. National Aeronautics and Space Administration climate scientist, wrote in a 2009 article that “coal is the single greatest threat to civilization and all life on our planet,” and that “The trains carrying coal to power plants are death trains. Coal-fired power plants are factories of death.” In turn, their opponents have coined the term “climate Nazis” because of their demands for heavy-handed government regulation. This is the rhetoric of morality. One can rationally discuss science and technology options, but one does not negotiate with evil.

Similar arguments swirl around geoengineering. These fall into two general categories. One involves the uncertainties and potentially significant risks of deploying such technologies. No amount of small-scale research will be sufficient to reliably predict all the results of this planetary experiment. The second category is the “moral hazard” argument: no geoengineering technology should be researched, developed, or deployed, because making it an option reduces pressure on individuals to reduce greenhouse gas emissions. Supporters of this position argue that major changes in lifestyle and perspective are very seldom achieved without significant forcing pressure. Opponents respond that a refusal to research geoengineering is an unethical form of social engineering.

Both the climate change and geoengineering debates are premised on a false dichotomy. The choice is not between the Kyoto Protocol and geoengineering. Rather, the choice is between a world view in which human activity has only isolated effects on the planet and an acceptance of a new reality in which human activity is unavoidably a major Earth system.

Anthropogenic climate change is merely a symptom of a far more profound emergent reality. Revisions to the failing Kyoto process or the premature deployment of a powerful technology fix are not what is needed. Rather, what is needed is an understanding that we have now crossed a threshold from a past where humans were but one species wandering the planet to a present where humans and their myriad activities, institutions, and aspirations now increasingly affect all planetary systems. Failure to accept that responsibility by burying one’s head in romantic ideologies or loud pontification at this point in human evolution is not just irresponsible, it is profoundly unethical. And it will have serious implications, for if climate change is the first test of humanity’s ability to operate rationally and ethically in the Anthropocene, we should try hard not to fail, and to learn from the experience. Neither of those outcomes appears probable on current trend.

Adjusting the focus

I am not saying that global climate change should not be addressed, both through mitigation and adaptation, and quite possibly through scaled introduction of geoengineering technologies. What I am saying is that even if we were to reduce the carbon content of the atmosphere to pre-industrialization levels—say, 280 ppm CO2—we wouldn’t be restoring the planet to its pre-industrial state. Complex adaptive systems do not have a default setting to which they can revert. We can’t de-Anthropocene the planet.

And this is the nub of the issue. The climate change phenomenon, and the debates swirling about it, are worrying, but not just because they may challenge the adaptive capability of individuals, societies, institutions, and other species. They are worrying because they illustrate, all too clearly, the inadequacy of our nascent efforts to respond to the challenges of the Anthropocene—the Age of Humans. If climate change and other similar issues, such as reductions in evolved biodiversity or perturbations in the nitrogen, hydrologic, and phosphorus cycles, are isolatable problems that can be addressed by the familiar methods of reductionism and environmental regulation, we are psychologically and institutionally prepared to respond appropriately. If, however, climate change is simply one of a number of coupled emergent behaviors generated by seven billion people with their vast array of institutions, cultures, and economic and technological systems, that approach is no longer viable. And the first step in adjusting to that reality is shifting to an adequate framing of the reality of the systems we’re dealing with.

The first challenge, then, is simply to recognize that we are, in fact, emerging into the Anthropocene. In this new era in the history of our planet, human activity is surfacing as one of the most important Earth systems, rivaling and stressing the natural systems that govern the planet’s habitability. To ensure a sustainable future as a planetary species, humanity needs to develop the capacity to manage these complex, interwoven systems. Developing this capacity requires that we adopt an integrated planetary perspective. Too often, we view humanity as an imposition on the planet. In this view, Earth can be restored to a pastoral golden age by reducing (ideally, removing) the human influence from nature. This perspective fundamentally misunderstands the Anthropocene as an event that can be reversed.

A more productive perspective is to view the Anthropocene as a natural transition resulting from a very recent innovation: the evolution of tool-using intelligence and the consequent rise of technological civilization. This innovation is as irreversible and disruptive to Earth’s systems as previous major evolutionary innovations, such as the evolution of land plants, the development of skeletons, the origin of multicellular organisms, and the invention of oxygen-producing photosynthesis. Like these prior milestones in the history of life on Earth, the genie of tool-using intelligence cannot be stuffed back into the evolutionary bottle. Evolution is never retrograde. Instead, we need to aim for pragmatic, sustainable design and management of a planetary ecosystem that includes the human system as an integral, permanent, and constantly evolving component—an intelligent part that can impact the planet thoughtfully, as well as thoughtlessly.

As the previous discussion has made clear, we are far from having the capacity, as a species, to be responsible designers and managers of our planetary ecosystem, despite the clear and present need. The good news is that our comprehension of the physical, chemical, and biological systems that go into making a habitable planet has deepened dramatically in recent decades. Although there is much yet to learn, our knowledge is expanding at least as rapidly as our recognition of the environmental, energy, and resource challenges before us.

At the same time, the accelerating pace of technological evolution challenges our insight into the human system that drives the Anthropocene. Areas as diverse as nanotechnology, biotechnology, information and communication technology, robotics, and cognitive sciences are advancing in ways that are ever more complex, rapid, and difficult to predict, but that are converging in a way that makes humanity itself a design space. The redesign of the human as currently constituted is an increasingly probable scenario. These changes will not only accelerate human effects on Earth’s natural systems, but also pose significant and as-yet-unpredictable challenges to the social systems that modulate these effects. We do not yet have the capability as a species to anticipate and respond ethically, rationally, and responsibly to these coming challenges. That said, we can at least begin to develop some basic principles that would support more effective institutional and policy responses.

The correct answer is none of the above. The challenges of the Anthropocene are not “problems” with “solutions”; rather, they are conditions, often highly coupled to other conditions and systems, that can at best be managed.

Be prepared. An important mechanism for managing anthropogenic challenges is the conscious cultivation of technological, institutional, and social options—a toolkit for adapting rapidly to changes in Earth systems.

Practice makes perfect. Borrowing the techniques of defense and foreign policy strategists, use scenarios and games to expand institutional perception, thinking, and agility.

When in doubt, doubt. The complex adaptive systems that characterize the Anthropocene are inherently unpredictable; it follows that predictions regarding future paths and outcomes should always be regarded skeptically.

Diversity enables adaptability. The inherent unknowability of complex adaptive systems privileges pluralistic institutions and cultures.

Scale matters. Many anthropogenic systems behave in a linear fashion at small scale but can become unpredictably non-linear at larger scales.

Stay in school. Because the Anthropocene is characterized by evolving conditions that will confront us with new ethical, social, and technological challenges, it demands continuous learning.

The conflictual, partisan, and superficial environment that has developed around the issue of anthropogenic climate change is unfortunate, not just because it is unproductive and ineffective. It has also served to mischaracterize and disguise the full magnitude and complexity of the challenge posed by the Anthropocene, an era in which return to any sort of fabled, golden, pastoral age is fantasy. What is needed now is not policy “solutions,” which for the most part will prove partial and inadequate, nor technological silver bullets, which are likely to be far more disruptive and costly than expected. Rather, what is needed is the courage to perceive and accept the world as it is today, and to appreciate the difficulty of responsibly managing it. That the planet is increasingly shaped by the activities and choices of one species cannot be denied; that we know how to do so consciously and ethically cannot be confidently asserted. That is the real challenge we face at the beginning of the Anthropocene.

Braden Allenby ([email protected]) is President’s Professor, Civil, Environmental, and Sustainable Engineering, and Lincoln Professor of Engineering and Ethics at Arizona State University.

Have Universities Overbuilt Biomedical Research Facilities?

In a September 10, 2010, Science editorial, Bruce Alberts called attention to what he perceived to be an overbuilding of biomedical research facilities by the nation’s universities and medical schools, driven by the federal government policy of providing reimbursement of the amortization costs incurred by institutional borrowing to construct these facilities. Based on this premise, Alberts called for a reconsideration of that policy, a plea recently reiterated in the highly cited July 1, 2014, Proceedings of the National Academy of Sciences article that Alberts co-authored with Harold Varmus, Shirley Tilghman, and Mark Kirschner. That article also addressed a number of other issues, such as the size of the biomedical research workforce, which we do not consider here.

The profound implication of a change in reimbursement policy compels us to carefully examine the premise that there is an excessive amount of biomedical research infrastructure. Were the government to cease providing reimbursement of these amortization costs, the ability of most universities and medical schools to offer research facilities when they are needed would be limited drastically. Although the government has made academic biomedical research a national priority since the end of World War II, and underscored that priority during the recent five-year doubling of the National Institutes of Health (NIH) budget, it has not provided significant funds for the construction or renovation of the buildings in which the research is to be performed since 1970. Instead, the academic institutions must pay the construction or renovation costs up front and hire research teams, often with substantial “start-up packages,” in the hope they will be successful in competing for federally sponsored research funds. Frequently, the institutions must borrow the money needed for construction. When the institutions’ staff members are successful in obtaining federal research grants, a portion of the construction costs are recovered via indirect cost reimbursement, based on the fraction of the space used for federally funded research. Without this reimbursement, few institutions could afford the construction. It is this reimbursement of borrowed money, recovered over the estimated lifetime of a facility, that the Alberts proposal would eliminate.

It should be noted that this federal government approach of expecting institutions to construct facilities and hire research staff is of substantial benefit to the government and the nation. Because the government’s extramural research programs do not have a commitment to the specific facilities institutions have constructed or to the researchers on an institution’s payroll, they can fund the research proposals that the peer review system deems to be the best. The institutions take on all the financial risk in creating the facility and the substantial financial expense in its initial staffing and operations.

To our knowledge, however, no data have been provided to support Alberts’s premise that there is more national biomedical research space than is necessary to meet the nation’s needs. We seek to remedy that situation somewhat by providing such an assessment, admittedly coarse-grained, and to present some of the policy implications that would result.

A useful proxy for the need for academic biomedical research space is the amount of NIH research and training funding competitively awarded to researchers at universities and medical schools. That funding is the primary determinant of the numbers of researchers and trainees, the range of investigative activities, and the research instrumentation and special facilities that generate the need for research space.

Figure 1. Ratio of academic biomedical research space to inflation-corrected NIH academic research and training funding (square feet per million constant dollars), fiscal years 1987–2011.

Figure 1 presents the evolution of the ratio of academic biomedical research space to inflation-corrected NIH academic research and training funding from fiscal years 1987 to 2011. That ratio is essentially constant at slightly below 7,000 square feet per million dollars between 1987 and 1995. It then drops sharply to a minimum value of about 4,300 sq ft/$M in 2003 and rises to a value of only 5,700 sq ft/$M in 2011 (the last year for which data are publicly available).
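
The arithmetic behind Figure 1 is straightforward. The Python sketch below computes the space-to-funding ratio from hypothetical space, funding, and price-deflator values chosen only to mimic the trend just described; they are placeholders, not the data underlying the authors’ figure.

```python
# Sketch of the Figure 1 ratio: research space per million inflation-adjusted
# dollars of NIH academic research and training funding. All numbers below are
# hypothetical placeholders chosen to mimic the described trend.

def space_to_funding_ratio(space_sq_ft, funding_nominal_millions, deflator):
    """Square feet per constant (inflation-adjusted) million dollars."""
    funding_real_millions = funding_nominal_millions / deflator
    return space_sq_ft / funding_real_millions

# year: (research space in sq ft, NIH academic funding in nominal $M, deflator)
hypothetical = {
    1995: (58_000_000, 7_500, 0.88),
    2003: (70_000_000, 16_000, 1.00),
    2011: (97_000_000, 20_000, 1.18),
}

for year, (space, funding, deflator) in sorted(hypothetical.items()):
    ratio = space_to_funding_ratio(space, funding, deflator)
    print(f"{year}: {ratio:,.0f} sq ft per inflation-adjusted $M")
```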

If 7,000 sq ft/$M is taken as a reasonable proxy for a “norm,” the data suggest that even in 2011, eight years after completion of the five-year doubling of the NIH appropriation, and one year after the publication of Alberts’s Science editorial, there was a significant “shortage” of academic biomedical research space, and that the space construction initiated during the doubling was in response to real research needs and opportunities. Certainly, these data provide no evidence of overbuilding of that space.

Given the absence of evidence for systemic overbuilding, there is no apparent justification for altering federal reimbursement policies related to the construction of research facilities. As indicated above, the nation is critically dependent on academia’s ability to construct or renovate facilities as the need arises because the NIH has not provided any significant direct funding for construction of extramural research facilities for more than four decades. It seems unwise to put in place policies that discourage such academic construction. To cite just one example, the current Ebola epidemic reminds us that we must retain an adequate supply of space to conduct research on dangerous pathogens in safe, dedicated facilities.

It may be, though, that some institutions or classes of institutions have overbuilt. Suppose, hypothetically, that there is a group of institutions that had the average ratio of space-to-funding prior to the doubling. Then, they built more space in response to the rapid growth in NIH funding, but did not succeed in acquiring the average funding per unit space. For example, during and immediately following the NIH doubling, there was substantial construction of new research facilities in medical schools, across the spectrum of past success rates in winning NIH research awards, as well as recruitment of research faculty expected to continue their NIH funding or obtain new funding. Moreover, many new medical schools, established in the past decade, often with state support and largely justified by concerns over population aging and predictions of physician shortages, have publicized their construction of “state of the art” research facilities and their intention of engaging in biomedical research, developing robust technology transfer programs, and attracting new “high-tech” industries to their regions. If these institutions were removed from the tally on the graph, it would make the situation look even worse for the remaining institutions, whose average space-per-unit funding would have decreased more than what is shown on the graph. This would make a policy change that seeks to limit reimbursement for construction debt even more unwise. This and related possibilities are worthy of analysis if relevant data are available.

It is possible that construction underway in 2011 and subsequently has alleviated or will alleviate the apparent shortage of research space implied by the data in Figure 1, and may even overshoot the mark. That remains to be seen. Similarly, it is conceivable that the practice of biomedical research may evolve in ways such that less space per dollar is needed, although we have difficulty envisioning this. In any event, speculation based on these possibilities is not a sound basis for potentially damaging policy changes or a substitute for careful analysis.

The elimination of government reimbursement of the amortization costs incurred by institutional borrowing to construct or renovate research facilities would markedly decrease the ability of universities and medical schools to provide new or modernized facilities as they are needed to exploit new and exciting technologies, to improve safety, and to respond to new research opportunities and needs. Biomedical research and the nation would suffer as a consequence.

Arthur Bienenstock ([email protected]) is professor of photon science (emeritus) and special assistant to the president for federal research policy at Stanford University. Ann M. Arvin is vice provost, dean of research, and Lucile Salter Packard Professor of Pediatrics and Microbiology and Immunology at Stanford. David Korn is a professor of pathology at Harvard Medical School and the former inaugural vice provost for research at Harvard University.

Informing Public Policy with Social and Behavioral Science

Many of the challenges facing our society today—from military preparedness to climate change—have a social or behavioral dimension, as do the policies considered by government officials to address them. A better understanding of the factors that influence how people act and interact can help policymakers design more effective procedures.

The vast majority of policymakers are not trained as scientists. As a result, they have varying degrees of understanding about how the social and behavioral sciences can help them do their jobs. Likewise, the vast majority of researchers have little to no policymaking experience. As a result, researchers often approach policymakers in ways that policymakers find unhelpful.

As social scientist Robert Cialdini observed at a gathering of researchers and policymakers on Capitol Hill a little over a year ago, if the social sciences were a corporation, they would be renowned for research and development. But they lack a crucial element: a shipping department. Social and behavioral scientists do not have a distribution system to deliver what they know to decisionmakers, packaged in a form that they can use. As a result, a wealth of potentially useful information that could yield practical benefits for the public never realizes that potential.

I propose that the social and behavioral sciences move quickly to develop an efficient and effective “shipping department”—a mechanism for delivering the most useful findings and methods into the hands of public policymakers. Having been both an active researcher and a member of Congress, I have seen how the absence of a concerted effort to effectively communicate social science has limited its impact in critical policy contexts. The social and behavioral sciences can increase their social value by working to translate and transfer their insights to real-world policymakers in ways that stay true to the content of the scientific research, while responding to and reflecting the policymakers’ actual informational needs. In other words, to help policymakers better understand science, it is critical that social scientists better understand the kinds of information that policymakers do and do not need.

To that end, I suggest five actions that can help the social and behavioral sciences make a more positive impact on policy and our society. Although any one of these actions taken independently could increase the relevance and public value of the social and behavioral sciences, each endeavor will have greater value if all are pursued together. Let us consider each of the proposals in turn.

Use a collaborative, consensus process to identify robust scientific methods and findings that are of potential interest to policymakers. The social and behavioral sciences should join together to create a high-level, cross-disciplinary project involving leading experts in communication and learning along with actual policymakers. The goal would be to produce practical, empirically driven, and readily applicable presentations that are accessible and usable for policymakers at different levels of government.

This would, emphatically, not be a typical academic work. Rather, it would be practical and translational in purpose, and it would be guided by an awareness of real-world policy needs and how our methods and findings can help impact that policy. Just as important, it would derive communicative content and presentational strategies from the substantial knowledge base on these topics that the social and behavioral scientists have taken the lead in producing.

From the research side, the consensus process would be guided by the following questions: What do we believe are the most significant principles of how social and behavioral scientists approach questions relevant to public policy? Which of our theoretical insights and robust empirical discoveries are most relevant from a policymaker’s point of view? At the same time, we would ask policymakers to articulate the insights that they most want from the social sciences. In other words, what are the situations in which they would most value the knowledge that social and behavioral sciences produce? Are there situations in which social and behavioral science methods and findings can help policymakers avoid ineffective or counterproductive policies and programs while crafting more effective ones?

An example of the type of outcome that this process could produce pertains to “regression to the mean”—i.e., the long-known tendency in social science research for scores at the extreme high or low ends of a distribution to “regress” toward the mean on subsequent measurement owing to chance factors alone. Researchers are aware of the potential illusory effects of regression to the mean. Policymakers, however, often want to help those at the most extreme ends of the curve, such as students in the very lowest performing schools. As a result, laws may be enacted and programs created whose apparent effectiveness is an illusion. For example, a school where students test at an extremely low level one year is likely to perform at a less-dire level the following year, regardless of any policy intervention, because of the tendency to move toward the mean. This could happen for a variety of reasons unrelated to the new program—for example, if several particularly difficult students leave the school or some high-performing students join. Those unrelated changes, however, could produce higher average scores for the school, leading the policymaker to see an apparent improvement and attribute that improvement to the intervention. Researchers know that there are methodological design and statistical analytic techniques to guard against this error, but again, policymakers are not only unaware of the problem of regression itself, they are also likely to have very little knowledge of research design or statistical techniques.
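
Regression to the mean is easy to demonstrate with a short simulation. The Python sketch below (illustrative, not drawn from any real testing data) selects the lowest-scoring “schools” in one year and simply re-measures them the next year with no intervention; their average rises anyway, because part of every score is noise.

```python
import random

# Simulated regression to the mean: schools selected for extremely low scores in
# year 1 score better in year 2 with no intervention at all. Illustrative only.

random.seed(0)
N_SCHOOLS = 1000
TRUE_MEAN, TRUE_SD, NOISE_SD = 500, 30, 40  # hypothetical score scale

true_quality = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N_SCHOOLS)]
year1 = [q + random.gauss(0, NOISE_SD) for q in true_quality]   # observed scores
year2 = [q + random.gauss(0, NOISE_SD) for q in true_quality]   # no policy change

# "Target" the 50 worst performers as measured in year 1.
worst = sorted(range(N_SCHOOLS), key=lambda i: year1[i])[:50]

avg_year1 = sum(year1[i] for i in worst) / len(worst)
avg_year2 = sum(year2[i] for i in worst) / len(worst)
print(f"Targeted schools, year 1 average: {avg_year1:.1f}")
print(f"Same schools,     year 2 average: {avg_year2:.1f} (no intervention)")
```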

Helping decisionmakers who care greatly about problems and the people affected by them to craft policies that are not prone to this and other comparable errors could help direct more resources to policies that have real effects and avoid policies with spurious or even harmful impacts.

As a second and related example, it might be very useful to provide policymakers with practical tools to understand how the findings of randomized, controlled trials can or cannot be appropriately transferred from one setting or application to another. In general, a collaborative effort among researchers and policymakers to first identify, and then more effectively communicate, methods and ideas for greater policy effectiveness and efficiency is one route to increasing the public value of social and behavioral science knowledge that already exists.

Develop a comprehensive and outcome-oriented entity to create more effective communication strategies. This entity would not just produce content, but also commit to evaluating and making public the relative effectiveness of different science communication strategies. When we discover social and behavioral science knowledge that has the potential to benefit the public, the realization of that potential will depend on the effectiveness with which the information is conveyed. We should develop a means of producing and disseminating such information that makes use of many modes of communication. These modes can include printed products, electronic publication and distribution, and, possibly, a new online journal directed to an audience of both policymakers and social and behavioral scientists. We should also consider developing practical handbooks or, perhaps better yet, massive open online courses (MOOCs) to communicate with the widest possible audience. Another possibility would be developing apps that employ decision trees, algorithms, or augmented analysis and decisionmaking to help policymakers use what the panel develops. In all such cases, we should commit not just to developing content, but also to evaluating the extent to which our target audiences find it valuable. If this entity is linked to the researcher-policymaker collaboration described above, that group could advise it on how best to evaluate the impact of its activities.

This entity’s main target audiences would be policymakers, those who support policymakers, and those who seek to aid policy processes. If sufficiently effective and accessible, these resources could also be used as part of graduate training in the social and behavioral sciences—providing templates for researchers and organizations that want to deliver valuable advice to policy communities. This resource, if sufficiently effective, should also be incorporated into the orientation and other services provided to members and staff on the Hill and in other legislative and policy bodies. The key is to develop content that this population believes is necessary for them to achieve their ambitions. Another, much broader aspirational goal would be to make this information available to the general public as they seek to understand social problems and policies.

Create an independent, non-governmental resource to which policymakers can turn to have more personal and ongoing interactions. Policymakers can use this resource to obtain credible and objective information about existing or proposed programs and legislation. In contrast to the model used in the United Kingdom, where the government houses a central behavioral-science unit, we should consider an alternative approach: establishing an independent, external resource—like the Congressional Research Service or the Congressional Budget Office—to provide an expert, nonpartisan sounding board to which policymakers could turn for feedback about the likely social and behavioral consequences of current and proposed policies.

This is a subtle but important distinction from how things work currently. The proposed resource would not replace existing entities such as the National Research Council or other organizations that offer analysis of social challenges and policy options. Rather, we can augment and amplify the effects of such analyses by creating a way for policymakers to routinely gain assistance in thinking things through as a regular part of their own policy development processes.

As this entity responds to requests from legislative or administration policymakers, specific policy examples might be offered and specific findings of research would be presented, but the purpose of doing so would be to demonstrate how certain actions and consequences are related. This resource would not advocate for a specific policy for a specific problem.

Suppose, for example, that a policymaker is concerned about the consequences and costs of elevated high school drop-out rates. This resource would provide a venue for policymakers to learn about how social scientists have examined the issue, what attributes of the problem are most and least likely to be affected by various policy alternatives, and what mistakes or successes researchers have documented. If the resource could provide this type of information to policymakers in an accessible and actionable way, it could help them make more effective decisions.

Our purpose in this endeavor would be to transform how individual policymakers and their staffs understand and use directly relevant scientific methods, findings, and concepts in their thinking and actions. To make this project work, however, it is essential that we focus not just on how to educate policymakers about science but also to help social and behavioral science researchers better understand the situations that policymakers regularly face. This resource will be of value to policymakers only if researchers understand enough about policymakers’ needs to provide the kinds of information that policymakers can use.

Establish a series of presentations that are readily accessible to policymakers and staff on Capitol Hill and elsewhere in government. As the consensus panel does its work and identifies the key challenges and relevant insights and findings, we also need to take our findings and methods to where the audience is and show them what we know, how we know, and why it matters. The previously proposed journal, MOOCs, apps, and other mechanisms would all contribute to this, but we should also begin to have events on the Hill at convenient times, with food and other incentives to attract interested staff.

When I chaired the Energy and Environment Subcommittee of the House Science and Technology Committee, we initiated a series of what I called “Gee Whiz” presentations on the Hill. These were intended simply to highlight for staff and members of Congress the most interesting and exciting findings from Department of Energy scientists. The events were a huge success and drew increasingly large and very interested audiences.

There is no reason the social and behavioral sciences could not do the same. For these to be successful, what is presented at such events must be offered by compelling speakers—not merely people with impressive academic or research credentials—but strong, engaging communicators. These events must also address topics beyond esoteric subjects or psychological parlor tricks, and the presentations must not be laden with the usual “on the one hand, on the other hand, more research is needed, etc.” unless it is relevant, interesting, and meaningfully illuminates the topic. Topics and speakers must be tightly and strategically chosen and the information must be practical, have substantial magnitude of effect, and must speak to people on both sides of the aisle. And again, it must incorporate what we know about cognition, emotion, and behavior change.

For example, in the fall of 2013, the National Research Council organized an event in the U.S. Capitol that featured public benefits of social science. However, the event was not billed as such. Instead, we framed the proceeding as “How Social Science Saves the Government Money.” The event was framed this way to reflect the needs of the target audience—in this case, staff who could gain a type of knowledge that members of Congress could then use to benefit their constituents. The event featured leading scholars from several disciplines and former members of Congress from both major political parties. The presenters delivered sharp and cogent examples of how the social sciences transformed the provision of health services, enhanced the effectiveness of military strategies, and increased the efficiency of environmental programs. Instead of engaging an audience of congressional staffers in abstract conversations about science, the presenters highlighted how science could help them do their jobs more effectively. If done right, these presentations should become the kinds of events that members and key staff look forward to and make a point of attending because they value the intellectual stimulation and the practical policy implications.

Develop and implement a parallel media communications plan, based on social science research, to enhance public awareness of social science methods, findings, and impacts. In other words, social and behavioral scientists need to use what we know to communicate how we know and why it matters. If a behavioral and social science method or finding does or could change the world for the better, but no one who makes policy knows that, why would policymakers support the science that produced it to begin with?

In response to a general lack of awareness among policymakers of the many potential and actual contributions of the behavioral and social sciences and a devaluing of social science research, an independent funding source should develop a communications campaign directed toward increasing awareness, understanding, and support for the social and behavioral sciences. This campaign would incorporate principles and findings of the behavioral and social sciences to maximize effectiveness. The initial focus would be on policymakers inside the Washington Beltway, but consideration would be given to a broader public market so that average citizens will be better informed about the social and behavioral sciences and their value.

As one example of how such a campaign might be developed using behavioral science principles, the “Trans-theoretical” or “Stages of Change” model suggests that there may be merit to an initial messaging strategy designed to move people who may be at the pre-contemplation level to contemplation of the methods and benefits of the social and behavioral sciences. Based on this model, several striking examples of proven applications could be highlighted, with the initial focus not on the specific findings, but on the methodologies and disciplines that produced them.

For instance, a media campaign might use the following: “What is the best treatment for PTSD and how effective is it? How do we know?”

Another message, perhaps at one of the Metro stations leading to National Airport, might include an image of the 1982 jet crash in the Potomac with a caption: “This tragedy has not been repeated, and your air travel is much safer today because of fundamental research. What changed?”

As a third example, “With no change to the tax code and no new government expenditures or mandates, millions of Americans are saving billions of dollars more in their retirement accounts. Who figured out how to do that?”

Ideally, these or other messages should be tested empirically and compared with other messages and media. If they are shown to be effective, they would be deployed strategically through media and locations identified by research and with expertise and evidence from communications firms.

As part of a comprehensive strategy, these messages could be used to drive interest to further information, or they might be used as part of a series in which the first messages move from pre-contemplation to contemplation, with subsequent messages moving through other stages of change toward the desired end of greater awareness and support for social science research. The goal would be to craft messages that reach out to different audiences in different ways so that each can, in its own way, recognize that the social and behavioral sciences can help contribute to better outcomes, financial savings, and more effective and efficient policy.

Our disciplines have established a body of methods, findings, and knowledge that is directly applicable to a host of public policy areas. The task before us now is to turn that “applicable” into “applied” in ways that benefit our society and demonstrate the value of our disciplines to policymakers. Whereas this article has suggested several ways to go about that endeavor, there are undoubtedly many other possibilities. What matters most is that we consider a number of options and then put in place a strategic plan to implement the initiatives that seem most promising.

Oh, the Humanities!

One of the side effects of the financial implosion of 2008 has been an explosion of books bemoaning the demise of the American model of liberal arts education. Given that a college degree is the sine qua non for membership in the national “elite,” it should not surprise us that the economic shock has provoked a reexamination of the institution that seems to provide the most reliable intellectual and social capital for succeeding in tumultuous times. Previous surges of concern about the mission and structure of undergraduate education have tracked with similar periods of disruption and fluidity: rapid industrialization and immigration after the Civil War, the Great Depression of the 1930s, and the vast expansion of the middle class after the Second World War segueing into the schisms of the 1960s. Analysis of the critical gateway to the leadership class—who attends, who teaches, who pays—is a way of charting the winners and losers in the scrum for wealth and influence.

Large economic and demographic shifts not only disrupt the distribution of wealth and power, they also force a recalibration of social values pertaining to things like mobility versus stability, material acquisition versus frugality, even grace and humility versus striving and self-promotion. Most writers on higher education believe that college is the proper moment to push youth towards a consideration of such big questions of value, meaning, and purpose; they further view the curriculum as providing the tools for meaningful contemplation of a life well lived. High enrollments in economics classes, but low in literature, lots of computer science but no anthropology: in uncertain times, books on higher education scrutinize these trends like tea leaves in the hopes of understanding the kind of society we are becoming.


Our twenty-first century institutions of higher education must contend with spiraling costs, increasing class size, competition from massive open online courses (MOOCs), and the desertion of humanities majors in favor of “pragmatic” subjects such as business, statistics, or economics. Those are simply the intra muros problems; outside the walls are toxic political partisanship, a severe contraction of support for public goods (including education), and the return to a level of wealth inequity not seen for a century. Not surprisingly, then, the current analyses cover, to a greater or lesser extent, the interconnections between liberal arts education, the preservation of a healthy democratic process, and the restoration of a just and equitable society. The perceived importance of these concerns is clear from the big guns crowding the field: representative titles from the last six years include Higher Education in America (Derek Bok, former president of Harvard); Beyond the University: Why Liberal Education Matters (Michael Roth, president of Wesleyan); Higher Education in the Digital Age (William Bowen, former president of Princeton); Higher Education? How Colleges Are Wasting Our Money and Failing Our Kids (Andrew Hacker and Claudia Dreifus); Unmaking the Public University: The Forty-Year Assault on the Middle Class (Christopher Newfield); Not for Profit: Why Democracy Needs the Humanities (Martha Nussbaum); The Marketplace of Ideas: Reform and Resistance in the American University (Louis Menand); and numerous essays and reviews by the eminent historian Anthony Grafton.

Simply put, a liberal arts education (or else the study of the humanities: not the same thing, though often treated as if it were) is seen by most of these writers as an essential training ground in democracy. (Yet most of the books do not address the demise of civics instruction in elementary and secondary schools, which seems to me a more important question.) The exact ingredients of the secret sauce vary by author, but they generally agree that small classroom discussion, under the guidance of an engaged instructor, creates the set of mental habits we call civic virtue: a spirit of inquiry and critical thinking, altruism, and service. Practices of good citizenship are reinforced through interaction with students from diverse racial, economic, and cultural backgrounds, who gather as equals within the classroom by virtue of need-blind, merit-based admissions. Equally important is individual self-knowledge, born of quiet, contemplative learning, which is (turning outward again) a critical component of empathy. These ideals are under assault by economic forces. The commodification of the educational experience means more money for facilities and less for teaching. The drive for profit and efficiency means larger class sizes, contingent labor too ill paid to provide mentoring, and the unholy combination of virtual classrooms and robo-grading. A weak economy has strengthened the instrumental notion of higher education (finding a good job) at the expense of the moral or spiritual one (becoming a wise/good/happy person). Finally, the declining fortunes of the middle class (not to mention rising poverty) and the scarcity of affordable loans means we are on the way to a two-track system of higher education: the top quintile mingle only with each other in the best schools; the rest are relegated to poorly funded institutions of mass education, assuming they pursue higher education at all.

Andrew Delbanco and William Deresiewicz stake out the same corner of this gloomy landscape: the changing nature of undergraduate education at “elite” institutions. The authors know whereof they speak: both were educated entirely in the Ivy League, Delbanco at Harvard and Deresiewicz at Columbia. Delbanco is a leading expert on Melville with tenure at Columbia, one of the few remaining schools that insist on familiarity with the Western canon as the basis of cultural literacy. Deresiewicz was an Austen scholar at Yale until leaving the academy after being denied tenure. His subsequent career as a pundit began with a shot across the bow of the ship that cast him adrift, in the form of a 2008 essay, “The Disadvantages of an Elite Education.” Sheep is that essay padded out to book length. (When an advance excerpt of the book appeared last summer, it became the most forwarded article ever published by The New Republic.) Since they are writing about institutions in relatively robust financial health, many of the aforementioned economic stressors are not discussed. But the issues that concern the authors most—the looming triumph of the instrumental notion of education over the moral and spiritual one, and the weakening diversity of the student body—derive from, and in turn affect, political and economic policy. And so the two English professors venture into social commentary, asking important questions about the complexion of the elite undergraduates who will presumably become our future leaders. Does the way we teach them lead to contemplation, empathy, and respect for those less fortunate than themselves? Or are we breeding entitled, driven self-promoters who will only reinforce barriers to economic mobility in order to preserve their own status? The authors hold out some slender hope that a more just and equitable society can be restored, if we can return humanistic education to its former place of grace.

For Delbanco, the etiology of the current crisis is complex, but generally stems from the gradual supersession of the traditional college, with its mission of mentoring students towards ethical adulthood, by the modern university, with its emphases on research, publication, and technology transfer. This death spiral is in turn the product of very broad secular trends such as industrialization, specialization of knowledge, and the growth of the sponsored research complex. Deresiewicz’s root of evil is much narrower and more recent: over the course of the last few decades, the admissions process for elite schools has grown so absurdly competitive and unforgiving that today’s elite college students are risk-averse, conformist, and permanently scarred by the “toxic levels of fear, stress, and anxiety” that dominated their childhoods. The admissions process has created an elite class of damaged souls incapable of growing into empathic maturity. Midway through Sheep, Deresiewicz discusses his own unhappy childhood in a family of overachievers, and we begin to realize that there is a much better book—a memoir—buried inside the existing one. But his personal trauma does not excuse the extravagantly nasty tone (students are described as “too stupid” or “entitled little shits”) and gratuitous allegations of conspiracy (upper middle class parents teach their children contempt for the disadvantaged in order to preserve their own class advantage). Other reviewers have rightfully slammed Sheep for these things, and rightfully praised College for its nuanced and deeply sourced approach. Virtually all of my misgivings in the following paragraphs apply much more strongly to Sheep than to College; the flaws of the latter are more forgivable in an overall context of goodwill and careful consideration.

The language used by the authors to describe the value of the humanities is quasi-religious, and consciously so. Deresiewicz talks a great deal about college as the key moment for building a “soul;” Delbanco cites the tradition of educating the “whole man” (a term closely associated with his alma mater Harvard). Both fully subscribe to the idea that, in our contemporary secular society, the innate human need to contemplate big questions of meaning and purpose is now fulfilled by the humanities. Yet their assumption that we live in a secular society is parochial. How could it be that neither author considers the resurgence of overt religiosity in American society since the Reagan era? Given that our late twentieth-century religious revival tracks neatly with the worrisome decline in humanities enrollments, might the authors have considered a relation between the two trends? Is it not possible that the need for contemplation of meaning—which elites can pursue in expensive universities—has once again become the province of the churches, for at least a considerable portion of the population? What does it say about the authors’ true commitment to empathy and diversity that they ignore this major social trend?

The authors’ shared background in literature also places them in another echo chamber: of the current crop of books on higher education, the great majority are written by humanists. At best, one gets an occasional outlier from law (Bok), or sociology (Hacker); not a single one was written by a natural or quantitative scientist. Indeed, why would one write such a book, when the sciences are widely perceived as beneficiaries (or at worst unscathed bystanders) of current trends? It is the humanities in crisis, and the humanists framing the issue. This may be natural, but it is also a problem.

Two flawed arguments at the core of these books reflect the authors’ disciplinary limitations. The first, about the necessity of the humanities in producing the type of empathic wise leaders critical to a healthy society, implies that the pursuit of the other subjects cannot fulfill this function. The authors believe that a student’s experience of guided yet open-ended discussion of big life questions—ethics, purpose, meaning, and so forth—is a critical rehearsal for wise leadership. This is not incorrect, but it assumes that the production of good citizen-leaders is due to the particular content, rather than the more general process of learning. Is it not possible that collaboration on a science project provides equally valuable practice in give-and-take? Or that passionate inquiry of any sort leads us to a satisfied sense of purpose, which, in turn, makes us kind and empathetic? Furthermore, the idea of a bright line between modes of knowing—with certainty and measurement on one side, and intuition and creativity on the other—ought by now to be obsolete, if not banished. Think of the psychologist Daniel Kahneman, who balances his experimental observation of decision making with a profoundly sympathetic contemplation of our human need for meaning. Or perhaps the work of the French economist Thomas Piketty, second to none in the marshaling of economic data, yet tempered by anecdotal gems from Balzac or Austen, and suffused with compassion and wisdom about human behavior. Finally, the idea of the unique civilizing function of the humanities is an uncritical rehash of the famous “two cultures” debate that roiled the chattering classes in postwar Britain. That discussion was nominally about the changing fortunes of various fields of scholarship and the value of different modes of knowing. But as was well understood at the time, it was also an expression of the power struggle over the kingdom’s future leadership: “gentlemen” educated predominantly in the humanities, versus those educated in engineering or “civic science” in the burgeoning twentieth-century “redbrick” and “plate glass” universities. The idea that the humanities have an irreplaceable character-building function comes with the baggage of retrograde class snobbery; at the very least the baggage should be unpacked.

Shopworn class myths imported from across the Atlantic are also the source of the second problematic thesis at the core of both books: that our hollow rhetoric of meritocracy perversely obstructs rather than supports class mobility. When Delbanco and Deresiewicz describe the barriers faced by lower-income students in pursuit of higher education, they do not simply mean the pincers of spiraling costs and inadequate financial support, nor the very high costs of “enriching” extracurricular activities that have become a prerequisite for admission to elite colleges. In their indictment, the implementation of “merit-based” admissions has turned elite schools into leading propagandists for a pernicious ideology of entitlement. In a kinder, more genteel age, our country was led by the WASP elite, who, whatever their faults, were at least imbued with a sense of obligation to those less fortunate. They recognized that their advantages came to them as a lucky accident of birth, rather than through innate superiority, and thus it behooved them to “give back” to the nation through public service. And, in the middle of the twentieth century, the WASP elite had the good of the country at heart when they opened the gates of admission to talented but non-elite students (chiefly the children of Jewish immigrants). In contrast, argue Delbanco and Deresiewicz, today’s elites believe that they have achieved their advantages by dint of their own hard work and superior abilities. As a corollary, today’s disadvantaged are held back not by structural barriers or ill luck, but by their own lack of ability or drive. Thus societies with a dominant ideology of meritocracy are paradoxically much harder on the lower strata, who must bear the resentment of their betters, and the shame of their own “failure.” Elite schools are the high temple of this ideology, and unless our future leaders are taught differently, our hopes for a more just and equitable society are doomed.

To be clear: I share the authors’ revulsion for unthinking entitlement, and our meritocracy is clearly not working as well as it might. But to place the blame on the schools rather than on, for example, the dynamics and ideology of capitalism, is like the blind humanist mistaking the leg of the elephant for the whole. Furthermore, the authors’ nostalgia for the noblesse oblige of the old WASP elite is a dubious notion, based on narrow reading in the same few sources (indeed so similar is their use of sources, down to identical quotes, that “echo chamber” hardly begins to cover it). These are Owen Johnson’s 1912 novel Stover at Yale; E. Digby Baltzell’s sociological treatise The Protestant Establishment (1964); and the 1958 novel credited with coining the word in question, The Rise of the Meritocracy, 1870-2033. This last was a dystopian vision of future British society oppressed by an arrogant, hyper-rational caste of test-selected bureaucrats; eventually the lower classes, in coalition with the leaders’ more compassionate wives, revolt against the elites. The author, Michael Young, was an academic-cum-public servant involved in the same postwar British debates on higher education (and not coincidentally from the same lower-middle class background) as C.P. Snow and the popular historian J.H. Plumb. The story about meritocracy leading to a society of selfishness and entitlement turns out to be just that, a story—an artifact from another time and place that we should scrutinize rather than simply cite as evidence. At this point, even the most ardent defender of the humanistic values of imagination, intuition, and elegant writing might be forgiven for wanting some cold rational data.

The Battle for the Soul of Conservation Science

Annual scientific gatherings can be sleepy affairs, with their succession of jargon-laden PowerPoint presentations. But there was a nervous buzz at the start of the 2014 conference of the Western Society of Naturalists in mid-November, in Tacoma, Washington. The first morning would feature two titans of ecology squaring off over the future of conservation.

It wasn’t billed that way, and neither man wanted to cross swords in a public forum. But the expectant crowd knew that Peter Kareiva, the chief scientist for The Nature Conservancy (TNC), and Michael Soulé, a founding father of conservation biology, had become unlikely adversaries in the past few years.

Their fight, which has divided the ecological community, centers on whether conservation should be for nature’s sake or equally for human benefit. Strong voices in both camps have joined the fray and triggered a war of words in journals and on op-ed pages. Some of it has turned ugly. A week before Soulé and Kareiva would face off in front of 600 young ecologists (many of them still in college) at the Tacoma conference, an article calling for unity was published in the journal Nature.

“Unfortunately, what began as a healthy debate, has, in our opinion, descended into vitriolic, personal battles in universities, academic conferences, research stations, conservation organizations, and even the media,” the piece lamented. “We believe that this situation is stifling productive discourse, inhibiting funding and halting progress.” The commentary was authored by Heather Tallis, lead scientist for The Nature Conservancy, and Jane Lubchenco, a distinguished marine ecologist and former head of the National Oceanic and Atmospheric Administration during the first term of the Obama presidency. More than 200 environmental scientists added their names as signatories.

Soulé and Kareiva did not fan the flames of this acrimonious debate during their appearances in Tacoma. They skirted the fault lines that were shaking the foundations of their field. But Kareiva at one point alluded to the controversy. “There’s a dialogue going on now,” he said, vaguely. It is about “how useful our science is and what we’ve been doing.”

Actually, that’s the dialogue Kareiva wants to have. He wants the discussion to be about how nature is getting reshuffled in our human-dominated era (what some refer to as the Anthropocene) with its global transformation of landscapes, oceans, and the climate, and how this requires new conservation tools and approaches. The old ways of protecting nature, which many of his colleagues still swear by, aim to keep nature separate from humans. This is misguided, Kareiva has argued, and also untenable on a planet of seven billion people. He challenged the audience of young ecologists to think outside the box of traditional conservation.

Soulé, however, wanted to keep them focused on a familiar model. “Ecologists like national parks because it’s the only place where large predators survive, and only where large predators survive is where biological diversity is rich,” he said.

If this is true (there is considerable disagreement on that assertion), then what of all the nature that exists outside the boundaries of a remote national park, protected wilderness area, or wildlife refuge? What of the nature in suburban backyards, urban green spaces, farms, and ranches? Is that less desirable and less meaningful to ecologists? Kareiva doesn’t think so, but Soulé’s preferred model—the dominant model in conservation—has boxed in ecologists. It has narrowed how they view nature and it has narrowed their options for protecting it.

These are issues that the future ecologists at the Tacoma conference were already wrestling with. They knew their field was at a crossroads. Their leaders were wrangling over how best to preserve the last vestiges of the natural world on a domesticated planet. The future of conservation was up for grabs. Some of the key visionaries were on the stage, in the form of Kareiva and Soulé. But what future were they pointing to?

Conflicting science, conflicting values

Three decades ago, Michael Soulé was at the forefront of a battle to save nature from humanity. He and other ecologists had begun to articulate the concept of biodiversity as a focal point in conservation. In 1985, Soulé published a seminal essay, called “What is Conservation Biology?” The article helped define the then-emerging field of ecological research and application. It was an ethically imbued science with an underlying precept: plants, animals, and ecosystems had intrinsic value. This biocentric ethic called for nature to be protected from human activities, which, as Soulé wrote, had unleashed a “frenzy of environmental destruction” that “threatened to eliminate millions of species in our lifetime.”

In the mid-1980s, as Soulé began laying the groundwork for a new professional organization—the Society for Conservation Biology—Peter Kareiva was immersed in fieldwork studying the dynamics of predator-prey insect populations. Kareiva had just joined the zoology department at the University of Washington and had started trekking out to Mount St. Helens, five years after its volcano erupted. He watched new ecological life slowly emerge on the denuded, lava-scorched landscape. This frontline view planted a nagging thought in Kareiva’s mind: perhaps nature, which green rhetoric often depicts as fragile, was more resilient than he and his colleagues realized.

Other environmental matters beckoned, however. Many of Kareiva’s fellow ecologists felt that nature was under siege—from unchecked mining, logging, fishing, and the whole sprawling footprint of human development—and they joined the fight to preserve biodiversity. Kareiva, too, was soon drawn to the battlefront. In the early 1990s, he testified as a lead witness for environmentalists who sued to curtail logging in large swaths of old growth forest in the Pacific Northwest. The media dubbed it the Spotted Owl War, because greens used the bird—and its nesting habitat—as a symbolic and legal lever. Kareiva’s testimony in the case helped protect the spotted owl from human encroachment—just as conservation biology’s ethic of intrinsic value had called for—but he was discomfited in the Seattle courtroom by all the loggers sitting quietly in the rear, many with their kids on their laps. The fathers held placards that read: “You care more about owls than my children.” That sight stayed with him.

Over the next two decades, Soulé and Kareiva—who has been TNC’s chief scientist since 2002—would be occupied by the same concern: the erosion of functional ecosystems that support a diverse array of species.

Yet their journeys as defenders of nature have led them down different paths. At the outset of his talk during the Tacoma conference, Soulé, now in his mid-70s, seemed perplexed by this turn. “That’s the irony of this particular discussion,” he said. “We all want the same thing. We want a good life, we want to be happy, and we want to protect biodiversity.”

The problem, he went on to suggest, is that everyone, the rest of the world included, wants that good life. “The more people there are, the wealthier they are, the more they consume and pollute, the less opportunity there is going to be for other life forms on the planet,” he said.

Soulé, it’s worth noting, got his Ph.D. at Stanford in the 1960s, where he studied population biology under Paul Ehrlich, an influential early voice in the contemporary environmental movement. Ehrlich’s best-selling 1968 book, The Population Bomb, prophesied global eco-doom if the world’s population was not significantly reduced. Concerns about overpopulation framed the green discourse for a generation. When Soulé laid out his manifesto for conservation biologists in the 1980s, he portrayed humanity as the wrecking ball laying waste to earth—and what was left of wild nature.

He still feels that way. “This is not a great time to be a conservationist,” he said glumly to the future ecologists assembled in Tacoma.

Kareiva is neither pessimistic nor sunny about the state of the world. To him, it just is what it is. He doesn’t downplay threats to biodiversity, but he is tired of the unceasing gloom-and-doom narrative that environmentalism has advanced for the past quarter century.

He also believes that the eco-apocalyptic mindset has infected the field of conservation biology with an unhealthy bias. Sometimes, he says, science paints a different picture than that which conservation biologists want the public to see. “I have been an editor of major journals for thirty years, handling papers on migratory bird declines, salmon, marine fisheries, extinction crises, and so on,” he told me. “An article that confirms doom is never critiqued. Any article that reports things are not so bad gets hammered. It is very discouraging to me.”

He recalls one particular episode regarding a paper published twenty years ago in the journal Ecology. Its finding contradicted widely held assumptions that neotropical warblers were declining. “It was reviewed unprofessionally and viciously because folks worried it would undermine efforts to reduce tropical deforestation. I have seen this over and over again.” The conservation community, he says, “is plagued with an astonishing confirmation bias that does not allow questioning of anything.”

The field’s premier journal, Conservation Biology, was rocked in 2012 by similar charges of politicized interference when its editor was fired after she had tried “removing advocacy statements from research papers,” as an article in Science reported.

It was around this time that Kareiva and some of his colleagues began calling for new approaches to conservation. In an essay published in BioScience, he and Michelle Marvier, an ecologist at Santa Clara University, wrote: “Forward-looking conservation protects natural habitats where people live and extract resources and works with corporations to find mixes of economic and conservation activities that blend development with a concern for nature.”

Leading figures in the ecological community were aghast. The essay explicitly challenged Soulé’s founding precepts for conservation biology, which established the field as a distinctly nature-centric enterprise. It was not intended to accommodate human needs or corporate interests. In a rebuttal published in Conservation Biology, Soulé characterized Kareiva and Marvier’s view as “a radical departure from conservation.” We humans, he wrote, “already control more than our fair share of earth’s resources…. The new conservation, if implemented, would hasten ecological collapse globally, eradicating thousands of kinds of plants and animals.”

Kareiva is a lightning rod for criticism because of his high-profile position at The Nature Conservancy, which is the largest and richest environmental organization in the world. He is also outspoken. In one public talk, he marveled at nature’s ability to rebound from industrial disasters, such as oil spills. He wasn’t condoning such actions; he just thinks that in some cases his peers conveniently overlook an ecosystem’s resilience because it contradicts the fragile-nature narrative that has shaped environmental discourse and politics. Additionally, Kareiva has come to believe it is better to work with industry than against it—so as to influence its practices. (This is what TNC has done of late, in partnering with Dow Chemical and other companies on environmental restoration projects.) “Conservation is not going to succeed until we make business our friend,” he has said.

The more Kareiva talks like this, the angrier he makes some of his esteemed peers. They have already been on the warpath. In 2013, Soulé, along with Harvard biologist E. O. Wilson and others, sent a letter to TNC President Mark Tercek, complaining about Kareiva. They slammed his views as “wrongheaded, counterproductive, and ethically dubious.”

The onslaught has not let up. Last year, an article in the journal Biological Conservation by Duke University ecologist Stuart Pimm likened Kareiva to a prostitute doing the bidding of industry.

The recent commentary in Nature, with its 200-plus signatories from the ecological community, sought to cool passions and tamp down the debate’s derogatory tone. The authors pleaded for “a unified and diverse conservation ethic,” one that accepts all philosophies justifying nature protection, including those based on moral, aesthetic, and economic considerations. They asked for ecologists to look back to the historic roots of conservation for guidance.

The roots of biodiversity protection

In the early 1900s, when President Theodore Roosevelt was establishing national parks and wildlife refuges, ecology had not yet become a formalized science. People viewed the natural world from a largely aesthetic or utilitarian perspective.

John Muir, the Sierra Club founder who famously went camping with Roosevelt in California’s Yosemite National Park, worshipped nature. It was his church. “The clearest way into the Universe is through a forest wilderness,” he wrote in his journals. Roosevelt, an avid outdoorsman, venerated nature, too. But he also viewed it as a valuable “natural resource”—trees for timber, rivers for fishing, wildlife for hunting.

These two worldviews—valuing nature for itself and for human purposes—have long framed dual approaches to conservation.

By the 1930s, the chasm between the intrinsic and utilitarian perspectives was bridged by the forester Aldo Leopold. He advanced a more holistic perspective of the natural world, and believed that anyone who valued nature, irrespective of motive, should hold an ethic that “reflects an ecological conscience.” This was morally inscribed in his famous “land ethic,” which, for many, became a guiding maxim: “A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise.”

Two parallel developments at this time—one in the emerging science of ecology and the other in the U.S. wilderness preservation movement—combined with Leopold’s philosophy to shape attitudes toward nature and conservation for decades to come. Ecologists believed then that healthy ecosystems were closed, self-regulating, and in equilibrium. Disturbances, in the form of weather, fires, or migrating organisms, were not yet factored in, except when the disturbance was thought to be human-induced, in which case the prevailing belief was that the system was thrown off its normal balance.

This model of stable ecosystems that needed to be guarded against human disturbance (such logic, of course, meant that humans must exist outside nature) gave scientific impetus to the cause of wilderness preservation.

Most ecologists have since discarded the “balance of nature” paradigm. But as the environmental writer Emma Marris noted in her recent book Rambunctious Garden, “The notion of a stable, pristine wilderness as the ideal for every landscape is woven into the culture of ecology and conservation—especially in the United States.”

In a paper he is readying for publication, Kareiva writes that the balance-of-nature paradigm has been “at the core of most science-driven environmental policy for decades.” But the paradigm goes deeper than just the science. American attitudes towards nature have been strongly influenced by iconic authors, from Thoreau and Muir to Leopold and Edward Abbey, the grizzled nature writer whose books celebrated the stark beauty and loneliness of Southwestern desert landscapes. Many people looking to commune with nature go in search of transcendent outdoor experiences; they venture into a human-free landscape—the wilderness—to experience what seems to be nature in its truest, purest state.

This mindset took on added ecological value when concerns about endangered species came to the fore in the 1960s and 1970s. Designated wilderness and national parks—be they forests, prairies, or wetlands—helped preserve habitat for imperiled species. The sanctuary model extended itself further when conservation biologists in the 1980s began identifying the significance of ecological processes and a wider community of plants and animals. This new strand of ecology-based conservation had one key tenet: genuine nature, the kind that contains biodiversity, is devoid of people.

These Western-style ideas of ecological conservation were exported by ecologists, environmentalists, and policymakers who pushed for the establishment of national parks and nature preserves in Africa, Asia, and Latin America. It was the wilderness model of nature protection gone global. Yet numerous studies have shown that even as more parcels of land have been set aside around the world (equaling 10 to 15 percent of the earth’s land mass), global biodiversity in the protected areas continues to decline. How could that be?

In his 2009 book, Conservation Refugees, the investigative journalist Mark Dowie, who had been covering environmental issues for decades, reported: “About half the land selected for protection by the global conservation establishment over the past century was either occupied or regularly used by indigenous peoples.” Much as the loggers of the Pacific Northwest depended on the forests for their livelihoods, so had these local inhabitants depended on the now-protected lands to forage, hunt, or graze their livestock. The people were part of the ecosystem. Removing them had consequences.

In 2013, the International Journal of Biodiversity published a meta-review of national park case studies from Africa. It found that the creation of protected areas in African countries has resulted in the killing of wildlife “by local people as a way of protesting the approach.” There are other factors that have undermined the effectiveness of national parks in the developing world for protecting biodiversity, such as regional climate change and insufficient funding for oversight. But it is the “fortress conservation” aspect that has turned many people who had been living with nature into enemies of nature. As Dowie noted in his book, “some conservationists have learned from experience that national parks and protected areas surrounded by angry, hungry people…are generally doomed to fail.”

Embracing the Anthropocene

Last spring, Kareiva emailed me an intriguing paper that had just been published in Science. Researchers had sought to quantify the decline of species diversity in 100 localized ecological communities across the world. Globally, there was no question, as the authors were careful to point out, that biodiversity was being lost. They had thus assumed that the global trend would be mirrored at the local level. “Contrary to our expectations, we did not detect systematic [diversity] loss,” the scientists wrote. What they found, instead, was ample evidence of ecological change that altered the composition of species, but not overall richness or diversity.

It’s the kind of result that many conservation biologists would probably find maddening. Kareiva, though, was fascinated by the implication. “Think about it,” he said. “If you live to be 50, one out of two species you saw in your back woodlot will have been swapped out for a different species—but the number of species would not have declined.”

This, he believes, is the flip side of the Anthropocene that ecologists need to consider. Most talk about the future morosely; they expect a huge chunk of the Earth’s biological heritage to disappear, which may well turn out to happen, and on the scale of our own lives may feel to us like a terrible loss. But that’s only part of the story, Kareiva says, the one that everybody dwells on. Rather, he wonders, what if we thought of the Anthropocene “as a creative event? What would emerge from it?” This is a striking departure from the conventional view of the Anthropocene as an eco-catastrophe, a kind of mass extinction event. Kareiva is not wishing for or welcoming such an outcome, but he does note: “Every other mass extinction led to a burst of profound evolution afterwards.”

This is a provocative, unsettling perspective. But Kareiva is not the only scientist thinking this way. In a 2013 commentary for Nature, Chris Thomas, a conservation biologist at Britain’s University of York, discussed the Anthropocene as a potential boon for biodiversity. “Populations and species have begun to evolve, diverge, hybridize, and even speciate in new man-made surroundings,” he wrote. “Evolutionary divergence will eventually generate large numbers of sister species on the continents and islands to which single species have been introduced.”

Other scientists and writers, including Emma Marris, have been talking enthusiastically about the creation of “novel” ecosystems in the Anthropocene. This view involves the acceptance of some invasive species as beneficial to biodiversity. It also involves an active human hand in molding ecological communities. At the Tacoma conference, Kareiva told the ecology students to think about their possible role in terms of “promoting the creativity of nature.” Where Soulé gushed about “love for nature” as his core value, Kareiva talked about a “sense of wonder” as his inspiration.

For sure, Kareiva acknowledged, the future was going to be tumultuous, especially with climate change bearing down on the world. Conservation in the Anthropocene would be challenging. “We may have to move species around, work with novel ecosystems and take some delight” in new hybrid species, he said to the young ecologists.

This is a bitter pill to swallow for Soulé and his generation of traditional conservationists. Near the end of his talk, he admitted how hard it was for people—even scientists—to accept new ways of thinking. “Science is always moving ahead, science progresses,” he said. “But that doesn’t mean scientists do. Scientists like me have an idea when they are about 20 to 25 years old, and that idea dominates the rest of their life and they never change their minds.” This was an indirect way of acknowledging that science and personal beliefs are intertwined. Forty years ago, the culture of conservation—and the science that supported it—was decidedly eco-centric, a worldview deeply influenced by green politics and philosophy. Now that there’s some kind of shift underway in the values and in the science, Soulé finds himself clinging to the world that shaped him.

But what he said next to the young ecologists in the audience indicated that he knew change was coming, and could accept what such evolution might bring: “Fortunately, natural selection abides in the wild—and in universities, so they are constantly bringing in younger, more mentally flexible scientists and that’s what I hope many of you become.”

Nuclear Power for the Developing World

Small modular reactors may be attractive to many developing nations. Here is a blueprint for how to build them efficiently and ensure maximum safety.

In the United States and much of the developed world, nuclear power raises deep misgivings among many decisionmakers and ordinary people. Concerns about safety have been rekindled by the Fukushima Daiichi nuclear disaster in Japan. There are also long-standing worries over proliferation and spent fuel management. And the technology has proven expensive: its high capital costs, combined with restructured electricity markets that place heavy emphasis on short-term economic gains, cheap natural gas in the United States, and the absence of a serious commitment to greenhouse gas emissions reduction, make nuclear power uncompetitive in many markets. The four new reactors being built in the United States today are in states that have vertically integrated power companies, where public utility commissions can approve the addition of the cost to the rate base.

But nuclear power is not dead. Seventy nuclear reactors are under construction worldwide. Twenty-seven of those are in China, ten are in Russia, and six are in India. With few exceptions, these new reactors are of the large light water type that dominates today’s commercial fleet, producing roughly 75% of the electricity in France, 20% in the United States, 18% in the United Kingdom, and 17% in Germany.

The same holds true when it comes to the development of new reactor designs. Some limited work continues in the United States, but efforts by its Department of Energy to rekindle interest among commercial players have seen limited success. Germany, once a leader in advanced reactor designs, closed its reactor development laboratories some years ago, ending all such development work; its remaining labs focus only on reactor safety for select advanced designs. However, China, India, Korea, and Russia continue to support vigorous development and demonstration programs.

As developed countries come to appreciate the magnitude of the effort needed to fully wean their energy systems off of carbon-emitting energy sources, there is a possibility that they will see a resurgence of support for nuclear power—presumably using safer and lower-cost technologies. In the meantime, the rest of the world will continue its present building boom and push on with the development of new designs.

Thinking small

Many proponents of nuclear power believe that the technology’s problems can be solved through innovation. Some have held up a vision of small modular reactors (SMRs), capable of producing 5 to 300 megawatts of electricity, that would be manufactured on a factory production line and then shipped to the field as complete modules to be installed on pre-prepared sites. Proponents argue that factory manufacturing would not just reduce costs; it could also result in dramatic improvements in quality and reliability. Moreover, if these SMRs could then be returned—still fully fueled—to secure facilities at the end of their core life, the risk of proliferation could be better managed.

It is a lovely vision, but its realization lies decades in the future, if it is even possible. Estimates of the capital cost per megawatt of first-generation light water SMRs lie a factor of two or three above that of conventional reactors. Of course, since SMRs would be much smaller, the total cost would be much lower; hence, choosing an SMR would not be a “bet the company” decision. But few firms in the developed world are likely to be interested, absent a significant price on carbon emissions, or perhaps a new business model that incorporates other uses for a small-scale reactor (such as water desalination or hydrogen production) in tandem with electrical generation.

The same may not be the case across the developing world. If the cost of more advanced small modular reactor designs can be brought down, even to the range of conventional reactors, many nations may find SMRs an attractive way to meet their growing demands for electricity or process heat, and may find the smaller size more compatible with their smaller, less-developed electricity grids.

While the vendors involved in nuclear technology are responsible for innovating on the construction front to bring down SMR cost and construction duration, vendors and regulators share the burden of innovating on both the deployment and institutional fronts. A number of SMR mass deployment strategies have been proposed, ranging from business-as-usual to a build-own-operate-return (BOOR) strategy. Under business-as-usual, countries that choose to host SMRs would assume all responsibility for safety and the security of nuclear materials. Under a BOOR strategy, nuclear suppliers—perhaps backed by sovereign states and accredited through an internationally sanctioned framework—would provide, operate, and take custody of SMRs, thus assuming responsibility for the plant and all parts of the fuel cycle.

When questioned, even proponents of the BOOR strategy admit that, ultimately, nations that choose to deploy nuclear power plants must accept at least some of the responsibility associated with the technology. However, the strategy may be a way of reducing these responsibilities for customers who want clean energy, but cannot afford to fully build the technical and social institutions needed to responsibly manage nuclear power.

Regardless of deployment strategy, the institutional paradigm must change in a world with many SMRs. Host nations in the developing world could help, but delivering this change would mainly be the responsibility of national policymakers in nuclear supplier states, primarily China, France, Korea, Russia, and the United States, working within the framework of the international nuclear control regime. If coming decades do see a growth in SMRs across the developing world, three issues become critical: emergency response, liability, and proliferation.

Emergency response. Both light water SMRs and more advanced ones adopt a range of passive safety features. These are intended to reduce the probability of a major accident and, if abnormal conditions do develop, to increase the “coping time” available to operators to address the problem. Some designs eliminate on-site fuel handling; others rely on air-cooling instead of water-cooling, which reduces the need for elaborate plumbing and emergency power to cool the core after an accident. Some designs propose a fleet management approach where, as with many aircraft jet engines today, the reactor’s supplier can see everything an on-site control room operator sees. In an emergency, the supplier could provide advice to local operators, or even override local operators and take control. Nevertheless, the core of any SMR will contain highly hazardous materials. However remote the possibility, a major disaster could result in the release of significant quantities of these materials to the environment.

Few developing countries have, or are able to develop, the capacity to respond appropriately to a major accident. While commercial suppliers might adopt a BOOR approach, it seems most unlikely that they would include full-scale emergency response as part of the package. Suppliers backed by a capable sovereign nation, such as China or Russia, might supply a more credible capacity, but this does not solve the more general problem.

Liability. No global third-party nuclear liability regime exists. There are multiple conventions that states subscribe to, but given that some subscribe to none, substantial gaps exist in the current international framework. More than half of the world’s commercial nuclear fleet is not covered by any liability regime currently in effect; these reactors are in large countries such as Canada, China, and India that acknowledge that liability ultimately rests with the sovereign. Efforts to develop a global liability regime, or to ensure that all reactors are covered by the arrangements that currently exist, must be accelerated. That said, if SMRs are to see mass deployment, alternative arrangements must be made for those smaller nations that cannot afford the liability caps that existing conventions prescribe.

The main conventions at the moment are the Paris Convention, enacted in 1960, and the Vienna Convention, enacted in 1963. The Paris Convention, as updated under the Brussels Supplementary Convention of 1963, stipulates a liability amount of approximately $450 million. The Vienna Convention, as updated in 1997, specifies a liability limit of approximately $450 million. (The actual amounts under both conventions vary based on changes in currency valuations; the figures given reflect valuations as of mid-October 2014.) More recently, some efforts have been made to increase the liability amounts in acknowledgment of the potentially devastating effects of nuclear accidents. A revision to the Paris Convention was proposed in 2004 that would raise the liability amount to approximately $900 million (at current currency conversion rates), though this has yet to come into force. Also, the United States led an effort that in 1997 resulted in the establishment of a third convention, the Convention on Supplementary Compensation, which stipulates a liability of $900 million. In a major development, the Japanese Diet approved the ratification of this Convention in late November, which means it will enter into force three months after Japan deposits its instrument of ratification with the International Atomic Energy Agency. Only six countries have ratified the Convention on Supplementary Compensation thus far, but if it fulfills its promise of streamlining liability claims in the event of an accident, it may steer more countries toward signing and ratifying it.

Depending on the location of a potential accident—in other words, given the liability regime in effect—claims of damages can be filed against a reactor’s operator or its supplier, or against national authorities. Allegedly wronged parties in neighboring countries could file these claims as well, raising questions of which courts can exercise jurisdiction in which cases. Since these claims can involve thousands of cases and stretch into the tens of billions of dollars in the case of large nuclear accidents, commercial operators carefully investigate liability law in jurisdictions where they contemplate building plants. Suppliers and operators that choose to embark on plants in nations that neither subscribe to international conventions nor have well-developed national liability regimes are usually state-owned or state-affiliated enterprises in rich developed or developing nations. It is generally assumed that the lion’s share of the liability for an accident in such jurisdictions rests with the sovereign.

National nuclear liability laws vary greatly. For example, some countries do not hold nuclear operators strictly liable for nuclear incidents. The amount of money in different nuclear insurance pools differs, and some countries do not extend financial protection to cover grave natural disasters. Harmonizing liability law by convincing states to subscribe to a single convention would eliminate some of the uncertainty that prevents nuclear operators from pursuing builds in certain countries, and would preclude the sort of extended, high-level political discussions between governments that are currently necessary for exporter and host nations to commence a nuclear power plant project. The modernized conventions also increase liability amounts, cover a wider range of damages, and explicitly declare that “grave natural disasters” are no grounds for exoneration. Nuclear liability law has yet to be harmonized within the European Union, let alone globally, and movement toward this goal has been very slow. In all likelihood, it will remain so.

Some existing nuclear energy states have not ratified any of the conventions, including India, China, South Africa, and Canada. Most of the developing world has yet to ratify any. Efforts to modernize the nuclear liability regime have thus far involved steering countries toward ratification of a single convention. But even if this happens, some developing nations considering a nuclear program probably could not afford the liability amounts for which they would be responsible under any of the conventions, and especially the revised Paris Convention or the Convention on Supplementary Compensation. In the event of a major accident, these nations might well default. In addition to the sociopolitical and economic implications, such a default could place an even greater burden on institutions that provide development aid, diverting much-needed funds from investments in capacity building. Global conventions on nuclear liability must recognize that recovering from accidents involving SMRs will entail smaller sums of money than the hundreds of millions of dollars currently prescribed. Alternative liability arrangements must be made for developing nations that are seeking to deploy one or several SMRs, as opposed to multi-gigawatt conventional plants. We describe alternative arrangements later; regardless of the form they ultimately take, liability considerations should certainly be a part of any future SMR deployment agreements and should be codified in international energy policy.

Proliferation. If SMRs are to be fueled in the field, as will be required for virtually all designs now in advanced stages of development, there is a possibility that spent fuel could be diverted for use in weapons programs, or for the construction of “dirty bombs.” Also, the mass deployment of SMRs might open new pathways for proliferation that will need to be managed. For example, the potential growth of the nuclear-trained workforce will broaden the population of people who have a detailed understanding of this technology.

Some suppliers have dismissed this concern, arguing that proliferation is “a uniquely American preoccupation.” However, it would become an international concern overnight if diversion were ever to occur. In our view, it is far better to find a comprehensive way to address the problem now, than try to patch things up if a diversion occurs after many SMRs have been deployed under a business-as-usual scenario.

New tools and more resources are needed to assess and manage the risk of proliferation. This is true not only for SMRs, but also for the world nuclear enterprise writ large. A recent report by the National Research Council (Improving the Assessment of the Proliferation Risk of Nuclear Fuel Cycles) clearly articulates the serious limitations of all present assessment tools.

Until better tools are developed, there are three common-sense steps that could be taken to manage the risk of proliferation from the mass deployment of SMRs. First, the international community should urgently act to create a global control and accounting system for all civilian nuclear materials. This system must incorporate as many nuclear isotopes as possible, and it must be easy for inspectors from the International Atomic Energy Agency (IAEA) to access and query. Second, preference must be given to SMR designs that minimize the need for on-site fuel handling and storage—in general, the fewer times the fuel is handled, the better. And third, nations must recommit to tackling the waste question, by consolidating existing stockpiles or establishing permanent repositories. A global, internationally supervised approach to waste management, of the sort proposed years ago by Chauncey Starr and Wolf Häfele, is highly unlikely. The historic reluctance of the United States to cede any sovereignty in such matters, and the rapidly decaying relationship between Russia and the West, pose enormous challenges on this front. National or regional facilities may be possible, of course, though the danger always exists of rich neighbors coercing poorer ones into inappropriately hosting storage facilities.

Preparing for nuclear reality

Even given the challenges that remain, it is likely that many countries in the developing world will want to push forward with installing and operating SMRs. To better assist with and control such mass deployment of SMRs, new institutional arrangements are needed that would globalize standards for which SMR designs can be deployed, how to respond to potential accidents, and how to reduce the probability of proliferation. We were able to explore alternative institutional arrangements at a workshop we organized in Switzerland with the International Risk Governance Council and the Paul Scherrer Institut. The workshop, which was supported by the MacArthur Foundation, brought together forty experts from eleven countries, nine SMR vendors, and all major nuclear supplier states.

As a first step toward this goal, a radical modification of the certification and licensing process must be developed and adopted. Many countries that could be interested in SMRs do not even have a nuclear regulatory authority. The movement in the United States toward certifying a design and then licensing site-specific modifications is welcome and provides a good starting point for streamlining the SMR deployment process. Unfortunately, the U.S. Nuclear Regulatory Commission (NRC) is currently unequipped to assess any designs, especially non-light water ones, in a timely way.

If the industry takes every new idea to mean a protracted, expensive struggle with the regulator, it will instead design out these innovations. To be sure, vendors with novel ideas must be prepared to defend these ideas. At the same time, regulators must acknowledge the nuclear innovator’s dilemma and be equipped to step out of their comfort zone when evaluating designs. While many officials in the United States keep referring to NRC certification as “the gold standard,” many of the nation’s allies and rivals disagree with that characterization. And, if the agency does not develop the capability to assess advanced designs, it runs the risk of becoming less and less relevant as China, Korea, and others certify and market their own designs across the world.

Ideally, designs should first be certified and built in their home country. Another nuclear supplier state should then certify the design. Certification from regulators in two reactor-supplying states would assure inexperienced customers of the design’s viability. What is radical about this idea is that the host nation’s regulator would not undertake the design certification process itself, saving both the supplier and the host nation time and money. The staff of a newly established national regulator should engage in an intensive education program with the regulators who certified the design. The details of this process should be stipulated in multilateral agreements involving the exporting nation, the host nation, and the IAEA. Material generated during the original design certification process would be shared with the host nation’s regulator. Therefore, the relatively inexperienced host nation regulator would only be responsible for approving site-specific changes to the standardized design. This plan requires not only collaboration among national regulators, but also a permanent forum to facilitate and support the process: the IAEA should assume this role.

It is highly unlikely that the IAEA would be granted the authority and resources needed to certify SMR designs, though some developing countries might consider that a more credible stamp of approval than what we suggest above. Regardless of who certifies the design, in a business-as-usual world, vendors would be responsible for paying the cost of design certification, as they do now. The same would hold in a BOOR world, although granting the IAEA an expanded mandate under this regime implies that suppliers would have to obtain certification of good design and operational practice from the agency, for which they would pay an annual fee.

We believe that streamlining the certification and licensing process is as effective a course of action as can be achieved in today’s multipolar world. It would enable developing nations, including those countries that do not have the capability to certify a nuclear reactor design, to exploit civilian nuclear power in a much safer way. The alternatives include business-as-usual at one end of the spectrum, which constitutes a high barrier to entry and confines nuclear power to existing nuclear energy states, and at the other end a fully internationalized regulatory regime, which is highly unlikely given current attitudes to national sovereignty.

As a second step, the development of a robust international crisis management infrastructure is essential if SMRs are to see wide deployment. Momentum for such evolution has been growing since the Fukushima Daiichi nuclear accident, which demonstrated that even developed nations need international support to respond to accidents. The need is exacerbated by the fact that SMRs might be deployed in countries that are challenged by human capital, organizational, and physical resource constraints.

The IAEA, or leading nuclear supplier states, must establish a far more effective accident evaluation and response team. This team should include a multidisciplinary group of experts in emergency management, diplomacy, nuclear power, risk assessment, and risk communication. The team would be responsible for a diverse range of tasks, including advising and assisting in the preparation of nuclear plants that lie in the path of anticipated natural disasters, coordinating the international response to nuclear accidents, and communicating with the public in real time in the event of such accidents. The latter requires the development of instruments that communicate the level of risk and the appropriate course of action depending on the emergency faced: the IAEA’s International Nuclear Event Scale is of little use in anything but a retrospective capacity.

The team would also need to maintain good relations with nuclear regulators and emergency managers throughout the world, which is why housing it within the IAEA, with its reach and influence, would be the preferred approach. And if it is not granted the power to requisition assets or deploy them from a purpose-built stock, it must dedicate staff to liaising with major powers’ armed forces, with leading providers of humanitarian relief, and with shipment and logistics companies. In the case of a nuclear emergency, the humanitarian response that would need to be mobilized is significant enough to overwhelm existing humanitarian aid organizations, and to divert substantial resources from other crises. The development of such purpose-built, fully funded international response teams would go some way to preventing this.

On the level of plant operators, it is imperative that the World Association of Nuclear Operators strive to achieve the level of information sharing, inspection technique development, and operator training that has been so successfully exhibited by the Institute of Nuclear Power Operations in the United States. The institute's efforts have shown that safety and reliability can come before proprietary concerns: information-sharing works in the interest of all plant operators, and thus of the nuclear industry and the public at large.

On the level of individual contracts, each sale should be preceded by multilateral agreements among SMR exporters, host nations, and the IAEA that explicitly address the need to create, through training in disaster risk management, a level of emergency response capacity in the host nation commensurate with the level of risk created.

This proposed framework offers several benefits. It means that additional exports would require improved emergency response capacity, sustaining the relationship between exporter, host, and the IAEA. It would also facilitate both the standardizing of emergency response procedures and the updating of existing procedures as operational experience with SMRs increases. The framework has the added advantage of working in both a business-as-usual world and a BOOR world. Finally, maintaining robust international and national emergency response measures would force the world to abandon the myth of absolute safety, and with it complacency. As mentioned earlier, every nation wishing to purchase an SMR must accept some of the responsibility that comes with a nuclear power plant, and that includes developing a level of emergency response and crisis management infrastructure robust enough to cope with the effects of potential accidents, aided by the sort of international support that we have proposed.

Third, given the high liability amounts stipulated in existing conventions, the international community would be well advised to develop some form of shared international liability cap, specifically for SMRs, to reflect the smaller consequences of accidents involving these reactors and the enhanced level of safety they incorporate. It is worth noting, for example, that a reactor's decommissioning funding allowance in the United States is based on the size bracket in which it falls (there are two). Although such an international approach is wise, we consider it unlikely to be adopted. Alternatively, national nuclear industries can force such efforts into being as each lobbies its government to share liability for its products with customer nations. Obviously, such lobbying efforts would be more successful if SMRs become competitive and significant demand from overseas customers can be demonstrated.

As for funding these efforts, it is worth exploring the development of shared regional liability caps, or "endowments," to be managed by bodies set up specifically for this purpose, with their assets dedicated to responding to regional nuclear accidents. Many nations share grid infrastructure with their neighbors; regions are becoming electrically more interconnected. For example, since the United Arab Emirates plans to feed power from its reactors into a Gulf Cooperation Council grid, perhaps those nations that benefit from nuclear power while hosting no plants should contribute to mitigating the consequences of a nuclear accident in their region. The same might be possible in the East African Community or the Economic Community of West African States, should Kenya or Nigeria build an SMR. The level of each country's contribution could reflect the share of the plant's power output that it consumes. Alternatively, ex-ante bilateral agreements with powerful neighbors, or with the exporting nation, could take some of the financial burden off the host nation, preventing financial ruin in the case of an accident.
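
To give the consumption-share idea a concrete form, the back-of-the-envelope sketch below (in Python) apportions contributions to a hypothetical regional endowment; the countries, output figures, and endowment target are all invented for illustration, and the point is only the proportional arithmetic, not a proposed funding formula.

    # Hypothetical illustration: apportion contributions to a regional
    # nuclear-liability endowment by each country's share of the plant's output.
    # All names and numbers below are invented for the sake of the example.
    plant_output_gwh = 10_000                    # annual SMR output, in GWh
    consumption_gwh = {"Host": 6_000, "Neighbor A": 2_500, "Neighbor B": 1_500}
    endowment_target = 500_000_000               # regional endowment target, in dollars

    for country, gwh in consumption_gwh.items():
        share = gwh / plant_output_gwh           # fraction of output consumed
        contribution = share * endowment_target  # proportional contribution
        print(f"{country}: {share:.0%} of output -> ${contribution:,.0f}")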

Roadmap for institutional change

Three common threads run through these issues. First, each of the above challenges requires a well-resourced and resolute IAEA. The agency currently lacks the resources and trained personnel to provide the level of supervision and oversight needed to sustain a safe and secure build-out of large or small reactors on the scale required to decarbonize the global grid. Many of the changes we propose will require vendors, operators, or sovereigns to pay a one-time or annual fee, either to support licensing and certification efforts or to support the training of local responders and a rapidly deployable international emergency response capability. In cases where the IAEA shoulders the burden of facilitating or supporting these efforts, it should receive appropriate compensation.

Second, smaller nations cannot afford the liability caps that existing conventions prescribe. Moreover, they are interested in smaller, safer reactors. Recovery from an accident involving an SMR will, in all likelihood, entail fewer resources than recovery from large reactor accidents. Any credible institutional arrangement will require the establishment and maintenance of either international or regional SMR liability pools, or perhaps both. This requires careful assessment of the willingness to pay of both host and exporter nations, and of the amount of liability that the private industry (through insurers and re-insurers) is willing to assume. Because that depends on many factors, ranging from the level of risk posed by an SMR (this differs depending on design, certification, and deployment strategy) to location, we suggest changing the focus from ultimately arbitrary “liability caps” to building and carefully managing endowments.

We recommend the establishment of an international SMR liability pool into which host and exporter nations must pay before an SMR is brought online. Deposits from intergovernmental and private entities would supplement these funds, as would annual deposits from SMR operators. The levels could differ depending on the risk posed by the SMR and deployment location and, if a region is organized enough to demand additional coverage, a similar regional endowment could be established to supplement the international one. Such collaboration is not unheard of. Sixteen Caribbean nations joined together in 2007 to form the Caribbean Catastrophe Risk Insurance Facility, which was developed and partially capitalized by the World Bank and the government of Japan. Other nations and organizations have also contributed to this trust fund, including Bermuda, Canada, the Caribbean Development Bank, the European Union, France, Ireland, and the United Kingdom. It is unclear whether nuclear insurance policies would gain similar access to traditional and capital markets, and whether risk pooling would lower premiums to an extent that would justify the development of such a facility, but it is an approach that should be explored.

Third, bilateral and multilateral initiatives are needed to improve regional and international collaboration, standardize procedures globally, and accelerate the development of infrastructure necessary to exploit nuclear power responsibly. It is easier to incorporate norms in overarching international conventions if a critical mass of countries already subscribes to them. SMRs perhaps represent the industry's best chance of achieving this standardization. Building large reactors in emerging nuclear energy states requires decadal or multi-decadal collaboration between exporter and host nation on many fronts, from the political to the financial to the technical. For many emerging nuclear energy states, these acquisitions would be a once-in-a-generation undertaking, if they are possible at all. As a result, the standardization process has been extremely slow. Smaller reactors that prove to be economically attractive, less complex, and shippable worldwide could alter this paradigm.

We have avoided proposing revisions that would require overarching international treaties, simply because we do not see the political will that would be needed to develop a new, comprehensive, and multilateral regime for the 21st century. Perhaps only a shock, such as another major nuclear accident or a serious proliferation incident, can generate that political will. For example, if there is a serious enough diversion of nuclear materials by a state or non-state actor, this might catalyze the development of a global, comprehensive nuclear material control and accounting system. Advocates of such a system have outlined its necessity for decades. If our assessment is correct, it is a poor reflection on the state of national and global affairs that only a nuclear disaster could galvanize such action.

Although it is not yet clear what multilateralism in a multipolar world will look like, it will probably be messier than it is today. Bottom-up approaches to harmonizing global standards and enhancing the control regime, despite their messiness, might hold the greatest likelihood of success. And, since it is highly unlikely that the United States, Europe, or Japan will become major SMR exporters, these players need to use what soft power they have to help craft as strong a nuclear control regime for SMRs as is possible. This is especially true now that relations between major nuclear supplier states are becoming increasingly frayed, particularly those between Japan and China, Korea and China, France and Russia, and the United States and Russia.

There is an urgent need to raise living standards across the developing world. If SMRs cannot be part of a portfolio of future energy technologies, it is difficult to see how this can be achieved without a massive increase in future emissions of carbon dioxide. While the suite of energy sources needed to mitigate global emissions does not need to be identical everywhere, it does need to consist of low-carbon sources. Few developing nations beyond the richest are likely to seek to build and run large nuclear power plants. But with a few far-sighted and uniformly positive changes to the institutions that govern the technology, small modular reactors could prove to be a valuable part of the mix in some countries.

Ahmed Abdulla ([email protected]) is a postdoctoral research fellow and M. Granger Morgan ([email protected]) is a professor in the Department of Engineering and Public Policy at Carnegie Mellon University.

Has NIH Lost Its Halo?

After decades of strong budget growth, the National Institutes of Health now faces an increasingly constrained funding environment and questions about the value of its research.


For six decades after World War II, the National Institutes of Health (NIH) was the darling of Congress, a jewel in the crown of the federal government that basked in bipartisan splendor. It enjoyed an open authorization statute, giving it permanent authority to distribute funding without having to come back to Congress to regain that authority every few years. Appropriation hearings to decide the amount NIH would actually spend each year were usually love fests that lasted a week to a fortnight, with each institute and new initiative given its day in the sun. There were tensions and conflict, of course, and members of Congress and disease advocates were persistently disgruntled by NIH's science-centric culture, which they perceived as lacking urgency in curing cancer or confronting the AIDS epidemic. But each year brought concrete accomplishments, examples of how federal dollars had advanced the conquest of disease. This was no Potemkin village; the stories told of real progress against a common enemy of all humankind, be it cancer, heart disease, stroke, diabetes, Parkinson's disease, Alzheimer's disease, or childhood leukemia. The stories were simple and easy to understand, and there was truly a line from NIH research to clinical advances. And there were plenty of diseases, such as Alzheimer's, on which all could agree that something had to be done, and that without research that something would never become clear.

A different disease might catch the congressional eye for a year or three, with boosts for that condition incorporated into a newly elevated budget baseline. Presidents routinely low-balled NIH's budget request to make room for their own priorities, knowing full well that Congress would restore the NIH budget and throw in a bit more.

The result of NIH’s privileged status in Congress was nearly monotonic growth for six decades, punctuated by a few bad years, such as 1967-1969, when two NIH champions left Congress, as Lister Hill retired from the Senate and John Fogarty died, even as James Shannon turned over the reins as NIH director after 13 glorious years of expansion. It took a few years for Mary Lasker and other disease advocates to re-assemble their political coalition, but NIH resumed its expansion into and through the War on Cancer of the early 1970s. Even in the face of considerable controversy over how NIH should be structured and governed, agreement on budget increases was still possible. A few years of relative stagnation in the 1990s gave way to a budget doubling from 1998 to 2003 that spanned the Clinton and George W. Bush administrations. It mattered little which party controlled the White House or the houses of Congress; everyone is against disease and for research to rid the world of it.

Since that last doubling ended in 2003, however, NIH politics have changed. NIH received one more $10-billion dollop of stimulus funding in 2009-2010 as a swan song for Arlen Specter, honoring his long service, his battle with cancer, and his flip to the Democrats at a crucial moment as he fell off the rapidly eroding moderate edge of the Republican Party. But NIH’s stimulus funding was an anomalous blip in the past decade of budget stagnation. NIH’s purchasing power dropped by double digits after the 2003 peak, and even fear of disease does not seem to overcome the partisan gridlock that besets a Congress likely to be scored as the most dysfunctional in American history. These days, the NIH appropriations hearings are a brief and pale shadow of their former grandeur. The appropriations process itself has largely been replaced by rolling continuing resolutions that extend the previous year’s policies with only incremental adjustments. The days of piling dollars onto NIH are long gone.

This relative neglect persists despite NIH having one of the most politically adept directors in the agency’s history, Francis Collins, who has a remarkable capacity for bridging the partisan chasm with folksy charm—buttressed by his guitar and motorcycle—a genuine passion for research and medical care, and a talent for explicating biomedical science in human terms.

Are the changing politics a reflection of inattention specific to NIH—a diminution of its perceived importance to Congress or loss of public support—or is NIH merely suffering collateral damage from the larger and deeper paralysis of national government? Is the stagnation simply one among many consequences of polarization and political logjam? Are political undercurrents permanently changing how federal support for all research will fare in the future? Or do the distinctive features of biomedical politics suggest that its future will be independent of the rest of the federal research and development (R&D) enterprise, as it was during the doubling era? And what might the answers to such questions mean for scientists and decision makers? It is worth noting that such questions are only now beginning to be asked by the NIH community, and very tentatively at that. The place to start answering them is with an understanding of NIH’s political context, and the fact that the NIH budget rests on several tectonic plates, subject to different political pressures. Here I will explore the dynamics that seem most important for understanding what the future may hold.


Scale escalation

When the NIH budget was $700,000 going into World War II, it was easy to raise the budget to $3.4 million by the war’s end, and to boost it another tenfold by the early 1950s. Until the 1970s, the US economy was generally healthy, discretionary budgets floated on rising waters, and NIH got disproportionate increases, both relative to the government as a whole and in comparison to other research agencies (although defense R&D had spurts associated with the Korean War, post-Sputnik, and the War in Vietnam; and the National Aeronautics and Space Administration expanded rapidly in the 1960s to fulfill President Kennedy’s 1961 challenge “of landing a man on the moon and returning him safely to Earth”). From 1970 through 2003, NIH’s research funding consistently and significantly outgrew other federal research accounts (see Figure 1).

Figure 1. Obligations for basic and applied research, 1970–2009. (Source: National Science Foundation.)

The rise of molecular biology and the continued efforts of disease-research advocates help explain this growth. The promise and practical import of the powerful new molecular and cellular biology were palpable, and Congress fueled their growth through generous NIH budgets. Moreover, NIH was the research arm of a behemoth—the Department of Health and Human Services (or Health, Education and Welfare before President Carter created the Department of Education). NIH began as a relatively small research agency with an ambitious mission in a large department, although after the mid-1960s, with the creation of Medicare and Medicaid, most health expenditures flowing through the department were entitlements rather than discretionary appropriations. NIH grew, but so did health expenditures. As a fraction of US health expenditures, the federal health research budget (of which NIH is by far the largest part) has hovered around 2% since 1980.

As NIH’s budget has grown to $30 billion annually—fully half of all civilian R&D expenditures—it has become harder to increase it without pinching other agencies. NIH is now larger than other Public Health Service budgets, so boosting its funding by 10 percent in an era of constrained spending overall would likely cause even larger percent cuts in other vital agencies such as the Food and Drug Administration, Centers for Disease Control and Prevention, Agency for Healthcare Research and Quality, and service components and block grants that are funded by annual appropriations. Appropriations to the Departments of Labor and of Education come out of the same appropriation subcommittee allocation, so NIH also competes directly in the congressional appropriations process with other, non-health programs, including some that are of vital interest to universities (such as the Department of Education’s Pell grants). More broadly, rising entitlement spending puts increasing pressure on all discretionary accounts. Given how well-treated NIH was before the current era of constrained budgets, it is hard to argue that the agency is more deserving of increases than other key agencies.
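
A stylized sketch of that zero-sum arithmetic may help; only the roughly $30 billion NIH figure comes from the text, while the combined total for the smaller accounts is invented, so the numbers below are purely illustrative.

    # Stylized, hypothetical illustration of the budget arithmetic described above.
    # Only the NIH figure is drawn from the text; the "others" total is invented.
    nih = 30.0             # NIH budget, in billions of dollars (approximate)
    others = 12.0          # hypothetical combined total of the smaller accounts
    boost = 0.10 * nih     # a 10 percent NIH increase, in billions

    cut_fraction = boost / others  # share of the smaller accounts that must be cut
    print(f"A ${boost:.0f} billion NIH boost implies a {cut_fraction:.0%} cut to the smaller accounts")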

The annual cures and research breakthroughs, the truly impressive parade of Nobel Prizes and Lasker Awards, and the profusion of research articles that flow from biomedical research excellence tell powerful stories. But the novelty wears off, the frame becomes formulaic, and the hype becomes tiresome, if not defensive. Happy tales do not a healthy dog make. We did not lose the War on Cancer—far from it. Progress has been slow and steady, with some remarkable achievements in drugs for this particular cancer subtype or that. But we have hardly won the war, especially as mortality from metastatic cancer remains largely unabated. After three decades and tens of billions of academic and industrial research dollars poured into the amyloid cascade, there is still no known way to prevent, or even do much to mitigate the ravages of Alzheimer’s disease that threaten to increase inexorably over coming decades. Every year for the better part of a century, members of Congress have heard that cures are “just over the horizon” and that science is poised as it has never been to combat disease. And each year has added to the science base, bringing new scientific opportunities and possibilities for clinical application. Some cures have come, and one after another new technology has opened up new prospects. Knowledge does accumulate. Enthusiasm and novelty, however, do not necessarily follow. New technologies have helped feed rising costs, and the chronic conditions of an aging population grow more intractable. The stories of progress can all be true, but the arguments for larger budgets and more political determination lose their oomph after decades of annual repetition and continued health challenges.


Fractured constituencies

Mary Lasker and Florence Mahoney discovered a political strategy for using private philanthropic capital to leverage biomedical research funding from Congress in the years after World War II. Lasker remained a major figure in the biomedical research lobby until her death in 1994. The AIDS community, meanwhile, had shown how patient groups could be extremely effective at garnering research support, but the political process was becoming more complex. Hundreds of disease groups were by then following the same script that Lasker used to boost cancer research, lobbying to create institutes for their own conditions. And NIH institutes proliferated to respond to these constituencies. Some were for stages of life (childhood and aging), some were for health research fields that were said to be neglected (nursing and biomedical engineering), some were for medical conditions (arthritis, eye disease, communication disorders), and some were responses to scientific opportunity (the Human Genome Project). The biomedical lobby became more factious, more specialized, and harder to harness into a coherent movement for biomedical research as a whole. What was once a War on Cancer became coalitions for particular forms of cancer (leukemia and lymphoma, breast cancer, prostate cancer, “neglected” cancers), and even those coalitions have become fractionated. Breast cancer alone has previvors (those at genetic risk), survivors, and metavivors (those contending with metastatic disease). The subgroups all argue for increased research, but their priorities are not fully in alignment, and some have grown frustrated with NIH’s focus on research rather than cures. The day when just a few activists dominated the political scene has given way to coalitions and sometimes cacophony among research advocates. The number of organizations and their disparate goals diffuse political focus. The politics are more factious, with many constituencies finding their own congressional champions and channels of communication and even, as with the Congressionally Directed Medical Research Programs at the Department of Defense, alternative agencies.

As NIH grew, so did the institutions it funded to do research. NIH-funded research is an industry that sustains academic health centers throughout the nation—and fuels the ambition of every research university to attract a bigger piece of the pie. That industry sometimes behaves as political scientists predict, as an interest group, building national organizations and crafting political strategies to influence elected and executive branch officials in Washington. Academic health centers have expanded remarkably over the decades, and entire careers are devoted to biomedical research lobbying. With such institutionalization comes sclerosis, especially as the system was built on the assumption of infinite growth, and includes no options for responding to resource constraints.

Thus, as Bruce Alberts, Shirley Tilghman, Harold Varmus, and Marc Kirschner noted recently in Proceedings of the National Academy of Sciences, biomedical research institutions have trained graduate students and postdocs for research careers that can only accommodate a sixth of their number. Hyper-competition and plummeting success rates are a result of a mismatch between research labor supply and demand. Although exacerbated by stagnant funding, these stresses were inevitable consequences of the system’s growth-dependent dynamics.

As Geoff Earle reported in The Hill, when NIH’s budget was up for discussion soon after its doubling, in March 2004, Senator Pete Domenici, a long-time supporter of NIH and passionate advocate for mental health, exclaimed in frustration:

    “I hate to say it, but the NIH is one of the best agencies in the world,” an angry Domenici said as he spoke in opposition to an amendment by Sen. Arlen Specter (R-Pa.) to boost NIH funding by $1.5 billion. “But they’ve turned into pigs. You know, pigs! They can’t keep their oinks closed. They send a Senator down there [to] argue as if they’re broke.”

After decades of disproportionate growth of the biomedical research sector, the debate after 2003 turned to restoring some balance among funding streams to the physical sciences, engineering, mathematics, and the social and behavioral sciences.

Despite its name, NIH’s mission has not generally been current health per se, but rather research for tomorrow’s health, and progress against intractable diseases through better understanding. An agency devoted to current health would do well to focus on tobacco control, exercise, nutrition, sanitation, and more cost-effective delivery of health care—prevention and efficiency, rather than research on diseases currently not treatable. NIH does these things—some programs such as the National High Blood Pressure Education Program and the National Cancer Institute’s ASSIST program have been signal successes in achieving health gains—but health and health care are not NIH’s main show. NIH is primarily about addressing diseases currently refractory to treatment, in hopes of changing that fact. And that is surely an appropriate government mission, since it is inherently long-term, the main output is information and knowledge, and the financial benefits are hard for private firms to appropriate. These are all features of public goods that only collective action and patient, public capital can supply.

It is a completely fair and open question, however, how much research should focus on basic biological mechanisms, how much on clinically promising interventions, how much on understanding and improving the way health care and preventive services are delivered, and how much on patient-centered outcomes research. It is also fair to ask whether the different elements of the biomedical innovation ecosystem are working well together—and if they are not, what good it would do to continue to favor the biological research approaches. Such questions are especially pertinent in light of the nation’s continually mediocre public health outcomes, and their stark contrast to the sophistication and productivity of the biomedical research enterprise.

One report after another, dating back to the Shannon Era in the 1950s, has tried to address how to achieve the right balance in the research portfolio. The truth is that there is no overarching theory of biomedical innovation sufficient to specify a “right” balance with any precision. At the macro level, Congress appropriates to institutes and centers that map to diseases, health missions, or health constituencies—factors that weigh in the political assessment of social value. At the micro level, most project funding decisions are made by merit review—usually peer review—as a fair way to assess scientific opportunity. The contending factions arguing before Congress help set the macro goals, expert scientists (sometimes augmented by disease-research advocates) make the project-by-project funding decisions, and overall system priorities and institutional architectures evolve to reconcile these different scales. This is a political process solution to a wicked problem with no reliable predictive theory. It is probably not optimal; but the question of what would work better has no agreed-upon answer.

Everyone is against cancer, but not everyone favors human embryo research, or all forms of it. Although the advance of biomedical research is a nearly universal goal, a significant fraction of the polity does not believe in Darwinian evolution, and yet almost everyone who does biology or practices medicine does. This clash of epistemologies carries political risk. As American politics has polarized, some aspects of biomedical research have broken along roughly partisan lines. Stem cell research sharply distinguished the Republican and Democratic platforms in the 2000 and 2004 presidential campaigns, for example, although the partisan differences amplified sometimes relatively small differences in the parties’ actual policy preferences. Embryonic stem cell research was an unusual intrusion of a biomedical research issue into presidential politics, but it exemplifies the risk. Partisanship over stem cell research did not spill over to affect the overall biomedical research budget, although it did affect the degree to which different administrations set constraints on embryo and stem cell research within the biomedical research budget.

Partisan discord was less prominent in the 2008 and 2012 presidential election cycles, and only time will tell whether biomedical research becomes entangled in partisan bickering. If the intensity of partisanship further escalates, a partisan cleavage could emerge again, and it could affect support for biomedical research in general, not just specific research approaches.


Time for rethinking

Michael Crow, president of Arizona State University, wrote in Nature three years ago about how health research was unduly decoupled from health outcomes, and called for rethinking how NIH and other components of biomedical research might more directly contribute to better health. I and others responded with concern. We were worried because, in a blizzard, it is generally not good policy to shoot the lead dog. NIH is an effective agency, and it was no small feat to establish and sustain its excellence. But the “rethinking” part of Crow’s exhortation is well taken, and there are very large imbalances in the health research portfolio, with health services research and prevention the perennial stepchildren, and biomedical research the favored biological child.

If we turn explicit attention to fostering economic growth and to connecting research more tightly to its intended goal of improving health, then there is the possibility of not simply growing the crusty, large, and inertial system of health research but of more fully integrating it into the national economy as a matter of national policy.

The prospect is exciting but daunting. Current policies of regulating and paying for health goods and services reward introduction and overuse of expensive technologies that add incremental improvements in health, but with scant attention to cost or relative effectiveness. The trillion-dollar annual federal expenditures through Medicare, Medicaid, and other health programs (such as the Veterans Administration, military health programs, Indian Health Service, and federal employee health program) are not guided by a long-term strategy for improving health care. Instead, they have become open-ended entitlements with brainless purchasing policies. The Medicare statute, for example, explicitly denies authority to consider cost-effectiveness in medical practice, which sets up perverse incentives for cost-escalating innovation. Federal programs are not prudent buyers of the most effective health goods and services. The incentives favor expensive new drugs and devices that command high profit, and discourage low-cost innovation. To call this a “system” or a “market” is to stretch those words beyond coherence.

One obvious response to incoherence is better theory and more facts. It is, however, ultimately unsatisfying to merely call for more research on research. Some gaps are obvious: we need public funding to compare effectiveness of medical goods and services. Private firms’ interests will not drive the knowledge needed to make prudent purchasing decisions. Such information is a public good and the public will have to pay for it. The current laissez-faire approach merely invites perpetual cost escalation. More explicit attention to understanding the current “market” incentives, and to thinking through how to align such incentives for innovation with long-term cost-effectiveness, could contribute to a system that incrementally improves over time, based on evidence. And of course we need more research, both basic and clinical, on diseases we do not know how to control in hopes that someday we will be able to do something about them. In the end, however, how much to spend on research is a political choice, and it will be decided through our political processes.

The decade of stagnation in biomedical research may itself be turning into a political issue. Representatives Fred Upton (R-Michigan) and Diana DeGette (D-Colorado) of the Energy and Commerce Committee (which authorizes NIH activities in the House) are co-leading a bipartisan focus on “21st Century Cures.” This initiative seems to be a traditional bipartisan response, and carries on the congressional legacy of focusing on the conquest of disease. Both features are welcome, but the question is whether they can thrive in the generally poisonous atmosphere in the Capitol.

The importance of research as a component of economic growth is another shared value that can command bipartisan consensus. Elizabeth Popp Berman, in Creating the Market University, clearly traces how research universities grafted a new mission onto the traditional academic goals of creating and disseminating knowledge. Although research universities have had strong and productive ties with industry since the late 19th century, only more recently have they explicitly taken on the mantle of fostering economic growth as key components in a national system of innovation. This analytical framework ripened into national policies, particularly between 1980 and 2000. Recent reports such as Restoring the Foundation from the American Academy of Arts and Sciences and the National Research Council’s Rising Above the Gathering Storm and its sequels build on this theme.

The kernel of truth in such reports is that universities and research clearly are important sources of ideas, information, and technologies that matter immensely in the innovation ecosystem. One difficulty with the framework, however, is that it relies on open-ended arguments that support increased funding but offer less guidance about how to make investments in economic growth more effective. No coherent theories predict how best to spend public dollars—or tell us how many dollars are enough. The reports are persuasive in documenting stagnation, in warning of the danger of under-investment, and in pointing to the emergence of R&D-driven economic policies in Asia that could overtake US pre-eminence in research and knowledge-based economic growth. They are, however, also unconvincing in articulating research-system designs that can meet the challenges of today’s world.

The open question for NIH is whether these arguments about economic growth, when combined with the attractive logic of boosting support for research to address the burden of diseases for which current public health and medical care are inadequate, will build political momentum to reverse a decade of neglect. Has NIH lost its halo, or will it begin to shine again?

Robert Cook-Deegan ([email protected]) is a research professor at the Sanford School of Public Policy, Duke University.

Forum – Winter 2015

Immigration reform

In “Streamlining the Visa Immigration Systems for Scientists and Engineers” (Issues, Fall 2014), Albert H. Teich articulates very well many of the arguments for making the U.S. visa system work better for visitors and the scientific enterprise, and then offers a sound plan of action to carry out this task. As the author points out, the country has benefitted tremendously in the past from its visitors and the immigration of foreign scientists. It is clearly in the national interest of the United States to keep a steady stream of foreign students and visitors coming to and working in this country.

A variety of intersecting factors adds urgency to dealing with this issue. First, as more and more countries invest in science and build their own science infrastructure, scientists around the world will have many more options for where they can conduct their research with first-class facilities and support. Unless the United States changes its visa system to lower the bureaucratic barriers to coming to this country, more of the top students and practicing scientists in other countries will simply choose to go to other places for high-quality training and facilities in which to do their work.

The likelihood of scientists going elsewhere is compounded by the overall reduction in research and development (R&D) funding in the United States that has occurred over the past decade. It has become much more difficult to have a productive research career in this country than it used to be. Funding for science, and thus the likelihood of gaining research support, has decreased over the past decade, even as funding in other countries is on the increase. Overall, U.S. R&D spending has fallen 16% in inflation-adjusted dollars from FY 2010 to the FY 2015 budget request. The federal government’s investment in science and technology now stands at roughly 0.78% of the economy, the lowest point in 50 years. Why would foreign scientists choose the United States when funding has become so constrained, and when it is both difficult and risky to try to settle there?

U.S. visa policies also make it much more difficult for U.S. and foreign scientists to share ideas and to collaborate. As the scientific enterprise has become much more global in character over the past decades, multinational discussion forums, and increasingly actual collaborations, are now the norm for most disciplines. As one data point, more than 50% of the papers published in Science include authors from more than one country. Importantly, in spite of the advent of effective electronic communication, face-to-face interactions still greatly benefit collaboration. The current protracted and unreliable U.S. visa process makes it not only difficult but extremely unattractive even to try to come to the United States for either purpose.

The issue of the U.S. visa and immigration system for science visitors, students, and researchers has been a discussion point for decades. It is in the United States’ interest to act now and make the system work much more reliably and efficiently. Let’s get to it!

Alan I. Leshner

Chief Executive Officer, American Association for the Advancement of Science

Executive Publisher, Science


Albert Teich does a masterful job of describing the human, political, and economic costs of the United States’ broken immigration and visa systems. He also reaffirms recommendations that have been advanced previously by NAFSA: Association of International Educators and other groups familiar with the nation’s schizophrenic immigration and visa systems. For many decades, the United States has derived many educational, economic, and social benefits from the mobility of global academic talent and immigrant entrepreneurs. NAFSA’s annual economic analysis shows that during the 2012-13 academic year, the presence of international students and their families supported 313,000 jobs and contributed $24 billion to the U.S. economy. This means that for every seven international students enrolled, three U.S. jobs were created or supported. New data, disaggregated by state and congressional district, will be released during International Education Week and can be accessed at www.nafsa.org/econvalue.

In addition to the economic benefits of international education, the foreign-policy contributions of international students and scholars around the world should never be underestimated. U.S. policymakers have taken for granted this rich human and political capital: they erroneously assume that the best and brightest students, researchers, and entrepreneurs will continue to embrace the United States as the destination of choice for study, research, and business. Lost in the politicization of the issue is the fact that many countries recognize the value of international education and are upgrading their immigration policies to facilitate student and scholar mobility with the goal of attracting and retaining this global talent.

Indeed, the United States is not the sole country of “pull” for global talent. As the number of internationally mobile students has doubled, the U.S. share of this group has decreased by 10%. In making the decision to study or conduct research, students and scholars take into account the immigration policies of destination countries. The combination of an outdated U.S. immigration law (written in 1952) with post-9/11 regulations has had a chilling effect on this country’s ability to attract and retain much-needed human capital. Immigration laws have not kept pace with the emergence of new global economies. The failure of the immigration system poses a real threat to U.S. global economic competitiveness.

The United States cannot afford to lose in the global competition for talent. To remain competitive, it must remove unnecessary barriers and pass comprehensive immigration reform fit for a 21st-century world. In doing so, the United States will send a strong message to international students, researchers, and entrepreneurs that it is a welcoming nation. Since it is impossible to accurately determine the sectors that will innovate and their demands for human capital in the United States, we need to consider comprehensive immigration reform for all international students and scholars, not just those in science and engineering.

The United States must remain true to its values and resist the politics of fear that undermine its economic competitiveness as well as weaken public-cultural diplomacy efforts.

Fanta Aw

President and Chair of the Board of Directors

NAFSA: Association of International Educators


Albert Teich’s thorough and thoughtful article lays out the very real obstacles that still remain for foreign scientists and engineers who wish to study and work in the United States. I say “still” because the U.S. government has, in fact, tried very hard to fix the many mistakes that were made in the aftermath of the 9/11 terrorist attacks—mistakes that led, as Teich notes, to serious problems for foreign students and visiting scientists and disrupted significant scientific research. In recent years, the State Department has made it a priority to process foreign student visa applications in a timely fashion; has significantly reduced the long wait times in embassies in places such as China, India, and Brazil; and has streamlined the security review process. Those efforts have produced results. The United States issued more than 9 million non-immigrant visas last year, a near doubling since 2003, and a record 820,000 international students are now studying in the United States.

The problems that remain are largely a result of two things: the inability of Congress to reform outdated U.S. immigration laws, and the tendency of any large government organization to treat its clients with a certain disregard. When those clients are top students and scientists who are increasingly sought out by many countries, the loss to the United States can be significant. One recent example of that disregard: the latest Office of Inspector General (OIG) report on the State Department’s Bureau of Consular Affairs, which runs U.S. visa operations around the world, stated flatly that the U.S. government “does not respond adequately to public inquiries about the status of visa cases.” If you’re a foreign scientist waiting in frustration for a visa to attend a scientific conference in Boston, for example, you have virtually no chance of learning when the U.S. government might make its decision. The OIG team discovered a queue of 50,000 emails awaiting a response, and when the team twice tried to call the service help line, the callers never reached a live human being.

Teich offers a sensible list of fixes to make the visa process friendlier for foreign students and scientists. Unfortunately, President Obama’s recent executive action on immigration, which asserts an expansive interpretation of executive authority to help undocumented immigrants, does too little for scientists and engineers coming to the United States through the proper legal channels. It promises some helpful changes to after-graduation work rules for foreign science, technology, engineering, and mathematics students, and opens new doors for immigrant entrepreneurs. But several of Teich’s recommendations are similarly administrative fixes that could be implemented without congressional action, yet they were not part of the president’s package. This was an unfortunate missed opportunity. If Congress continues to block more comprehensive immigration reform, the administration would be well advised to take a careful look at these proposals and include them in another round of executive-led reforms.

Edward Alden

Senior Fellow

Council on Foreign Relations


What education can’t do

In “21st Century Inequality: The Declining Significance of Discrimination” (Issues, Fall 2014), Roland Fryer seems to believe that he has disproved the necessity for “more education for teachers, increased funding, and smaller class size.” These are not solutions, he says, but the conventional wisdom that we have tried for decades without success. He offers as examples of success the charter schools of the Harlem Children’s Zone and his own work in Houston, which involves longer hours in schools and intensive tutoring by low-wage tutors.

I found this a contradictory assertion, because the charter schools of the Harlem Children’s Zone spend substantially more than the neighborhood public schools. One of the features of these two schools is small classes. In addition, they offer wraparound services, including one-on-one tutoring, after-school programs, medical and dental care, and access to social workers. According to a report on the Harlem Children’s Zone in the October 12, 2010, New York Times, “the average class size is under 15, generally with two licensed teachers in every room.” We can only wonder how well the neighborhood public schools would do with similar resources.

Other scholars have questioned Fryer’s contention that school reform can be obtained with minimal additional costs. Bruce Baker of Rutgers University wrote in a January 26, 2012, blog post called “School of Finance” that each of Fryer’s studies “suffers from poorly documented and often ill-conceived comparisons of costs and/or marginal expenditures.” Baker briefly reviewed these studies and concluded: “setting aside the exceptionally poor documentation behind any of the marginal expenditure or cost estimates provided in each and every one of these studies, throughout his various attempts to downplay the importance of financial resources for improving student outcomes, Roland Fryer and colleagues have made a compelling case for spending between 20 and 60% more on public schooling in poor urban contexts, including New York City and Houston, TX.”

I am persuaded that Geoffrey Canada, the CEO of Harlem Children’s Zone, has a good model. It costs far more than our society is willing to pay, except in experimental situations. Children growing up in poverty need medical services, small classes, and extensive support services for themselves and their families. This is not cheap. But it is not enough.

Society has a far larger problem. Why is it that the United States has a larger proportion of children growing up in poverty than any other advanced nation? Why isn’t the federal government planning a massive infrastructure redevelopment program, as Bob Herbert proposes in his brilliant new book, Losing Our Way, which would lift millions of families out of poverty while rebuilding the nation’s crumbling bridges, tunnels, sewer lines, gas lines, levees, and other essential physical assets? Expecting school programs to solve the extensive and deep problem of poverty, without massive federal intervention to create jobs and reduce poverty, is nonsensical.

Diane Ravitch

New York, NY


DOD’s role in energy innovation

Eugene Gholz, a leading scholar of the innovation system within the Department of Defense (DOD), presents a cautionary tale in “Military Innovation and Prospects for Defense-Led Energy Innovation” (Issues, Fall 2014).

When cap-and-trade legislation to impose a price on carbon emissions failed to pass the U.S. Senate in 2010, a 15-year-old assumption about how the United States was going to transition to a lower carbon economy went down with it. Cap and trade had been the almost exclusive policy focus of the climate change community ever since such an approach for acid rain was first passed, and then successfully implemented as part of the Clean Air Act Amendments of 1990. When cap and trade for carbon dioxide failed in the Senate, there was a policy vacuum—no substitute approach was readily at hand or thought-through.

One of the problems with cap and trade was that it was a pricing strategy, not a technology strategy, and it was hard to adopt a pricing strategy without more progress on a technology strategy. Although pricing can sometimes force technology, it assumes a degree of technology readiness that was still missing in a number of key energy innovation sectors. So if the pricing strategy was on political hold, why not pursue a technology-push strategy, which was needed anyway? And why not enlist the DOD innovation system, which, after all, played a critical role in most of the technology revolutions of the 20th century—aviation, nuclear power, space, computing, and the Internet? Unlike the Department of Energy, which can take a technology from research to development and perhaps to prototype and early-stage demonstration, DOD operates at all of the implementation stages, funding research, development, prototype, demonstration, testbed, and often initial market creation and initial production. Why not enlist this connected innovation system in the cause of energy technology?

Gholz points out that the military, particularly in an era of budget cutbacks, will focus only on its system of critical defense priorities vital to warfighters. To ask the military to go outside its mission space, he demonstrates, will produce much friction in the system. It simply won’t work; it’s hard enough for DOD to deliver technology advances for its core missions without taking on external causes. So DOD, for example, is not going to develop carbon capture and sequestration technology—that’s not its problem. And it is not going to develop or support massive energy technology procurement programs.

But, realistically, is there a range of energy technology challenges within its reach? Gholz does a service by pointing toward that track. DOD does face tactical as well as strategic problems because of energy. Two Middle East wars made clear the vulnerability of its massive fuel supply lines and forced it into defending fixed points, jeopardizing its mobility and exposing its forces to relentless losses. The department needs to restore the operational flexibility of its mobile forces, and solar and storage technologies are important in this context. Recent events in the Middle East suggest that the United States will not walk away from this theater anytime soon. For forces laden with the electronics of network-centric warfare, long-lasting, lightweight batteries are critical. These two examples illustrate the roles that DOD can pursue: developing certain critical niche technologies, creating modest initial niche markets, and applying its strong testbed capabilities. And DOD is doing exactly this, filling some important gaps in the energy innovation system.

There is another area where DOD can play a role. As the nation’s largest owner of buildings, it needs to improve the efficiency and cut the cost of its facilities. Its bases are also exposed to the insecurity of the grid, so it has a strong interest in off-grid technologies, including renewables and perhaps even small modular reactors. Where it cannot get off the grid, it has a major interest in grid security and efficiency. All this turns out to be an important menu of operational and facility energy technologies with some important dual-use opportunities. That’s why the Advanced Research Projects Agency-Energy (ARPA-E) and the Office of Energy Efficiency & Renewable Energy (EERE) at the Department of Energy are collaborating with DOD.

Gholz brings us a splash of realism about DOD’s role. But some vital energy opportunities remain if, and only if, they fit the DOD mission.

William B. Bonvillian

Director of Government Relations

Massachusetts Institute of Technology


Inspired design

The work of Arizona State University students on PHX 2050, described by Rider W. Foley, Darren Petrucci, and Arnim Wiek in “Imagining the Future City” (Issues, Fall 2014), is the perfect embodiment of the Albert Einstein quote, “We can’t solve problems by using the same kind of thinking we used when we created them.” Indeed, the article provides provocative thinking, but it is the video cited that offers the real substance. For those who didn’t follow the link and are interested in the urban design aspects of the project, visit http://vimeo.com/88092568.

As a practicing professional in architecture and urban design, I believe that there are some issues that need more discussion as implementation of the project’s concepts is considered. The first is equity. The project does touch on the divide between the haves and have-nots. But as this is already a societal problem, it should not be propagated into the future—especially with technology becoming a segregating device. From an urban design perspective, think about the effects of the High Line in New York City for a moment. Although the park is a terrific amenity for the city, and surrounding real estate prices have increased, the ground-level issues of marginalized and shady streets still persist. The economics of technology will also need to be considered at the varied design scales: rural, suburban, and urban. Infrastructure investment is inevitably easier to justify in urban settings as the population served will be higher. However, is there greater opportunity to also incorporate solutions for retrofitting sprawl rather than adding additional services to an already well-served urban population?

The concept of “placemaking” should be carefully folded into all design details. Walkability has been discussed but should be moved to the “public” street rather than the alley. Having eyes on the street instead of technology in the front yards would increase a perception of safety as well as provide more visual interest for pedestrians and cyclists. However, the use of canopies over rear alleys or mid-block service areas for rainwater collection and solar energy generation should definitely be explored further. The occupants of upper floors would never have to see parked cars, but would there be greater heat island effects due to reflectivity that may hamper green infrastructure?

The last issue to consider, though it should probably be the first, is humanity. Humans will never be tidy machines that all serve the greater good that a fully technological society would need. People strive to be unique, and nowhere is this clearer than in the United States. Our culture of individual property rights is a hurdle to true, full collaboration, especially where public funding alone cannot pick up the tab. As the proposal acknowledges, public-private partnerships will need to be considered in greater depth and for more infrastructure than is currently the case. Creativity can be chaotic and change is difficult, so how do cultures adapt, and what could be the method, beyond education, by which the change happens more quickly?

In summary, I’d like to share a quote from Donna Harris, the entrepreneur who started 1776, a Washington, DC-based incubator: “Our educational system has not historically been set up to teach the kinds of skills that make someone entrepreneurial—in fact, the opposite is true. We learn to follow directions, not to question the directions. But that’s exactly what you have to do if you are taking an entrepreneurial approach. You have to look at things and question them, be confident enough to assume that maybe you might have a better way. But we often punish people who think this way. I think it’s actually one of our biggest challenges as a nation as we think about the future global economy.”

By starting the debates in academia, design thinking can be encouraged throughout society. Just as we start to understand the new economy of reduced public funding, these conversations about systemic change are critical. Please keep up the good work.

Sarah A. Lewis

Associate, Urban Planning, Community Development

Fuss & O’Neill Inc.


Productive retirement

As a faculty colleague of Alan Porter at Georgia Tech, I was interested to read his article, “Retire to Boost Research Productivity” (Issues, Fall 2014), in which he provides an “N=1 Case Study” of how his research productivity has increased significantly since he retired in December 2001.

This case study is presented to address an important issue for research universities: with faculty members 60 years of age or older holding onto their positions, “shielded” by the lack of an age for mandatory retirement, younger people may be “kept off the academic ladder.” Porter uses his own “retirement career” to ask whether there might be “win-win” semi-retirement options that would free up opportunities for the recruitment of young faculty while at the same time enabling senior faculty to remain productive and engaged. His personal case study demonstrates one way to do this, focusing on his research and the enhanced publication rate he has had in his retirement years.

In my own case, I retired in 2010 and I am Institute Professor Emeritus in the School of Mechanical Engineering at Georgia Tech. After being retired for a month, I was appointed to a half-time position with half of my salary coming from my research grants. This of course means that half of my salary is being paid by institutional funds. My research productivity has continued at my pre-retirement level, and there is no doubt that availability of facilities, including office space and a research laboratory, as well as the infrastructure and administrative support provided to me were essential to my continued productivity.

In my “N=1 Case Study,” I have not only continued to be involved in research but have also been engaged and contributed in other ways. These include the mentoring of young faculty, assistance in the preparation of proposals, outreach to the community, and national leadership activities. Whereas in the context of research there are quantifiable outputs such as the number of publications and grant dollars, the value of non-research activities is perhaps not so readily assessed, even though most of us would consider these activities value added.

The basic issue, then, is this: how does an institution create these win-win situations and appointments? Are these truly important to an institution in the 21st century, where there is no mandatory retirement age and where 60 is the new 50, and 80 may be the new 70? How does an institution evaluate the activities of a retired faculty member in attempting to achieve win-win situations? In my own case, a significant amount of my pre-retirement salary has been freed up and could, it is hoped, be used to pay the salary of a young academic; but given my other activities beyond simply doing research, how does an institution evaluate me and justify the use of institutional funds to pay part of my salary? The answers to these questions obviously are important to me personally; however, these are questions that every institution should address.

Robert Nerem

Georgia Institute of Technology


Casting light on fracking

In “Exposing Fracking to Sunlight” (Issues, Fall 2014), Andrew A. Rosenberg, Pallavi Phartiyal, Gretchen Goldman, and Lewis Branscomb note the rapid rise of unconventional oil and gas production in the United States, but not what sparked the innovations needed to develop these previously inaccessible reserves.

In the past decade, while U.S. shale gas production grew 10-fold, conventional natural gas production dropped 37%. Conventionals accounted for 16% of the nation’s natural gas production in 2012; by 2040, that share will shrink to 4%. This won’t be by choice. Conventional reserves are shrinking; in short, we’ve recovered all the easy stuff. Future fossil fuel extraction will take us deeper underground and below the ocean floor, to more remote corners of the globe, and into less permeable formations.

Whereas the “fracking debate” has centered on what’s different about unconventional production, the bigger story may be how little techniques have changed in these new, tougher extraction environments. Despite advances in directional drilling and cement chemistry, as well as impressive developments in other pertinent areas, the basic steps for well construction and production are much as they were decades ago. When applied to unconventional development, these steps demand more energy and industrial inputs. Researchers at Argonne National Laboratory have found that Marcellus shale gas wells require three times more steel, twice as much cement, and up to 47 times more water than a conventional natural gas well. The greater scale and intensity of unconventional development may be the key driver of risk to public health, the environment, and community character.

The authors are exactly right that the way to identify and respond to this risk is through data collection, scientific research, and public disclosure. The question is how to advance this effort. The situation is somewhat more complex than the article implies, and, for several reasons, it may also be more hopeful.

First, the article posits that “concerted actions by industry severely limit regulation and disclosure.” However, this sector is incredibly diverse, comprising hundreds if not thousands of companies ranging from mom-and-pop shops to Fortune 500 companies. The industry can’t even agree on a single trade group to represent its interests. The multiplicity of diverse actors poses a serious governance challenge but also affords an opportunity to find support for risk-based regulation. Companies may find that a greener position on regulation could win them social license, price premiums, or contracts with distribution companies sensitive to consumer environmental concerns.

Second, the article advocates federal regulation of unconventional oil and gas production. Under current law, federal agencies could regulate more aspects and outcomes of this activity. (Despite the exemptions noted, federal authority exists or could be triggered by agency action in each environmental statute listed.) However, in the past five years we’ve seen a more robust regulatory response from states. State agencies house much of the nation’s oil and gas regulatory expertise, and at least in some cases they boast strong sunshine and public participation laws (while sometimes exempting oil and gas).

Federal regulation is not a yes or no question. It can be used to lead, nudge, complement, or supplant state action, depending on the issue and the context. In data collection and research, federal agencies could set harmonized data collection standards, compile and share risk data, and fund research to change how we extract unconventionals and how we reduce our dependence on these fossil fuels.

Kate Konschnik

Director, Environmental Policy Initiative

Harvard Law School


Grand challenge for engineers

The National Academy of Engineering’s Grand Challenges for Engineering posits a list of far-reaching technical problems that, if solved, will have a momentous impact on humanity’s future prosperity. In “The True Grand Challenge for Engineering: Self-Knowledge” (Issues, Fall 2014), Carl Mitcham proposes an additional challenge of educating engineers capable not only of attacking the technical challenges, but also of tackling the questions presupposed by the list: What does a prosperous human future entail? What kind of world should we strive for? What role should the engineer play in achieving such ends?

Mitcham argues that engineers need to learn to think critically about what it means to be human and calls for engineering education to embrace the humanities for their intrinsic value (rather than as a service provider for communications skills). So how grand a challenge is the author’s proposal? I believe there is good reason for pessimism, but also for optimism.

I’m pessimistic when I take a high-level view. Much has been written about the contemporary trend in higher education toward commoditization, with its economically instrumental view of academic programs, and even the specter of institutions outsourcing the humanities to online providers. None of that augurs well for a more reflective education for anyone, much less engineers. As for engineering, radically reformulating engineering education in any overarching way has proved difficult. For example, some years ago, the American Society of Civil Engineers gamely advocated for a master’s degree as the first professional degree, in part to produce “more broadly trained engineers with an education that more closely parallels the liberal arts experience.” The society subsequently softened its stance due to inertia in the system, and a mandated liberal arts-like experience for engineers has certainly not materialized.

Yet, I’m optimistic when I take a grassroots view. Consider this recent Forbes headline: “Millennials Work for Purpose, Not Paycheck.” Seemingly against the instrumental trajectory of higher education, the current college generation appears to place a premium on meaningful work that contributes to the well-being of global society, suggesting a potential market for the type of education Mitcham champions. And if the educational system isn’t responsive to that demand from the top down, perhaps it can be from the bottom up. For example, Mitcham mentions humanitarian engineering programs, which his institution helped pioneer and which are increasingly popping up at schools across the United States, including my own.

Similarly, new programs in sustainable engineering or sustainable development engineering have recently arisen on many campuses. These types of programs didn’t exist just a few years ago. They have developed organically, rather than in response to any broad policy, and they tend to value engineers learning about the human condition. Another recent phenomenon has been the rise of 3-2 dual-degree engineering programs involving liberal arts colleges, with students earning both B.A. and B.S. degrees. Granted, such paths still represent a small slice of the engineering education pie, but I’m hopeful they will grow and spread, perhaps nucleating Mitcham’s desired change from the inside out.

Byron Newberry

Professor of Mechanical Engineering

Baylor University


There is reason to believe that Carl Mitcham’s goal can be achieved. With the adoption by ABET (a nonprofit, nongovernmental organization that accredits college and university programs in the disciplines of applied science, computing, engineering, and engineering technology) of Engineering Criteria 2000, engineers are expected to develop personal and professional responsibility and understand the broader effects of engineering projects, which provides a solid departure point for seeking “self-knowledge.” And although several emerging obstacles may prevent the chasm between the two cultures of the humanities and engineering from being easily bridged, they may also reveal creative opportunities.

The first obstacle is fragmentation of the university. Institutional separation of colleges and departments, necessary for many reasons, is made materially manifest in the creation of science and research parks formed in collaboration with commercial entities. Given the steep decline in public funding, private funding for research may seem like pure good fortune. Yet creation of such parks may introduce physical barriers that can prevent interdisciplinary work and collegiality among faculty and students in engineering and those in the humanities. Moreover, the proprietary nature of much research done in such collaborations is contrary to the goal of democratizing knowledge, an important justification for the public funding universities still receive.

The second obstacle is the exponential growth of technical knowledge that must be mastered to do engineering work. The “Raise the Bar” initiative, supported by the National Society of Professional Engineers and the National Council of Examiners for Engineering and Surveying, has responded to the increased demands on engineers by changing professional licensure to require either a master’s degree or equivalent in the near future. Andrew W. Herrmann, past president of the American Society of Civil Engineers, characterized the changes as similar to what other “learned professions” had done to cope with increasing demands on their members and as a move that would raise the stature of the engineering profession.

Although an initial response may be to assign additional educational requirements to technical courses, more innovative departments should consider repositioning an engineering education to generate as many opportunities as possible for its students to interact with the humanities and social sciences. To do this will require financial support for engineering students who are interested in earning minors (or even second majors) in those areas, perhaps by devoting a small share of the resources dedicated to collaborative private/public research projects to this end. Such support may attract interest from underrepresented groups by showing that engineering education means development of the whole person, not just their technical skills. It would also provide tangible proof to the public that its financial support is more than subsidized job training for favored industries, while also demonstrating to ABET that an engineering department is committed to excellence for all learning outcomes, not just those related to engineering sciences.

Repositioning engineering education should also provide an opportunity for engineering departments to do their part in bridging the two-culture divide by promoting minors in engineering disciplines to humanities and social sciences majors. In a world in which technology is ubiquitous, increasing the quality and quantity of public knowledge about engineering should increase the quality of public discourse on technological projects.

Glen Miller

Department of Philosophy

Texas A&M University


I applaud Carl Mitcham’s call to recognize engineering education as one of the Grand Challenges for engineering in the 21st century. Engineers will continue to play a pivotal role in solving the enormous problems facing the world, but the education at most engineering schools is not preparing their students for the sociotechnical complexity or the global scale of the problems. The narrowness of engineering education has long been recognized, and although a few institutions have made serious efforts to change, engineering education remains narrow. The curriculum provides few opportunities for students to develop substantive nontechnical perspectives; few opportunities to see engineering in the broad social and political context in which it operates and has consequences; and few opportunities to develop the personal attributes and understanding that might lead to more socially responsive and responsible solutions.

Engineers are, in Mitcham’s words, “the unacknowledged legislators of the world” insofar as they create technologies that order and regulate how we live. Of course, engineers are not alone in doing this. The organizations that employ them, regulatory agencies, markets, and media all have a role. If engineers are to play an effective role, they must understand their relationships with these other actors and they must understand the broader context of their work (not just the workplace). In short, they must understand engineering as a sociotechnical enterprise.

Engineering education is appropriately a Grand Challenge because it is not a small or easy problem. A dose of humanities—a few required humanities and social science courses—won’t do the job. In part, this is because many of the humanities and social sciences don’t address the technological character of the world we live in. They may allow students to consider the meaning of life, but without acknowledging the powerful role technology plays in shaping our lives. So the Grand Challenge involves changing humanities and social science education as well as engineering education.

The Grand Challenge has another component that is rarely recognized. Understanding how technology and society are intertwined is not just important for engineers. Non-engineers need to understand how technology regulates everyone’s lives. Thus, part of the challenge of engineering education is to figure out what citizens need to know about technology and engineering. Again, it is not a small or easy problem. Citizens can’t become experts in engineering, so we need to figure out what kinds of information and skills they do need. Most colleges and universities require liberal arts students simply to take a certain number of science courses. This is woefully inadequate to prepare students for living in this science- and technology-dependent world.

In my own experience, bringing insights, theories, and concepts from the field of science, technology, and society studies has been enormously helpful in engaging engineering students in thinking more broadly about the implications of their work and seeing ways to design things that solve broader problems. For example, focusing on how Facebook and Google algorithms determine the information that users see, and the significance of this for democracy, may change the way engineering students think about writing computer code. Similarly, focusing on the politics of decisions about where to site bridges frames engineering as implicitly a sociotechnical enterprise. Notice that this approach might work as well for liberal arts students. Indeed, it might stimulate them to enroll in science and engineering fields.

Deborah G. Johnson

Anne Shirley Carter Olsson Professor of Applied Ethics, Science, Technology, and Society

University of Virginia


Carl Mitcham proposes that because engineering fundamentally transforms the human condition, engineering schools have a duty to educate students who will be able to think reflectively and critically on the transformed world that they will help create. What should students learn and then reflect on as they move through their professional careers? Mitcham refers to the National Academy of Engineering’s Greatest Engineering Achievements of the 20th Century and Grand Challenges for Engineering as being insufficient in how they critically explore the achievements and challenges that have or will transform the world. Perhaps the National Academies should develop a follow-on project, Engineering: Transforming the Human Condition and Civilization.

The project could serve as a source of curriculum across engineering education as well as for other fields and for continuing education. The overarching theme would be not only the triumphs, but also the tragedies in the transformation of civilization from the hunter-gatherer societies symbolized in cave paintings of over 30,000 years ago, to agrarian societies, to industrialization, and now to a techno-info-scientific society.

The challenge is to organize our knowledge so that the big picture—the fantastic story of human civilization; who we are and what we are becoming as beings on this watery planet—is coherent and accessible. One strategy would be to organize the knowledge as the evolution of technological systems and the increasing interactions of such systems. One thread through time is the nexus of food, water, and energy. One can learn how these systems changed over time, including the connections with transportation, materials, and the built environment, for example. From the moldboard plow pulled by horses to plant open-pollinated crops to autonomous self-driving tractors and genetically engineered crops that are robotically harvested, how is one system better than the other—or is it? Then there is the issue of our increasing reliance on space systems for weather and climate information, and perhaps for attempting to engineer the climate in a way we desire.

These systems are not just technical, but sociotechnical, reflecting the interests, values, costs and benefits, winners and losers in the distribution of benefits and costs, the power to influence what happens, and the adjudication in some cases of what systems become realized in the world. It is messy. These are the details that matter and influence the evolution of sociotechnical systems and who we become.

Darryl Farber

Assistant Professor of Science, Technology, and Society

Penn State University


Addressing the Grand Challenge formulated by Carl Mitcham could, if done well, lead to revolutionary changes in the way society innovates. But who will initiate and execute self-reflection among engineers? Within universities, three groups can be identified: the administration, technical faculty, and liberal arts faculty. Change is most effective when it is driven both top-down and bottom-up, which means the involvement of administration and faculty.

But in reality, the administration is often loath to take on this role, in part for financial reasons. Technical faculty are often wrapped up in their research and teaching, and as a result may not pay much attention to the broader impact of their work. That leaves the liberal arts faculty. But since at technical universities these faculty are often seen as providers of service courses, they alone may not have the clout to realize institution-wide change. So again the question: who will be the agent of change?

What is needed is a movement among faculty, students, and, preferably, individuals in the administration. This movement will be most effective when it includes technical faculty who are seen as role models. Inclusion of liberal arts faculty is essential because of their societal insight and critical thinking skills. Because of their complementary expertise, technical faculty and liberal arts faculty may need to educate each other. Faculty organizations, such as a faculty senate, research council, research centers, or individual departments, could play a key role. Other initiatives, such as reading groups, high-profile speakers, and thought-provoking contributions to campus publications, may also contribute.

Funding agencies also have an opportunity to be agents of change. The National Science Foundation (NSF), for example, requires that the students and postdoctoral fellows it funds receive ethics training. Requiring that grant applicants address the Grand Challenge outlined by Mitcham would naturally fit under the Broader Impact criterion used by the NSF.

So members of the campus communities, stand up—and in the words of Gandhi, “be the change you want to see in the world!”

Roel Snieder

W.M. Keck Distinguished Professor of Basic Exploration Science

Colorado School of Mines


I cannot but wholeheartedly subscribe to Carl Mitcham’s wake-up call to all of us, but to engineers in particular, to face the “challenge of thinking about what we are doing as we turn the world into an artifact and the appropriate limitations of this engineering power.” Critical thinking is the pivotal notion of his wake-up call. But what are the tools of critical thinking, and where are engineers to turn for support in developing and applying these tools? Mitcham advises engineers to turn to the humanities.

But are the humanities up to this task? What kinds of tools for critical thinking have they to offer, and are they appropriate for the problems we are facing in our technological age? Take philosophy. In the 20th century, philosophy has developed into a discipline of its own, with philosophers writing mainly for philosophers. There is no shortage of critical thinking going on in philosophy, but is it the kind of critical thinking that engineers need? I have serious doubts, given that reflection on science and technology plays only a marginal role in philosophy.

What is true of philosophy is also true, I fear, for many of the other humanities. Here lies a grand challenge for the humanities: to turn their analytical and critical powers to the single most characteristic feature of the modern human condition, technology, and to engage in a fruitful dialogue with engineers, who play a crucial role in developing this technology. If they face up to this challenge, they may be the appropriate place for engineers to turn for guidance in dealing with their quest for self-knowledge.

Peter Kroes

Professor of Philosophy of Technology

Delft University of Technology

The Netherlands



The Singing and the Silence: Birds in Contemporary Art


Lorna Bieber, Bird/Chest, Silver print, 2000–2001. Artwork and images courtesy of the artist. © Lorna Bieber.

Birds have long been a source of mystery and awe. Today, a growing desire to meaningfully connect with the natural world has fostered a resurgence of popular interest in the winged creatures that surround us. The Singing and the Silence: Birds in Contemporary Art examines humanity’s relationship to birds and the natural world through the eyes of twelve major contemporary U.S. artists, including David Beck, Rachel Berwick, Lorna Bieber, Barbara Bosworth, Joann Brennan, Petah Coyne, Walton Ford, Paula McCartney, James Prosek, Laurel Roth Hope, Fred Tomaselli, and Tom Uttech.

The exhibition, on view at the Smithsonian American Art Museum, Washington, D.C., from October 31, 2014, through February 22, 2015, coincides with two significant environmental anniversaries—the extinction of the passenger pigeon in 1914 and the establishment of the Wilderness Act in 1964—events that highlight mankind’s journey from conquest of the land to its conservation. Although human activity has affected many species, birds in particular embody these competing impulses. Inspired by the confluence of these events, the exhibition explores how artists working today use avian imagery to meaningfully connect with the natural world, among other themes.

Whereas artists have historically created images of birds for the purposes of scientific inquiry, taxonomy, or spiritual symbolism, the artists featured in The Singing and the Silence instead share a common interest in birds as allegories for our own earthbound existence. The 46 artworks on display consider themes such as contemporary culture’s evolving relationship with the natural world, the steady rise in environmental consciousness, and the rituals of birding. The exhibition’s title is drawn from the poem “The Bird at Dawn” by Harold Monro.

The exhibition is organized by Joanna Marsh, the James Dicke Curator of Contemporary Art.

—Adapted from the exhibit website



Fred Tomaselli, Migrant Fruit Thugs, Leaves, photo collage, gouache, acrylic and resin on wood panel, 2006. Image courtesy of Glenstone. © Fred Tomaselli.


Joann Brennan, Peregrine Falcon. Denver Museum of Nature and Science, Zoology Department (over 900 specimens in the collection), Denver, Colorado, Chromogenic print, 2006. Artwork and image courtesy of the artist, Denver, Colorado. © 2006, Joann Brennan.




Walton Ford, Falling Bough, Watercolor, gouache, pencil and ink on paper, 2002. Image courtesy of the artist and Paul Kasmin Gallery.


University Proof of Concept Centers: Empowering Faculty to Capitalize on Their Research

In March 2011, President Barack Obama announced the creation of a Proof of Concept Center (PoCC) program as part of the i6 Green Challenge to promote clean energy innovation and economic growth, an integral piece of his Startup America initiative. Managed through the Economic Development Administration (EDA), the program encouraged the creation of PoCCs aimed at accelerating the development of green technologies to increase the nation’s competitiveness and hasten its economic recovery. In September 2011, EDA awarded $12 million to six university-affiliated organizations in response to the Challenge competition; and in 2012, EDA awarded $1 million to each of seven new PoCCs. The 2014 solicitation broadened the i6 Challenge to include awards up to $500,000 for growing existing centers or developing commercialization centers to focus on later-stage research. The program raises an important question: What’s a PoCC and how is it different from other efforts to stimulate innovation?

PoCCs are designed to help address the particularly troublesome gap between the invention of a specific technology and its further development into new products or applications. The problem is that in most cases neither the faculty researcher who makes a discovery nor the university itself has the information needed to understand its value to outsiders or the contacts and incentives necessary to develop it. In the jargon of economics, there are informational, motivational, and institutional asymmetries.

Public funding of PoCCs represents a new approach to technology development. Whereas the Small Business Innovation Research and Small Business Technology Transfer programs administered through the Small Business Administration provide support to small organizations to develop focused research with a goal of commercialization, PoCCs support university faculty and students who typically lack the networks and experience necessary to understand more fundamental aspects of technology development and entrepreneurship.

Many people became aware of PoCCs only with the current federal initiative, but the first PoCCs were established more than 10 years ago and were part of a broader trend emphasizing the development, transfer, and commercialization of university technologies.

For years, university reputations hinged on the capability of faculty to obtain sponsored grants (typically from the federal government), conduct research, and publish results that contribute to the broader body of knowledge. This process, however, can also yield new inventions or discoveries that may be useful for social or economic purposes beyond fundamental science. Aside from a few universities, such as the University of Wisconsin at Madison (technology transfer office founded in 1925) and the Massachusetts Institute of Technology (technology transfer office founded in 1950), prior to the 1980s, research institutions either ignored these discoveries or did not have the means to explore their value, much less to develop them into promising new technologies or companies.

This environment began to change in the late 1970s as the United States confronted a severe downturn in industrial productivity, accompanied by bankruptcies, layoffs, and plummeting world market shares for U.S. firms. U.S. economists and policymakers concurrently observed the stunning success of the Japanese keiretsu: an industrial alliance through which large manufacturers, suppliers, and public institutions collaboratively developed and produced high-quality products for export. National leaders in the United States consequently sought to improve federal policies relating to industrial performance by scaling back burdensome federal regulations, removing barriers to industrial collaboration and improving the return on investment for federally-funded university research.

First steps

Policymakers were specifically concerned that valuable technologies were either sitting on the shelf within universities or mired in red tape within federal mission agencies. Senator Birch Bayh (D-IN) was especially interested in ways to disseminate and accelerate the development of new biomedical technologies derived from federally-funded research. Bayh wrote legislation to stimulate innovation, which he promoted in a letter to his Senate colleagues: “Many people have been condemned to needless suffering because of the refusal of agencies to allow universities and small businesses sufficient rights to bring new drugs and medical instrumentation to the marketplace. The exact magnitude of this situation is unknown, but we are certain that the cases we have uncovered to date are but a small sample of the total damage that has been done and will continue to be done if Congress does not act.”

Bayh joined Senator Robert Dole (R-KS) to propose and pass the University and Small Business Patent Procurement Act of 1980 to improve the introduction of new, university-developed technology into the private sector. Referred to by the Economist as possibly “the most inspired piece of legislation to be enacted in America over the past half-century,” the so-called Bayh-Dole Act did this by aligning technology transfer policies among mission agencies to give universities title to intellectual property stemming from federally-funded research and development.

The immediate impact fell short of expectations. Not only were universities slow to understand the implications of Bayh-Dole, but with some exceptions, high-technology companies rarely viewed universities as a source for useful technologies. This perception on the part of industry began to change, however, with the emergence of a few highly-publicized licensing deals, such as the wildly lucrative Axel patents, the first of which was assigned to Columbia University in 1983. These patents sought to protect a method developed by Richard Axel for introducing foreign DNA into cells. Not only did the patents earn Columbia nearly $790 million in licensing revenue, much of this windfall was put back into Axel’s research, eventually leading to a Nobel Prize in 2004.

Gradually, understanding the financial potential of technology licensing and commercialization, an increasing number of universities responded by establishing technology transfer offices to manage the legislatively mandated invention disclosure process and determine whether to file for intellectual property protection. In fact, between 1980 and 2013, nearly 150 new technology transfer offices were established at U.S. universities. Further, universities and regions created their own attendant commercialization infrastructure, including science parks, entrepreneurs-in-residence, and early-stage seed funds, to encourage and support technology transfer and commercialization outcomes.

By all accounts, Bayh-Dole has been a resounding success. University disclosures, licensing deals, and spinoff companies—blunt but commonly used metrics of technology transfer activity—have grown consistently over the past 25 years. Well-known technology companies, such as Lycos, Yahoo, Amgen, and Google, can trace their lineage back to university research. And each year, the Association for University Technology Managers (AUTM) publishes a list of the most important technologies licensed from universities that year.

But there are concerns, too. Although growth in the number of disclosures, licenses, and spinoffs has continued apace, our analyses of technology transfer outputs find that little relative improvement has been made in the proportion of university disclosures that become licensed technologies, an outcome one might not expect given the aforementioned investment in infrastructure.

The extant research points to three possible explanations. First, the technology transfer metrics collected by AUTM do not necessarily provide a clear picture of the impact of technology transfer. For example, government and university leaders often cite the number of new spinoff companies established from universities—data collected by AUTM—as evidence of economic development. However, these figures give us little indication as to the growth, survival, and economic impact of spinoffs. Recent research finds, in fact, that many university spinoffs generate little economic activity and produce no tangible outcomes.

Second, the technology transfer infrastructure may not be what is most needed to accelerate commercialization and entrepreneurship. Twenty years of empirical research shows that the success of incubators, science parks, and early-stage capital funds is mixed. Of course, the efficacy of these services depends critically on how they are implemented, where, and by whom. At worst, recent research shows that services administered by some universities can have a detrimental impact on post-spinoff technology commercialization.

Finally, our own research and reviews of the extant empirical literature find that one of the most important factors affecting technology commercialization may be the most overlooked: the background, behaviors, and networks of individual university researchers. Faculty researchers typically have little experience or training in technology development or entrepreneurship. University researchers are trained by other university researchers and develop professional networks of individuals with training, experiences, and goals similar to their own. The downside is that they become locked into social networks that lack representation from other professions and groups. Thus, when faculty members discover a new technology, not only do they not have the background to understand and develop its potential utility, they also do not have a network of individuals with the financial, entrepreneurial, or technical background to help them do so.

Conversely, studies show that researchers who have experience working in or consulting to industry have a better track record at commercializing new technologies and are more likely to establish a spinoff company. Technology commercialization is a team endeavor, and the experience of working with industry or previous attempts to spin off a company introduce otherwise sheltered academic researchers to a new world of technologists, professional managers, funders, accountants, attorneys, and regulators who can provide useful knowledge, services, and resources important for technology commercialization and entrepreneurship. Without understanding the extant realities of academic culture, including the professional motivations, backgrounds, and training of individual researchers, almost any technology development infrastructure is sure to fail.

PoCCs

Whereas early technology development infrastructure efforts focused on creating physical spaces, such as incubators and science parks, for technology development activity, PoCCs focus further upstream on the individual university researcher. As mentioned, PoCCs are a collection of services, tools, and resources designed to enable individual university researchers to bridge the gap between discovery and further technology development.

The first PoCCs included the Von Liebig Center at the University of California at San Diego and the Deshpande Center at the Massachusetts Institute of Technology, founded in 2001 and 2002, respectively. Both centers were established with the help of entrepreneur-philanthropists who believed that what was really missing at these universities was a way not only to support already-entrepreneurial faculty, but also to accelerate the cultural transformation of these institutions.

In 2008, David Audretsch and Christine Gulbranson, both affiliated at that time with the Ewing Marion Kauffman Foundation, published a widely discussed article introducing the first PoCCs as institutions “devoted towards facilitating the spillover and commercialization of university research.” They found that the two PoCCs provided faculty with entrepreneurship classes, modest seed grants, and—perhaps most important—coaches with experience developing technologies and establishing companies. Although both efforts were relatively modest, their strength lay in creating relationships between well-respected scientific institutions and robust entrepreneurial communities within the surrounding regions. In other words, the creation of productive relationships is valued more than a specific outcome metric.

As of the end of 2012, at least 30 additional PoCCs had been established. To our surprise, this inventory reveals that PoCCs offer a range of services and focus areas almost as varied as the centers themselves. Some centers provide financial capital, others provide human capital, and still others simply network the relevant actors.

For example, the Boston University-Fraunhofer Alliance for Medical Devices, Instrumentation and Diagnostics, founded in 2007, partners Boston University researchers with Fraunhofer Institute engineers to accelerate the development of medical innovations. The QED Proof of Concept Program, founded in 2009 and housed at the University City Science Center in Philadelphia, provides seed money to help promising technologies bridge the so-called valley of death. And the University of Southern California Stevens Institute for Innovation, founded in 2007, networks student innovators and faculty with external startup mentors and funding sources. The Maryland Proof of Concept Alliance, founded in 2010 at the University of Maryland, has a similar mission.

Perhaps even more interesting is the recent state-wide PoCC initiative by the New York State Energy Research and Development Authority (NYSERDA). NYSERDA funds technology development and entrepreneurship from a small “tax” on electricity use—a so-called system benefits charge added to each individual power bill in the State of New York. In 2013, NYSERDA made awards to three different applicants: Columbia University, New York University (NYU), and a consortium of schools and groups around Rochester. NYSERDA created the PoCC program as part of a larger technology-development strategy in the clean-energy field that includes early-stage gap funding, incubators, and a state-wide entrepreneur-in-residence program, among other services. The NYSERDA PoCC program seeks to tie these disparate programs together in order to accelerate the commercialization of university technologies. Interestingly, once the awards were made, Columbia and NYU joined together to form a Power Bridge consortium to focus on building additional scale in New York City.

Other states also perceive the benefits of PoCCs. For example, Colorado Governor John Hickenlooper recently supported his state’s General Assembly in the passage of the Advanced Industries Accelerator Act to promote entrepreneurship and technology commercialization in advanced industries through proof-of-concept research.

Although our understanding of the effectiveness and structure of PoCCs is in its infancy, there are some indicators that such infrastructure might be an invaluable investment for universities and their stakeholders. Among the 32 active university-affiliated PoCCs we identified, there is systematic evidence that startup activity increased after a university became affiliated with a PoCC. We discussed previously the challenges of using blunt metrics to gauge technology transfer success, yet startups at least serve as a proxy for the diffusion of innovations important to regional growth and development.

Our intent here is not to suggest that PoCCs are a panacea for broader institutional and regional technology commercialization and entrepreneurship strategies, but rather to say that the recent flurry of policy interest and activity is a convincing call for further systematic investigation of the structure and economic impact of these centers.

Christopher S. Hayter ([email protected]) is an assistant professor at the Center for Organization Research and Design located within the School of Public Affairs at Arizona State University. Albert N. Link ([email protected]) is a professor of economics at the University of North Carolina at Greensboro.

Daring to Lead: Bringing Full Diversity to Academic Science and Engineering

Academic science and engineering education in the United States has become more open and diverse, but science policy officials and higher education leaders should not spend too much time on self-congratulation. The chief result of diversity and inclusiveness efforts, surely a vitally important one, is that there are now substantially more middle-class white women, Asians, and the foreign born among the scientific ranks. However, progress for members of other underrepresented minority groups, including blacks, Hispanics, and Native Americans, remains modest. Furthermore, the outcomes for recruitment of poor and working-class people into science and engineering can best be described as pitiable.

For people who fall in both camps—underrepresented minorities who are also poor, working poor, or working class—the picture is bleaker still. Importantly, these “dually disadvantaged” collectively comprise the largest group left out of the expanding roster of people working in or training for careers in science, technology, engineering, and mathematics—the STEM fields. Although confronted with many of the same historical, social, and cultural barriers faced by more successful groups, such as women, the dually disadvantaged often face obstacles not common among more affluent persons, including more affluent minorities. These barriers have been widely documented and include poorer-quality schools, a lack of educational role models, family and cultural traditions that often do not emphasize educational attainment, and the lack of financial wherewithal to obtain a quality higher education. Given that women’s progress in the sciences and engineering has come only after the expenditure of much time, political effort, and financial resources, the mind boggles when contemplating the steps required to afford equal career opportunities to the nation’s poor children of any race or ethnicity. Considering that income inequality in the United States continues to grow, as does the percentage of the population belonging to a minority group, one must ponder whether the nation is up to the challenge.

Indeed, we are not confident of the prospects for the dually disadvantaged. We are convinced that the continued success of U.S. academic science and engineering depends on the nation’s ability to make strides rapidly in remedying their exclusion. Developing and using the scientific and technical potential of the more than 50% of U.S. citizens under age 25 who are poor, working poor, and working class will take vastly greater resources and commitment than has thus far been devoted to “diversity.” As our overview of the structural barriers operating in U.S. higher education suggests, the journey will prove long and arduous. We think there are some steps that policymakers and educators can take to set the nation on a promising path.

Overcoming structural barriers

During the first half of the 20th century, the principle of numerus clausus—in essence, quotas—governed admission policies at the nation’s universities. Initially, such policies were motivated by the desire to limit the number of Jewish men admitted. Later quota policies targeted women, blacks, and members of other minority groups.

By the 1960s, however, things had begun to change in higher education through court intervention, though sometimes preceded by social violence. In some cases, elimination of structural barriers came quickly. By the time the Civil Rights Act of 1964 was passed, granting enforceable federal civil rights to people irrespective of “race, color, religion, sex or national origin,” all of the flagship institutions of the South had officially desegregated. Similarly, the last university to remove its religious quota was Yale, in 1966. Further, the Immigration and Nationality Act of 1965 eliminated restrictions on non-European immigration to the United States, establishing opportunities for immigrants from Asia, Africa, and Latin America—with explicit preferences for the highly educated and skilled.

By contrast, throughout the 1960s, universities continued to limit or refuse women admission and academic employment. It was not until 1972 that federal law specified through Title IX of the Education Amendments that publicly supported universities had to admit women. Still, Columbia University admitted its first women undergraduates only in 1983, and it was 1996 when the last state-supported military academy admitted a female cadet. In brief, it took decades for regulations and implementation to work their way through the entirety of the science system. Overall, major institutions of higher education had by the late 1990s eliminated policies restricting or severely limiting the enrollment of women and members of racial and ethnic minority groups.

[Figure 1]

With the doors opened, people came. Jewish scientists were fully incorporated into the science system, and Jews now constitute a greater percentage of the professoriate than their population representation of 2% would predict. Similarly, Asians—particularly those foreign born—represent a percentage of the professoriate much greater than their representation in the general population. The foreign born constitute 30% of doctoral-trained scientists and engineers in the United States. Women (at least white women) have been incorporated into the scientific system at a steadily improving rate since the 1970s, though they still remain underrepresented in most fields. This is partly due to some continuing structural barriers. For example, many universities lack clear policies related to maternity leave, which disadvantages women professors relative to their male colleagues. The steady upward trends in representation for women, Asians, and the foreign born among full-time faculty in STEM fields at U.S. universities are illustrated in Figure 1. The statistics also reveal, however, the continuing problems among members of underrepresented minority groups (URM). The decision by the National Science Foundation to stop disaggregating data for Hispanics, Native Americans, and blacks, and to combine them into one URM category, does not change the trend.

[Figure 2]

Lest critics argue that recent interventions are solving the problem by attracting more members of underrepresented groups into the educational pipeline supplying STEM fields and keeping them engaged, we offer Figure 2. Note that women and temporary residents have rapidly increased in the proportion of STEM doctoral recipients. Asians who are citizens or permanent residents are represented roughly proportional to their representation in the U.S. population. By contrast, the representation of blacks, Hispanics, and Native Americans has remained flat, despite the fact that by 2012, black people comprised 14% and Hispanics comprised 21% of all citizens aged 18-24.

The missing construct: class

The success of members of religious minority groups, the triumph of the foreign born, and the incomplete but impressive integration of women into the academic science system naturally raise this question: Why have similar rates of change not been observed among members of domestic racial and ethnic minority groups despite decades of Affirmative Action and special programs to increase racial and ethnic diversity in STEM? The answer, we argue, may relate to class dynamics. Compared with Europeans, the people of the United States have always been resistant to notions of class—and especially to the idea that class represents a structural barrier to success. The mythos is that any person can succeed if only he or she takes advantage of the abundance of opportunity in the United States and works hard. Our analysis suggests that this may need reconsideration.

Simply put, the reason is that race, ethnicity, and socioeconomic status are historically so strongly linked in the United States that significant progress in representation of members of minority groups in STEM education and careers cannot be made until structural socioeconomic barriers are addressed. This is not to suggest, of course, that there were not Jewish immigrants who were also poor. Rather, it is to say that once the religiously based policy barriers were eliminated, there were a great number of Jews who were already educationally, socially, culturally, and economically poised to take advantage of their new educational and occupational opportunities. Similarly, as structural barriers to women’s participation were dismantled, a large percentage of women were socially, culturally, and economically poised to take advantage of their new occupational opportunities. The same is true for recent immigrants and their children, who have been disproportionately selected by federal policy from the most educationally and economically advantaged of their sending nations.

By contrast, members of racial and ethnic minority groups continue to live at high levels of socioeconomic and educational disadvantage. The calculation of the poverty line has remained the same since 1963, and it is low; in 2013, a family of four with an annual household income of less than $23,550 lived below the poverty line. Consider Figure 3, which depicts child poverty rates in the United States by race and ethnicity since 1959. In the first year, 27% of children lived in poverty. The Census Bureau began to disaggregate black children in 1965, when black child poverty stood at 66%, dropping to 51% in 1966 and to 47% in 1967. Note the drop in “all child” poverty over the same short period—the result of two major structural reforms in the United States: civil rights legislation and the War on Poverty. By 1974, the Census Bureau was disaggregating children by white, black, and Hispanic. Note that black child poverty remained above 40% until 1996, while Hispanic (any race) poverty remained above 27% throughout the period. By contrast, fewer than 14% of non-Hispanic white children lived below the poverty line for the entire period, with the total often dropping below 10%. The Census Bureau began tracking Asian children in 1987. The poverty rate among this group was first measured at 23%, dropping to 9% in 2013, below white child poverty.

To summarize the situation in 2013, one in five children lived below the poverty line, but this level of deprivation varied by race and ethnicity. Fewer than 10% of white and Asian children lived below the poverty level. By contrast, 38% of black children and 30% of Hispanic children lived in poverty. To translate these percentages into numbers, there were 3.9 million poor white children in 2013, 4.2 million poor black children, 5.3 million poor Hispanic (any race) children, and half a million poor Asian children—for a total of over 14 million children living below the poverty line. Tens of millions more children were near poor, living below 200% of the poverty level.

Why does poverty in childhood matter? The research on the effects of child poverty is compelling. Poor children are more likely than their non-poor counterparts to be malnourished. They are less likely to live in a house that contains a book, less likely to have a parent who reads to them, and have significantly smaller vocabularies. They are more likely to live in substandard housing in dangerous neighborhoods. They are more likely to attend failing schools, where most of their peers are also poor. They are less likely to graduate from high school, to matriculate to college, or to complete a college degree. How, exactly, is the nation supposed to produce scientists from hungry children who cannot read when they get to school, and who then attend a failing school with other hungry children living in dangerous places?

The key socioeconomic barrier to equal high-quality education lies in the system of school financing that advantages the affluent (who are disproportionately white and well educated, a result of past structural advantage) while disadvantaging the poor (who are disproportionately members of ethnic and racial minority groups). Hence, a cycle of racially structured socioeconomic disadvantage repeats itself even through decades of Affirmative Action and special targeted programming for members of underrepresented groups. The bottom line is that students need to be academically engaged and prepared at the elementary and secondary levels to enjoy any hope of success in STEM education. Unfortunately, the nation’s collective willingness to deliver such education to all of its children is demonstrably lacking.

The primary failure of higher education policy is not a lack of integration at the university level. Rather, it is the failure of policymakers to take any kind of leadership role in insisting that the farm teams of higher education—the preschools, kindergartens, and elementary and secondary schools—perform at a high level, and that all of the nation’s “human capital” is adequately housed, fed, and safe. If other countries are out-performing the United States in their children’s scientific preparation and accomplishments, perhaps it is because they are also vanquishing child poverty and its attendant ills. Consider the child poverty levels—defined as living below 50% of median household income of two-parent families—recorded by the Luxembourg Income Study: Canada (10.6%), France (6.8%), the Netherlands (2.9%), the United Kingdom (11.6%), and the United States (15.2%). These are all countries of the Organization for Economic Co-operation and Development (OECD), and all are societies characterized by high levels of ethnic, racial, and cultural diversity, making the comparisons of childhood poverty levels especially telling.

Beginning a list of fixes

The question, then, is what these various observations should mean for rethinking public policy and improving STEM education in the United States. As a starting point, it may be useful to consider how policy and rhetoric for STEM human resources are currently framed.

The leitmotif is a narrative of impending, but never quite realized, dire shortages of scientists and engineers in this or that field deemed crucial to the national interest and economic competitiveness. Less often, but importantly, “what about us?” themes arise, suggesting that the problem is not only one of fragile supply dynamics but also one of inclusion and diversity, typically defined in terms of the inability of well-qualified minorities to thrive. Recently, and appropriately, issues related to immigrant workforce contributions have received increasing scrutiny. Does the impact of class and its relationship to STEM opportunity come up? Rarely.

It is easy to see why people might choose to ignore the problems of the dually disadvantaged. First, and most unkindly, it is not about “people like us”—political, business, and higher education leaders. Members of these groups (and we are among them) may in theory feel quite beneficent toward the dually disadvantaged, but we typically do not choose to live near them, send our children to school with them, or work with them. For the most part, the dually disadvantaged do not populate even the first-year classes of universities. Universities tend to increase their “quality” by being more exclusive, moving up in the rankings as they enroll a higher percentage of the best and the brightest, defined in terms of higher standardized test scores, higher grades, more AP classes, more impressive active-learning internships, and, in general, accomplishments based largely on assets not often available to the dually disadvantaged.

A second reason the scarcity of the dually disadvantaged in the STEM pipeline has been largely ignored is that the problem overwhelms. Is it reasonable to address a middle-range social problem when it clearly has its roots in much more fundamental social and political issues? When the nation faces rampant income inequality, failing schools, and the incarceration of large swaths of young dually disadvantaged males, why give a moment’s thought to the (relatively) unimposing fact that the chances of the dually disadvantaged arriving at a STEM faculty position are roughly equivalent to their chances of being struck by lightning?

Here is one rationale. In the past, science and technology policy has succeeded in leading, rather than simply biding time until the planets travel into perfect alignment. In problems as diverse as national defense and security, energy shortages, and public health crises, STEM leaders developed and executed strategies, often working from the bottom up rather than standing on the sidelines waiting for political leadership. One relatively recent example: in the early 1980s, the science, technology, and higher education establishment convinced policymakers of its crucial role in national economic progress, rationalizing continued high levels of federal funding. STEM leaders were the ones who helped frame the issues and fashion the policy agenda.

What if STEM leaders proved equally bold in addressing a new “competitiveness crisis,” the crisis of inequalities of income, opportunity, and education? How might one begin? One might start from two directions: first, by taking actions to increase the number of dually disadvantaged students who not only enter but succeed in US universities; and second, by increasing at the same time the representation of the dually disadvantaged on science and engineering faculties. The bottom and top prescriptions are interdependent. Society has heard for years from the education establishment that “we would like to recruit more (Hispanics, blacks, women), but there are so few available.” Of course there are few available: they either never entered the STEM educational pipeline or dropped out of it. They have taken their human capital to other places in the workforce, or, too often, to more sordid destinations, such as dead-end jobs with no living wage, public assistance for the jobless, or prison.

It will not be enough simply to admit the dually disadvantaged to academe, if institutions then stand by as students with low-quality educational preparation struggle or fail. Universities, including STEM leaders, have a role to play by offering compensatory education to dually disadvantaged students, and then ultimately by employing them. What if the thousands of STEM professors who have discretionary money to engage undergraduates in STEM research vastly increased their commitment to thinking of dually disadvantaged students as potentially the best and the brightest waiting to happen, but who require a little more nurturing and patience? What if the leadership of every major university contemplated the proposition that “diversity is not enough” and measured student progress by social mobility and social transformation? What if, once again, STEM leaders led rather than followed?

We suggest that after addressing a string of such “what ifs,” resolutely and for many years, not only would the STEM pipeline have higher quality flowing through it, but leaders would again have reason to take pride in academia’s prodigious effects on the well-being of the United States and its citizens.

Monica Gaughan is associate professor in the School of Human Evolution and Social Change and Barry Bozeman is Arizona Centennial Professor of Public Management and Technology Policy and director of the Center of Organizational Research and Design at Arizona State University.

Closing the Energy-Demonstration Gap

A regional approach to demonstrating the commercial potential of major new energy technologies would open up new opportunities for accelerating innovation.

The high costs and risks of demonstrating new clean energy technologies at commercial scale are major obstacles in the transition to a low-carbon energy economy. To overcome this barrier, we propose a new, decentralized strategy for energy technology scale-up, demonstration, and early adoption, with a greater role for states and regions and a new kind of partnership between the federal government, the states, and private innovators and investors.

The challenges of scaling up new technologies are well known. What works in the laboratory or in a small-scale prototype often doesn’t work nearly as well at full commercial scale, at least initially. Building, operating, and debugging full-scale prototypes invariably reveals new problems that must be solved. Moreover, new technologies are hardly ever deployed in isolation. More often, they must be incorporated into a pre-existing technological and organizational system, and the task of integration is often very demanding. Often, too, complementary technologies such as new manufacturing processes and logistical systems must be developed and scaled up in parallel.

Before a new technology can be commercialized, all of these new elements must be demonstrated in as close to a market setting as possible. The primary objective of a demonstration project is to provide technology developers, investors, and users with information about the costs, reliability, and safety of the new technology in circumstances that approximate actual conditions of use. A successful demonstration resolves technological, regulatory, and business risks to levels that would allow the first few commercial projects to proceed with private investment. In fact, more than one such project may be required, and it is probably more accurate to think in terms of a demonstration “phase,” rather than a single demonstration project.

For many new energy technologies, the challenges of scale-up and demonstration are compounded by the large scale of the projects. For technologies like advanced nuclear reactors and carbon capture and sequestration systems, investments of a billion dollars or more may be required, even for a single demonstration project. The cost of demonstrating new manufacturing processes for biofuels or for distributed energy technologies such as photovoltaic modules may also be in this range. The sheer size of such projects is a deterrent to private investors, and this is exacerbated by the uncertainties involved: not only about the technical and economic performance of the technologies themselves, but also about the new environmental, health, and safety standards and regulations that typically must be developed in parallel, as well as the future market price of competing fuels and the future regulatory price on carbon emissions. Conventional private financing approaches are poorly suited to this task: venture equity funds are structured to finance high-risk technology development activities but not major, billion-dollar-scale projects, while more traditional project finance investors are well structured to finance assets of this size, but not to take on technology scale-up risk.

In the past, these activities have been financed and sometimes also implemented by the federal government. But the federal role in energy technology demonstrations has had a checkered history. Projects have frequently suffered from administrative and technological failures and have been dogged by political controversies. Today there is no agreed framework for federal involvement in these activities. This is one of the most serious gaps in the current U.S. energy innovation system—especially for large-scale technologies where public risk and cost-sharing at the demonstration stage is unavoidable.

The gap has grown wider because of the current political gridlock on Capitol Hill, which has affected many energy and climate policy initiatives, including proposals to create special federal funds and institutions for energy demonstrations. But in this case the political polarization in Washington may have opened up a new and unrecognized opportunity to solve many of the problems associated with the “demonstration gap.” The approach outlined here would entail the creation of a new, regionally-based funding mechanism to reduce costs and risks and increase the volume of private financing for energy technology demonstrations. It would specifically target projects designed to demonstrate the performance of potentially transformative energy technologies at commercial scale, including nuclear, renewable, carbon capture, and grid upgrading technologies. The public funds would be drawn primarily from state-level electric power system public benefit charges or from state and regional carbon mitigation programs. The state funds would be augmented by supplementary federal grants to incentivize the creation of regional funding pools and partnerships. The regionally aggregated funds would be managed by new Regional Innovation Demonstration Funds (RIDFs), staffed by experienced professional technology and project investors.

We envision a mechanism that would ramp up over time as individual regions opted in, and that could eventually channel more than $10 billion of public and private funds annually into demonstration projects. Our approach would create new opportunities for regional differences in energy innovation needs and preferences to be expressed at the demonstration project selection stage, and it would give states a direct stake in innovation outcomes. Even in a period of flat federal budget expectations and continuing political divisions over climate change, it would generate a steady, predictable stream of funding for what has been a chronically underfinanced stage of the energy innovation system.

A checkered history

The federal government’s role in energy technology demonstrations has a long history. An early example was the Atomic Energy Commission’s promotion of light water reactor and other nuclear power reactor demonstration projects in the 1950s and 1960s. Less successful were subsequent efforts to demonstrate liquid metal fast breeder reactor technology and a range of synthetic fuels technologies in the 1970s and 1980s. A prominent recent example is the on-again, off-again FutureGen project to demonstrate carbon capture and sequestration. (The FutureGen project was cancelled in 2008 but was subsequently reinstated with the help of funds from the American Recovery and Reinvestment Act.) Federal loan guarantees have also been applied to energy demonstration projects over the past decade.

Taking the measure of this history is not easy. The troubled projects and programs have cast long shadows over the decades. But an unsuccessful demonstration is not, by itself, an indicator of program failure. One of the main purposes of these projects is to reveal unanticipated obstacles in bringing technologies to commercial scale. The expectation that they should always succeed is misplaced. The bankruptcy of the solar module manufacturer Solyndra in 2011 became a lightning rod for criticism of the federal loan guarantee program, which backed the company and sought to support the demonstration phase of its technology. But the fact of Solyndra’s failure is not a sufficient basis for judging the program’s overall effectiveness. In the last few years, the program has made 33 loan guarantees for approximately $22 billion, covering a wide range of technologies. During this period just three borrowers have defaulted, affecting about 4% of the total loan guarantee value. Indeed, it is quite possible that the program has been too risk-averse to adequately support technology demonstrations, rather than too cavalier in its selections.

On the other hand, assessments of prior Department of Energy (DOE) energy demonstration projects have identified a series of chronic problems, including:

  • a systematic tendency on the part of agency officials to underestimate project costs (perhaps as a requirement to generate political support);
  • a failure to plan for the possibility of future variability in fuel prices (e.g., oil price declines in the case of the synfuels program, and uranium price declines in the case of the Clinch River Breeder Reactor);
  • political interference in technology selection and facility siting decisions and personnel appointments, and Congressional pressures limiting the ability of officials to adjust or terminate projects after conditions have changed;
  • political cycles in Congress and the Executive Branch and the resulting lack of constancy in policy and funding over the life of the projects;
  • funding and management uncertainties generated by the annual budgeting and appropriations process;
  • inefficient business practices mandated by restrictive federal procurement regulations and bureaucratic rules governing human resource management, auditing requirements, and the use of federal facilities;
  • the lack of a clear institutional mission at the DOE and a culture that has focused more on scientific achievement than the commercial and industrial viability of new technologies.

According to one group of knowledgeable observers, “the underlying fundamental difficulty is that the DOE, and other government agencies, are not equipped with personnel or authorities that permit the agency to pursue first-of-a-kind projects in a manner that convincingly demonstrates the economic prospects of a new technology.”

Other federal agencies have done better. The Department of Defense (DOD), and within it, the Defense Advanced Research Projects Agency (DARPA), have had considerable success in demonstrating advanced military technologies that have subsequently been deployed in the field. An important reason for their success is that these demonstration projects have had identifiable clients within DOD itself—high-ranking career officers in the armed services with well-defined military missions and strong motivations to get new weapons systems into the field. The DOD demonstration teams have in turn been strongly motivated to satisfy these clients. The DOE-led demonstration projects have frequently struggled with the need to satisfy political appointees and elected officials; alignment with the actual customers, typically in industry and motivated by market and business considerations, has been weaker.

Post–demonstration energy subsidy programs have also been problematic. Rather than stimulating innovators to bring down the cost of new technologies as quickly as possible, they have sometimes had the opposite effect. Open-ended government subsidies have rewarded firms not for innovating but simply for producing regardless of cost, and the government has often been unable to ratchet down the subsidies in order to drive cost reductions, much less shut down projects and programs in a timely fashion when they have clearly failed to produce the expected results. Probably the most notorious example is the federal tax credit for corn ethanol, finally repealed in 2012 more than three decades after it was first introduced.

Good ideas going nowhere

Several proposals have been advanced to address these problems, though none is being actively pursued today. One would create a new federal financing entity, the Clean Energy Deployment Administration (CEDA), that would give high-risk energy demonstration projects and deployment programs access to various forms of financing, including loans and loan guarantees. CEDA would be a semi-independent unit within DOE.

Another proposal would go further, creating a “Green Bank” as an independent, tax-exempt corporation that would be wholly owned by the federal government. The Green Bank would support diverse technologies and projects through debt financing and credit enhancement, giving priority to those projects that would contribute most effectively to reducing greenhouse gas emissions and oil imports.

A third proposal would establish an autonomous, quasi-public corporation specifically to finance and execute large-scale energy demonstration projects. The corporation would have flexible hiring authority and follow commercial practices in its contracting, and would be governed by an independent board of directors nominated by the president and confirmed by the Senate. Along similar lines, the American Energy Innovation Council, a group of leading business executives, has proposed a public-private partnership to address these problems. Asserting that America’s energy innovation system “lacks a mechanism to turn large-scale ideas or prototypes into commercial-scale facilities,” the council recommended the formation of an independent, federally-chartered corporation, outside the federal government, that would be tasked with demonstrating new, large-scale energy technologies at commercial scale.

Though the details vary, all of these proposals have been designed to overcome the limitations of DOE management and to insulate projects from the political process to some degree. The new entities would be free of many of the most burdensome federal rules. They would also have more flexibility in management and would be independent of the annual congressional budget cycle.

However, none of these proposals has advanced much in recent years. The political stalemate in Washington is an obvious and probably sufficient explanation, but even in its absence, the requirement that each scheme receive a one-time Congressional appropriation of $10 billion or more to be launched would have been a difficult hurdle to overcome, especially during a period of severe fiscal constraint.

What the states are demonstrating

So the federal demonstration gap remains. Could it be filled by the states? Of course, state (and local) governments have long been active in areas of policy important to energy innovation, including economic regulation of utilities, building codes and standards, and environmental and zoning regulations. California and New York have several decades of experience with large-scale energy deployment programs. Many other states have gotten into the act more recently. Thirty states have adopted renewable portfolio standards, designed to ensure a specified market share from designated energy sources such as solar and wind energy. Many state and local jurisdictions have also introduced tax measures, loan programs, rebates, or other supports for investments in low-carbon energy supplies and energy efficiency. Concerns over climate change have usually been an important motivation for these policies. So too has the goal of new job creation.

We propose to expand the footprint of these state efforts through the creation of a network of Regional Innovation Demonstration Funds (RIDFs). These RIDFs, staffed by experienced technology and project investors, would fund first-of-a-kind large-scale demonstration projects and “next few” post-demonstration projects. The RIDFs would be partly funded by revenues from state public benefit charges. (These charges, also known as system benefit charges, were first applied to consumer electricity bills in many states during electric utility restructuring as a means of ensuring continued funding for energy efficiency and renewable energy deployment as well as low income assistance and weatherization programs.) Another potential source of funding would be state or regional carbon emission reduction programs like the Northeast’s Regional Greenhouse Gas Initiative (RGGI) or the California cap-and-trade program. The governors of the states participating in an RIDF would appoint the members of the fund’s governing board, with representation on the board determined by state contributions to the RIDF funding pool.

Today public benefit charges applied to retail electricity sales are already generating up to $4 billion annually, some of which might be shifted to the RIDFs. Adding a dedicated surcharge of, say, 1% on all U.S. retail electricity sales would generate almost $4 billion more in annual revenues for the RIDFs. Initially only a few states might be willing to redirect existing public benefit charges to RIDF innovation financing or to implement new surcharges for this purpose. As discussed in more detail below, federal matching grants would provide incentives for additional state funding and for the creation of new regional partnerships.
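
As a rough check on these figures, the surcharge arithmetic can be sketched in a few lines. This is a back-of-the-envelope estimate only: the assumed annual U.S. retail electricity sales of roughly 3.7 trillion kilowatt-hours is not a figure given in the article, and the 11-cents-per-kilowatt-hour average price is the reference value cited later in the discussion of implementation.

```python
# Back-of-the-envelope check of the surcharge revenue estimate above.
# Assumed (not stated in the article): annual U.S. retail electricity sales
# of roughly 3.7 trillion kWh. The $0.11/kWh average retail price is the
# reference figure cited later in the article.

retail_sales_kwh = 3.7e12      # assumed annual U.S. retail electricity sales (kWh)
avg_price_per_kwh = 0.11       # average retail price ($/kWh)

total_retail_revenue = retail_sales_kwh * avg_price_per_kwh   # roughly $400 billion
one_percent_surcharge = 0.01 * total_retail_revenue           # 1% of retail revenue

print(f"Total retail electricity revenue: about ${total_retail_revenue / 1e9:.0f} billion/year")
print(f"Revenue from a 1% surcharge:      about ${one_percent_surcharge / 1e9:.1f} billion/year")
# On the order of $4 billion per year, consistent with the estimate in the text.
```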

Proposers would seek RIDF funding not as the primary source of finance for their projects but rather as a means of lowering the costs and risks of their own investments. Project teams could include technology vendors, power generators, transmission and distribution utilities, and third-party energy service providers, and might also include national laboratories and universities. The RIDFs would evaluate project proposals partly against standard commercial and financial criteria, including the strength of the project team, the quality of project management, and the extent of self-funding by the proposers. Most important would be the potential of the proposed project to contribute to the reduction of carbon emissions. The most attractive projects would be those with the greatest potential to stimulate major future reductions in carbon emissions while also delivering affordable, secure, and reliable energy services.

Examples of such projects could include demonstrations of integrated carbon capture, transportation and storage systems at full-scale coal and gas-fired power plants and in different geologies; small modular light water or advanced nuclear reactors; grid-scale electricity storage integrated with utility-scale solar or wind systems; and next-generation offshore wind projects. Other eligible projects might include demonstrations of advanced grid infrastructure technologies; community-scale demonstrations of grid-integrated distributed electrical storage using electric vehicles; and test beds for next-generation distribution systems with advanced demand-management technologies, micro-grids, distributed generation, and dynamic and differentiated pricing schemes.

To be eligible to receive RIDF funding, a project would first have to be certified as contributing to the public interest, based on the potential of the technology to achieve significant reductions in carbon emissions. A federal “gatekeeper” organization, the Energy Innovation Board, would be created for this purpose. The Board would be an independent federal agency. Its role would be to make sure that RIDF investments were supporting the national purpose of reducing carbon emissions. All certified project proposals would have to have the potential to lead to significant reductions in carbon emissions at a declining unit cost over time. The Energy Innovation Board would not determine whether a specific proposal should receive funding, nor would it rank technologies or evaluate the organizational capabilities of the project teams. These tasks would be undertaken by the RIDFs themselves.

Let the regions decide

RIDFs would most likely be established first in parts of the country where there is already a strong commitment to innovation and interstate collaboration, and where there is existing state-level funding. Federal matching grants to the RIDFs, distributed by DOE or by a separate, dedicated agency, would create additional incentives for states to collaborate in funding these regional partnerships.

How a regional structure for energy technology demonstrations would work

Additional details on how the proposed new scheme for selecting, funding, and conducting energy technology demonstrations would work are summarized here:

Regional Innovation Demonstration Funds (RIDFs): Before a project team could seek RIDF funding, the Energy Innovation Board would first have to certify that there was a public interest in the success of the new technology, on the basis of its potential to achieve significant reductions in carbon emissions at a cost competitive with high-carbon incumbent energy systems. The RIDFs would select projects based on the quality of the project team, the strength of its management, and the potential of its technology to lead to major future reductions in carbon emissions while also delivering affordable, secure, and reliable energy services. Projects selected by the RIDFs would receive direct multi-year grants, with out-year funding tied to performance. Alternatively, RIDF funds could be used for customer rebates, subsidized loan programs, credit support for power purchase agreements (PPAs), or other arrangements designed to promote user engagement with the new technology. As a condition of making a grant, the RIDF would acquire a modest equity position in the project whose ultimate value would depend on the outcome of the project and the subsequent market potential of the project technology. Each RIDF would build a portfolio of project investments distributed across states both inside and outside the RIDF’s own region. Over time some specialization of the RIDFs could be expected to occur in areas of technology of particular interest to their regions—for example, offshore wind in the Northeast, or nuclear in the Southeast, or utility-scale solar PV in the Southwest, or carbon capture in the Midwest.

Energy Innovation Board: The members of the Board would include leading national experts in energy and environmental science and engineering, manufacturing, markets, and business management. The Board would also be able to hire consultants with special expertise to assist on specific matters. The Board’s role would not be to determine whether a specific project proposal should be funded, nor would it rank innovations or evaluate the organizational capabilities of the proposing teams. Those tasks and decisions would be undertaken by the RIDFs themselves. The Board’s role would rather be to pre-certify, decertify, or recertify projects based on its assessment of their potential to contribute to the public goal of creating cost-competitive, scalable technology options for reducing greenhouse gas emissions. Thus the Board would need to be able to evaluate the potential of scale economies and future learning opportunities. It would need to track other projects and programs targeting similar innovations to guard against duplication and overlap (although it would take into consideration the value of pursuing multiple technical approaches in parallel as circumstances warrant). And it would need to have a global perspective and be knowledgeable about developments overseas, so that RIDF investments would not simply duplicate work being done elsewhere. Certification would only be granted for a limited period—five years, say—and could be withdrawn if progress proved too slow. To encourage effective RIDF investing, the Board would also conduct annual reviews of RIDF portfolios, ranking most highly those combining strong representation of high-potential projects with prompt winnowing of failing projects. The highest-ranked RIDFs would be eligible to receive additional federal matching funds.

State Trustees: Demonstration funds collected by states would be allocated to the RIDFs by state trustees. To maintain the independence of RIDF investment decisions the state funds would be allocated at the portfolio level, rather than having the trustees fund individual projects. The trustees could be elected or appointed, and would include representatives of business, environmental, and labor groups, as well as technical experts and government officials. The allocation of funds by the trustees would be based on assessments of which of the RIDF project portfolios most closely matched the interests and needs of that state’s residents. In this decentralized scheme, the RIDFs would compete with one another to secure support for their portfolios from state trustee organizations. An RIDF with a portfolio deemed promising by multiple trustees would see its investment budget swell, while those with less promising portfolios would shrink.

Federal matching funds: Federal funds would be provided to the RIDFs according to a pre-determined formula that would match the allocations made by the state trustees. State funds that were independently invested in energy projects would not be eligible for the federal match. Thus the federal funds would incentivize the creation of new RIDFs as well as additional funding of the RIDFs by the states. Federal funds would also be used to encourage effective RIDF investing by rewarding RIDFs whose project portfolios were ranked highly by the Energy Innovation Board. Disbursement of the federal funds could be administered by the Department of Energy, in lieu of its own demonstration projects, or alternatively by a separate, dedicated agency. In the latter case, there would be no reason why the Department of Energy, and its national laboratories, could not join with private partners in demonstration project teams bidding for RIDF funds.
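
To make the flow of funds among these actors concrete, here is a toy sketch of the allocation mechanism described above. The state names, RIDF names, and dollar amounts are hypothetical; the 50-cent match rate is the illustrative figure used later in this article, and the performance-based bonus tied to Energy Innovation Board rankings is omitted for simplicity.

```python
# Toy model of the funding flow sketched above: state trustees allocate
# state-collected funds across RIDF portfolios, and federal funds match those
# allocations according to a pre-determined formula. All names and dollar
# amounts are hypothetical illustrations, not figures from the proposal.

# Hypothetical trustee allocations (in millions of dollars) from three states
# to two regional funds, reflecting each trustee's judgment of portfolio fit.
trustee_allocations = {
    "State A": {"Northeast RIDF": 120, "Midwest RIDF": 30},
    "State B": {"Northeast RIDF": 40, "Midwest RIDF": 90},
    "State C": {"Midwest RIDF": 60},
}

FEDERAL_MATCH_RATE = 0.5  # illustrative: 50 cents of federal funds per state dollar

# Aggregate the state allocations received by each RIDF.
ridf_state_funds: dict[str, float] = {}
for allocations in trustee_allocations.values():
    for ridf, amount in allocations.items():
        ridf_state_funds[ridf] = ridf_state_funds.get(ridf, 0.0) + amount

# Apply the federal match to arrive at each RIDF's investable budget.
for ridf, state_total in ridf_state_funds.items():
    federal = FEDERAL_MATCH_RATE * state_total
    print(f"{ridf}: ${state_total:.0f}M state + ${federal:.0f}M federal "
          f"= ${state_total + federal:.0f}M to invest in certified projects")
```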

Over time, a national network of RIDFs might emerge. Certified projects could be proposed to one or more RIDFs for funding, and the RIDFs could operate independently or co-invest with each other.

Initially, all of the funds collected in each state would most likely be directed to the RIDF operating in its region, and even in the longer run this might be the typical pattern. But as more RIDFs were established around the country, states could, in principle, allocate funds to other RIDFs. Fund allocation would be the responsibility of a trustee organization in each state.

Implementation: Today about 30 states have implemented power system public benefit charges. The charges range from less than five-thousandths of a cent per kilowatt hour in North Carolina to nearly half a cent per kilowatt hour in California. (For reference, the average retail price of electricity in the United States is roughly 11 cents per kilowatt hour.) Altogether these charges produce revenues of $3.5 billion to $4 billion per year, and the average increase in electricity costs in the affected states is 2.1%. Over time, encouraged by federal matching funds, additional state revenues would likely be raised and more states would participate in the RIDFs. State revenues from existing public benefit charges that were redirected to the RIDFs would not be eligible for the federal match. Some states might elect to apply funds from other sources, such as state or regional carbon cap-and-trade or taxation schemes. (If adopted, the Environmental Protection Agency’s proposed 111(d) rules for limiting carbon emissions from existing power plants are expected to encourage the introduction of more such schemes.)

A dedicated 0.2 cents per kilowatt hour electricity surcharge (adding about 2% to the average U.S. retail price) applied to, say, half of all U.S. retail electricity sales would generate roughly $3.7 billion per year, and might leverage up to twice that amount in private investment funds. A steady, predictable funding stream of more than $10 billion per year in public and private funding dedicated to financing demonstration and “next few” post-demonstration projects—enough to launch several new such projects each year—would be large enough to have a major impact on the nation’s energy innovation challenge and is far larger than currently available funds. (DOE’s entire energy-related budget for research, development, demonstration, and deployment is roughly $5 billion per year.) The magnitude of the needed federal funding is uncertain, but if, say, 50 cents of federal matching funds were required to induce each new dollar of state funding, the federal funding requirement might start at about $200 million per year and would eventually grow to about $1.8 billion per year for a RIDF network covering half the country and deploying a total of $13 billion per year in public and private funds. The net impact on the federal budget would be smaller, and might even yield net savings, as DOE would no longer need to allocate funds to costly demonstration projects.
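
The scenario arithmetic in the preceding paragraph can be reproduced the same way. Again, this is only a sketch: the assumed 3.7 trillion kilowatt-hours of annual U.S. retail sales is not specified in the text, while the 0.2-cent surcharge, coverage of half of all sales, up-to-2x private leverage, and 50-cent federal match are the article’s own illustrative parameters.

```python
# Rough reproduction of the illustrative funding scenario described above.
# Assumed (not stated in the article): annual U.S. retail electricity sales
# of roughly 3.7 trillion kWh. The remaining parameters are taken from the
# article's own scenario.

retail_sales_kwh = 3.7e12        # assumed annual U.S. retail sales (kWh)
surcharge_per_kwh = 0.002        # 0.2 cents per kWh, expressed in dollars
covered_fraction = 0.5           # surcharge applied to half of all retail sales
private_leverage = 2.0           # up to 2x private co-investment on state funds
federal_match_rate = 0.5         # 50 cents of federal funds per state dollar

state_funds = retail_sales_kwh * covered_fraction * surcharge_per_kwh
private_funds = private_leverage * state_funds
federal_funds = federal_match_rate * state_funds
total_deployed = state_funds + private_funds + federal_funds

print(f"State surcharge revenue: ${state_funds / 1e9:.1f} billion/year")
print(f"Private co-investment:   ${private_funds / 1e9:.1f} billion/year")
print(f"Federal matching funds:  ${federal_funds / 1e9:.2f} billion/year")
print(f"Total deployed:          ${total_deployed / 1e9:.1f} billion/year")
# Roughly $3.7B state + $7.4B private + $1.85B federal, or about $13B per year,
# matching the totals cited in the text.
```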

Competition, not politics

The regionally-based public financing scheme proposed here would have several attractive features. It would create a large, dedicated funding stream for a critical part of the U.S. energy innovation system—full-scale demonstration and early adoption projects—that has been chronically under-resourced until now. RIDF funding decisions would be less susceptible to political influence than federal agency budgets, and would avoid the stop-and-go pattern that is a common feature of the annual federal appropriations process. The RIDFs could be expected to provide the steady, predictable supplementary funding that private investors would need in order to make multiyear investment commitments of their own. By putting RIDF project selection decisions in the hands of experienced technology investment professionals, public funding would be responsive to market needs and the latest technological information, while the public interest would continue to be strongly represented by the Energy Innovation Board and the state trustee organizations.

The new scheme would also introduce multiple levels of competition into the innovation process. In the past, demonstration projects have been selected through a highly centralized and sometimes arbitrary process, in which individual congressional champions (or sometimes national laboratories) have often played very influential roles. In the proposed arrangement, project teams, once certified, would compete with each other for funds from one or more RIDFs to design, construct, and operate demonstration and post-demonstration projects, or to implement early adoption programs. (This more-decentralized scheme would also allow new entrants who may lack connections to the existing federal research and development structure to get a better hearing for their ideas than at present.) The RIDFs, in turn, would compete with one another to secure support for their portfolios from the state trustees and the federal government. An RIDF with a portfolio deemed promising by multiple state trustees would see its investment budget swell, while those with less promising portfolios would shrink. Also, as noted previously, the scheme would create opportunities for regional differences in needs and preferences to be expressed at the demonstration project selection stage, and would give states a direct stake in innovation outcomes. Of course, states where climate change and decarbonizing innovation are low priorities might choose not to participate at all.

The scheme also has a number of drawbacks. Probably the most serious is that it would entail the creation of several new organizations and would take time to set up. But while there is no time to lose in the effort to reduce greenhouse gas emissions, the energy innovation challenge is not one that can be solved overnight. The task is rather to build an innovation system capable of sustaining an accelerated flow of new low-carbon technologies over a period of decades. In this case, although the ultimate goal is to establish a national network of RIDFs, such a network could emerge gradually. Several states have already launched “green banks” or clean energy financing authorities, drawing on a range of funding sources including federal and state grants, bond issues, on-bill repayment mechanisms, and state ratepayer surcharges. Today these initiatives are mostly focused on financing the deployment of proven, commercially available technologies with low technology risk, but a new focus on technology demonstrations, designed to resolve a range of technology-related risks, could be added with modest effort.

Demonstrating diversity

A recurring problem with previous energy technology demonstration projects was not so much that they failed as that, at some point, the goal became to avoid failure. For the leaders of these high-profile projects and their supporters in and outside government, the costs of failure were too great, so failure had to be avoided at all costs. But some of the strategies for preventing failure themselves proved costly, including driving out other alternatives prematurely, refusing to recognize legitimate problems until long after they arose, and failing to acknowledge that key assumptions were no longer valid. And these projects also generated a constellation of opponents, whose goal became to cause their failure, and to prevent them from producing anything useful. In this environment, the most important goals of the innovation process—generating new information and learning quickly about the strengths and weaknesses of alternative approaches—were undermined.

For large-scale energy technologies, developed in government-led and government-financed projects, these kinds of problems are ever-present risks. Yet the rapid development and deployment of such technologies will be essential to the low-carbon energy transition. So a critical task is to devise an innovation system in which multiple pathways can be pursued and failure is tolerable. The goal must be to create an institutional structure that can accommodate and promote diversity, experimentation, and competition in the innovation process—even for large-scale technologies and even during the downstream stages of demonstration and early adoption. This structure, moreover, must be robust in the face of likely continuing political divisions over the appropriate response to climate change, and it must be sustainable in the face of strong pressures to reduce federal spending. We propose the formation of RIDFs, led by states, incentivized by the federal government, and monitored and supported in the public interest by a national Energy Innovation Board, as a practical step towards achieving these goals.

Richard K. Lester is head of the Department of Nuclear Science and Engineering and faculty chair of the Industrial Performance Center at the Massachusetts Institute of Technology. David M. Hart is professor of public policy and director of the Center for Science and Technology Policy at George Mason University.

Give Genetic Engineering Some Breathing Room

Government regulations are suffocating applications that promise much public benefit. Fixes are available, if society and policymakers would only pay heed to science.

New genetic engineering techniques that are more precise and versatile than ever offer promise for bringing improved crops, animals, and microorganisms to the public. But these technologies also raise critical questions about public policy. How will the various regulatory agencies approach them as a matter of law and regulation? Will they repeat the costly excesses of the oversight of recombinant DNA technology? What will be the regulatory costs, time, and energy required to capture the public benefits of the new technologies? And further out, how will regulatory agencies approach the emerging field of synthetic biology, which involves the design and construction of new biological components, devices, and systems, so that standardized biological parts can be mixed and assembled?

Based on current experience, answers to such questions are not comforting. The regulation of recombinant DNA technology has been less than a stunning success. Most of the federal agencies involved have ignored the consensus of the scientific community that the new molecular techniques for genetic modification are extensions, or refinements, of earlier, more primitive ones, and policymakers and agencies have crafted sui generis, or one-of-a-kind, regulatory mechanisms that have prevented the field from reaching anything approaching its potential.

The regulatory burden on the use of recombinant DNA technology is disproportionate to its risk, and the opportunity costs of regulatory delays and expenses are formidable. The public and private sectors have squandered billions of dollars on complying with superfluous, redundant regulatory requirements that have priced public sector and small company research and development (R&D) out of the marketplace.

These inflated development costs are the primary reason that more than 99% of the genetically engineered crops being cultivated are large-scale commodity crops—corn, cotton, canola, soy, alfalfa, and sugar beets. Hawaiian papaya is one of the few examples of genetically engineered “specialty crops” such as fruits, nuts, or vegetables. The once-promising sector of “biopharming,” which uses genetic engineering techniques to induce crops such as corn, tomatoes, and tobacco to produce high concentrations of high-value pharmaceuticals, is moribund. The once high hopes for genetically engineered “biorational” microbial pesticides and microorganisms to clean up toxic wastes are dead and gone. Not surprisingly, few companies or other funding groups are willing to invest in the development of badly needed genetically improved varieties of the subsistence crops grown in the developing world.

The seminal question about the basis for regulation of genetic engineering in the 1970s was whether there were unique risks associated with the use of recombinant DNA techniques. Numerous national and international scientific organizations have repeatedly addressed this question, and their conclusions have been congruent: There are no unique risks from the use of molecular techniques of genetic engineering.

As long ago as 1982, an analysis performed by the World Health Organization’s Regional Office for Europe reminded regulators that “genetic modification is not new” and that “risks can be assessed and managed with current risk assessment strategies and control methods.” Similarly, the U.S. National Academy of Sciences issued a white paper in 1987 that found no evidence of the existence of unique hazards, either in the use of genetic engineering techniques or in the movement of genes between unrelated organisms.

In perhaps the most comprehensive and unequivocal analysis, the 1989 National Research Council report, “Field Testing of Genetically Modified Organisms,” on the risks of genetically engineered plants and microorganisms, concluded that “the same physical and biological laws govern the response of organisms modified by modern molecular and cellular methods and those produced by classical methods.” But this analysis went further, emphasizing that the more modern molecular techniques “are more precise, circumscribed, and predictable than other methods. They make it possible to introduce pieces of DNA, consisting of either single or multiple genes that can be defined in function and even in nucleotide sequence. With classical techniques of gene transfer, a variable number of genes can be transferred, the number depending on the mechanism of transfer; but predicting the precise number or the traits that have been transferred is difficult, and we cannot always predict the phenotype that will result. With organisms modified by molecular methods, we are in a better, if not perfect, position to predict the phenotypic expression.”

In 2000, the National Research Council released another report weighing in on the scientific basis of federal regulation of genetically engineered plants. It concurred with earlier assessments by other groups that “the properties of a genetically modified organism should be the focus of risk assessments, not the process by which it was produced.”

Various distinguished panels have continued to make the same points about genetic engineering and “genetically modified organisms” (GMOs). In September 2013, the United Kingdom’s Advisory Committee on Releases to the Environment published “Report 2: Why a modern understanding of genomes demonstrates the need for a new regulatory system for GMOs.” The report addressed the European Union’s (EU) regulatory system as applied to new techniques of molecular breeding. This excerpt from the Executive Summary is especially salient: “Our understanding of genomes does not support a process-based approach to regulation. The continuing adoption of this approach has led to, and will increasingly lead to, problems. This includes problems of consistency, i.e. regulating organisms produced by some techniques and not others irrespective of their capacity to cause environmental harm. Our conclusion, that the EU’s regulatory approach is not fit for purpose for organisms generated by new technologies, also applies to transgenic organisms produced by ‘traditional’ GM [genetic modification] technology. . . [T]he potential for inconsistency is inherent because they may be phenotypically identical to organisms that are not regulated.”

There is, then, a broad consensus that process-based regulatory approaches are not “fit for purpose.” Inevitably, they are unscientific and anti-innovative; they fail to take actual risks into consideration and contravene the basic principle that similar things should be regulated similarly. It follows that the U.S. and EU systems must be reformed to become scientifically defensible and risk-based.

In theory, the U.S. government accepted the fundamental logic of these analyses as the basis for regulation. In 1986, the White House Office of Science and Technology Policy published a policy statement on the regulation of biotechnology that focused oversight and regulatory triggers on the risk-related characteristics of products, such as plants’ weediness or toxicity. That approach specifically and unequivocally rejected regulation based on the particular process, or technique, used for genetic modification. In 1992, the federal government issued a second pivotal policy statement (sometimes known as the “scope document”) that reaffirmed the overarching principle for biotechnology regulation—that is, the degree and intrusiveness of oversight “should be based on the risk posed by the introduction and should not turn on the fact that an organism has been modified by a particular process or technique.”

Thus, there has been a broad consensus in the scientific community, reflected in statements of federal government policy going back more than 20 years, that the newest techniques of genetic modification are essentially an extension, or refinement, of older, less precise and less predictable ones, and that oversight should focus on the characteristics of products, not on the processes or technologies that produced them.

In spite of such guidance, however, regulatory agencies have generally chosen to exercise their discretion to identify and capture molecular genetic engineering—specifically, recombinant DNA technology—as the focus of regulations. Because the impacts of their decisions have drastically affected the progress of agricultural R&D, this cautionary tale is worth describing agency by agency.

A cautionary tale, repeated

The Department of Agriculture (USDA), through its Animal and Plant Health Inspection Service (APHIS), is responsible for the regulation of genetically engineered plants. APHIS had long regulated the importation and interstate movement of organisms (plants, bacteria, fungi, viruses, etc.) that are plant pests, which were defined by means of an inclusive list—essentially a binary “thumbs up or down” approach. A plant that an investigator might wish to introduce into the field is either on the prohibited list of plant pests, and therefore requires a permit, or it is exempt.

This straightforward approach is risk-based, in that the organisms required to undergo case-by-case governmental review are an enhanced-risk group (organisms that can injure or damage plants), unlike organisms not considered to be plant pests. But for more than a quarter-century, APHIS has applied a parallel regime (in addition to its basic risk-based regulation) that focuses exclusively on plants altered or produced with the most precise genetic engineering techniques. APHIS reworked the original concept of a plant pest (something known to be harmful) and crafted a new category—a “regulated article”—defined in a way that captures virtually every recombinant DNA-modified plant for case-by-case review, regardless of its potential risk, because it might be a plant pest.

In order to perform a field trial with a regulated article, a researcher must apply to APHIS and submit extensive paperwork before, during, and after the field trial. After conducting field trials for a number of years at many sites, the researcher must then submit a vast amount of data to APHIS and request “deregulation,” which is equivalent to approval for unconditional release and sale. These requirements make genetically engineered plants extraordinarily expensive to develop and test. The cost of discovery, development, and regulatory authorization of a new trait introduced between 2008 and 2012 averaged $136 million, according to Wendelyn Jones of DuPont Pioneer, a major corporation involved in crop genetics.

APHIS’s approach to recombinant DNA-modified plants is difficult to justify. Plants have long been selected by nature, as well as bred or otherwise manipulated by humans, for enhanced resistance or tolerance to external threats to their survival and productivity, such as insects, disease organisms, weeds, herbicides, and environmental stresses. Plants have also been modified for qualities attractive to consumers, such as seedless watermelons and grapes and the tangerine-grapefruit hybrid called a tangelo.

Along the way, plant breeders have learned from experience about the need for risk analysis, assessment, and management. New varieties of plants (whichever techniques are used to craft them) that normally harbor relatively high levels of various toxins are analyzed carefully to make sure that levels of those substances remain in the safe range. Celery, squash, and potatoes are among the crops in need of such attention.

The basic tenets of government regulation are that similar things should be regulated similarly, and the degree of oversight should be proportionate to the risk of the product or activity. For new varieties of plants, risk is a function of certain characteristics of the parental plant (such as weediness, toxicity, or ability to “outcross” with other plants) and of the introduced gene or genes. In other words, it is not the source or the method used to introduce a gene but its function that determines how it contributes to risk. Under USDA and APHIS, however, only plants made with the newest, most precise techniques have been subjected to more extensive and burdensome regulation, independent of the risk of the product.

Under its discriminatory and unscientific regulatory regime, APHIS has approved more than 90 genetically engineered traits, and farmers have widely and quickly adopted the crops incorporating them. After the cultivation worldwide of more than 3 billion acres of genetically engineered crops (by more than 17 million farmers in 30 countries) and the consumption of more than 3 trillion servings of food containing genetically engineered ingredients in North America alone, there has not been a single documented ecosystem disruption or a single confirmed tummy ache.

With this record of successful adoption and use, one might have thought that APHIS would reduce its regulatory burdens on genetically engineered crops, but there has been no hint of such a move. APHIS continues to push the costs for regulatory compliance into the stratosphere while its reviews of benign new crops become ever more dilatory: Evaluations that took an average of six months in the 1990s now take three-plus years. APHIS’s performance compares unfavorably with its counterparts abroad. Based on data gathered by the U.S. government and confirmed by industry groups, from January 2010 through June 2013, the average time from submission to decision was 372 days for Brazil and 771 days for Canada, versus 1,210 days for the United States.

APHIS has not shown any willingness to rationalize its regulatory approach—for example, by creating categorical exemptions for what are now known scientifically, and proven agronomically, to be negligible-risk genetically engineered crops. By creating such categorical exemptions, APHIS would simultaneously reduce its workload, lower R&D costs, spur innovation, and avoid the pitfalls of the requirements of the National Environmental Policy Act (NEPA). NEPA requires that agencies performing “major federal actions,” such as APHIS’s approvals, proceed through a succession of procedural hoops. Allegations from activists that regulators have failed to do so have tied up approvals in the federal courts, creating a litigation burden for regulators, scientists, and technology developers. (Regardless of their risk, the vast majority of plants “engineered” through more conventional genetic manipulation, such as crop breeding, do not require APHIS approval and, consequently, are not subject to NEPA or to the derivative lawsuits.)

The regulatory obstacles that discriminate against genetic engineering impede the development of crops with both commercial and humanitarian potential. Genetically engineered crops foreseen in the early days of the technology have literally withered on the vine as regulatory costs have made testing and commercial development economically unfeasible. In a 2010 letter to Nature Biotechnology, Jaime Miller and Kent Bradford of the University of California, Davis, described the impact of regulations on genetically engineered specialty crops (fruits, vegetables, nuts, turf, and ornamentals). They provided citations to 313 publications relating to 46 species and numerous traits beneficial to consumers, farmers, and the environment. However, they pointed out that only four of these crops had entered commercial cultivation in the United States, and none of them had reached the public outside of the United States (though the status of two in China was unclear). Of greater concern, they found that no genetically engineered specialty crop had been granted regulatory marketing approval anywhere since the year 2000. In supplementary data cited in their letter, Miller and Bradford provided information on 724 genetically engineered specialty plant lines that have been created but never commercialized.

Since the advent of recombinant DNA techniques in the 1970s, other newer, even more precise technologies for genetic engineering have been introduced to create organisms with new or enhanced traits. These approaches include, among others, RNA interference technology (RNAi) and the alteration of genes using so-called transcription activator-like effector nucleases (TALENs). Initially, APHIS had issued letters indicating that many crops developed through these newer techniques fall outside of the definition of a “regulated article” under the Plant Protection Act. But under pressure from anti-biotechnology groups, APHIS has also floated the idea that these crops could be captured for oversight as “noxious weeds” if they are invasive (e.g., turf grass), or cross-pollinate readily (alfalfa). Although the impact of invoking “noxious weed” regulatory authority is not yet clear, designating plants crafted with modern molecular techniques as falling in this category appears to be another example of unscientific, opportunistic regulation that will inhibit innovation.

Tortured statutes

The Environmental Protection Agency (EPA), like the USDA, has tortured its enabling statutes to undesirable effect. The EPA has long regulated field tests and the commercial use of pesticides under the Federal Insecticide, Fungicide and Rodenticide Act (FIFRA). In 2001, the agency issued final rules for the regulation of genetically engineered plants and created a new concept, “plant-incorporated protectants” (PIPs), defined as “pesticidal substances produced and used by living plants.” EPA regulation captures pest-resistant plants only if the “protectant” has been introduced or enhanced by the most precise and predictable techniques of genetic engineering.

The testing required for registration of these new “pesticides” is excessive. It includes gathering copious data on the parental plant, the genetic construction, and the behavior of the test plant and its interaction with various species, among other factors. (These requirements could not be met for any plant with enhanced pest-resistance modified with older, cruder techniques, which are exempt from the FIFRA rules.) It should be noted that FIFRA provides a 10-acre research exemption for pesticides, even for extremely toxic chemicals, which does not apply to PIPs.

The EPA then conducts repeated, redundant case-by-case reviews: before the initial trial, when trials are scaled up or tested on additional sites, and again if even minor changes have been made in the plant’s genetic construct. The agency repeats those reviews at commercial scale. The agency’s classification of living plants as pesticides, even though the regulatory term is “plant-incorporated protectants,” has been vigorously condemned by the scientific community. And for good reason, since EPA’s approach has discouraged the development of new pest-resistant crops, encouraged greater use of synthetic chemical pesticides, and limited the use of the newest genetic engineering technology mainly to larger, private-sector developers that can absorb the substantial regulatory costs.

The vast majority of the acreage of plants made with recombinant DNA technology has been limited to huge-scale commodity crops. Even so, and in spite of discriminatory, burdensome regulation, their success has been impressive. Worldwide, these new varieties have provided "very significant net economic benefits at the farm level amounting to $18.8 billion in 2012 and $116.6 billion for the 17-year period" from 1996 to 2012, according to a report by PG Economics, Ltd., titled "GM Crops: Global Socio-economic and Environmental Impacts 1996-2012," released in May 2014.

Under the Toxic Substances Control Act (TSCA), the EPA regulates chemicals other than pesticides. Characteristically, in devising an approach to genetically engineered organisms, EPA chose to exercise its statutory discretion in a way that ignores scientific consensus but expands its regulatory scope. The agency focused on capturing for review any "new" organism, defined as one that contains combinations of DNA from sources that are not closely related phylogenetically. For the EPA, "newness" is synonymous with risk. As genetic engineering techniques can easily create new gene combinations with DNA from disparate sources, EPA concluded that these techniques therefore "have the greatest potential to pose risks to people or the environment," according to the agency press release that accompanied the rule. Using TSCA, EPA decided that genetically modified microorganisms are "new chemicals" subject to pre-market approval for testing and commercial release.

But the EPA’s statement is a non sequitur. The particular genetic technique employed to construct new strains is irrelevant to risk, as is the origin of a snippet of DNA that may be moved from one organism to another. What matters is its function. Scientific principles and common sense dictate the questions that are central to risk analysis for any new organism. How hazardous is the original organism from which DNA was taken? Is it a harmless, ubiquitous organism found in garden soil, or one that causes illness in humans or animals? Does the added genetic material code for a potent toxin? Does the genetic change merely make the organism able to degrade oil more efficiently, or does it have other effects, such as making it more resistant to being killed by antibiotics or sunlight?

Like APHIS, the EPA ignored the scientific consensus holding that modern genetic engineering technology is essentially an extension, or refinement, of earlier, cruder techniques of genetic modification. In fact, the National Research Council’s 1989 report observed that, on average, the use of the newest genetic engineering techniques actually lowers the already minimal risk associated with field testing. The reason is that the new technology makes it possible to introduce pieces of DNA that contain one or a few well-characterized genes, while older genetic techniques transfer or modify a variable number of genes haphazardly. All of this means that users of the new techniques can be more certain about the traits they introduce into the organisms. The newer genetic engineering techniques allow even greater certainty about the traits being introduced and the precise location of those introduced traits in the genome of the recipient.

The bottom line is that organisms crafted with the newest, most sophisticated and precise genetic techniques are subject to discriminatory, excessive, burdensome, and costly regulation. Research proposals for field trials must be reviewed case by case, and companies face uncertainty about final commercial approvals of products down the road even if the products prove to be safe and effective.

The newest molecular breeding techniques have created anxiety at EPA, where there are internal pressures to declare that all forms of molecular modification create “new chemicals,” which would expand the agency’s regulatory reach still further under TSCA. If EPA were to adopt this “new chemicals” approach, there is legitimate concern that products from these new techniques could face the same fate as recombinant DNA-modified microorganisms: EPA has approved only one such microorganism since it declared them to be new chemicals in 1997.

Concurrently, EPA is considering an expansion of its FIFRA power, perhaps through the concept of "plant regulators," to capture crops and products made with the newest molecular modification techniques. According to an EPA document published in May 2014, the agency received advice favoring the treatment of many uses of RNA interference technology as a pesticide, despite testimony from Craig Mello—co-discoverer of RNA interference, for which he shared the 2006 Nobel Prize in Physiology or Medicine—that the use of RNAi technology per se is inherently of very low risk and should elicit no incremental regulatory oversight. Similarly, James Carrington, president of the Donald Danforth Plant Science Center, testified to the "intrinsic non-hazardous properties of diverse RNA types," stating that "there is no validated scientific evidence that [RNAi] causes or is even associated with ill effects. . . in humans, mammals, or any animals other than certain arthropods, nematodes, and certain microbes that consume or invade plants."

Science suggests rational alternatives

There are far more rational—and proven—alternatives to the current unscientific regulation of genetic engineering. Indeed, science shows the way. For more than two decades, the Food and Drug Administration (FDA) has had a scientific, risk-based approach toward "novel foods" made with any technology. Published in 1992, the statement of policy emphasized that the agency's Center for Food Safety and Applied Nutrition does not impose discriminatory regulation based on the use of one technique or another. The FDA concluded that greater scrutiny is needed only when certain safety issues arise. Those safety issues include the presence of a completely new substance in the food supply, changes in a macronutrient, an increase in a natural toxicant, or the presence of an allergen where a consumer would not expect it. In addition, FDA has properly resisted calls for mandatory labeling of genetically engineered foods as not materially relevant information under the federal Food, Drug and Cosmetic Act, and as not consistent with the statutory requirement that food labeling must be accurate and not misleading. (As discussed above, another scientific and risk-based approach to regulation is the USDA's long-standing treatment of potential plant pests.)

However, FDA has been less successful with its oversight of genetically engineered animals. In 1993, developers of a faster-maturing genetically engineered salmon—an Atlantic salmon containing a particular Pacific Chinook salmon growth hormone gene—first approached FDA. After 15 years of indecision, in 2008 the FDA’s Center for Veterinary Medicine decided that every genetically engineered animal intended for food would be evaluated as a veterinary drug and subjected to the same premarket approval procedures and regulations as drugs (such as pain relievers and anti-flea medicines) used to treat animals. The rationale offered was that a genetically engineered construct “that is in a [genetically engineered] animal and is intended to affect the animal’s structure or function meets the definition of an animal drug.” But this explanation conveniently ignores the science, the FDA’s own precedents, and the availability of other, more appropriate regulatory options.

Adoption of the FDA’s existing approach to foods (which is far less protracted and intensive than that for veterinary drugs) would have sufficed and should have been applied to genetically engineered animals intended for consumption. Instead, FDA interpreted its authority in a way that invokes a highly risk-averse, burdensome, and costly approach. The impact has been devastating: The FDA has not approved a single genetically engineered animal for food consumption. An entire, once-promising sector of genetic engineering has virtually disappeared.

Genetically engineered animals were first developed 30 years ago in land-grant university laboratories. Those animal science innovators have grown old without gaining a single approval for their work. Many academic researchers who have introduced promising traits into animals have moved their research to other nations, particularly Brazil. Many younger animal scientists have simply abandoned the field of genetically engineered animals. As for the faster-growing salmon, the FDA (and also, recently, the Obama White House) has kept it in regulatory limbo while imposing costs of more than $75 million on its developers. And there appears to be no regulatory resolution in sight for this safe, nutritious, environmentally beneficial alternative to the depletion of dwindling wild stocks of ocean fish.

The newer genetic engineering techniques that have emerged since the days of the recombinant DNA technology that yielded the faster-growing salmon seem unlikely to fare any better at FDA. For example, a University of Minnesota animal scientist has used the TALENs technique to edit a gene in the Holstein dairy cattle breed so that its DNA sequence is identical to that of the hornless (polled) trait found in the Angus beef cattle breed, yielding Holsteins that exhibit the polled trait. This genetic modification provides greater animal welfare for dairy cattle (i.e., avoidance of dehorning) and greater safety for dairy farmers (i.e., avoidance of being gored). But FDA has refused to consider the genetically engineered Holsteins under the same approach it uses for genetically engineered foods. Rather, FDA has asserted that the genetically engineered Holstein cattle contain a "new animal drug" and that, therefore, the animals cannot be released or marketed until a new animal drug approval is granted.

The federal Fish and Wildlife Service (FWS) offers another example of anti-genetic engineering policies. Beginning in 2006, a nongovernmental health and environmental advocacy organization called the Center for Food Safety initiated a litigation campaign to force FWS to ban genetically engineered organisms from national wildlife refuges. The center argued that permitting the cultivation of genetically engineered crops constituted a “major federal action” that required environmental studies under the National Environmental Policy Act and compatibility studies under the National Wildlife Refuge Systems Act and the National Wildlife Refuge Improvement Act. FWS barely contested these allegations, and its own biologist testified inaccurately that genetically engineered agricultural crops posed significant environmental risks of biological contamination, weed resistance, and damage to soils. Not surprisingly, the courts ruled in the plaintiff’s favor.

Given FWS’s obvious lack of familiarity with genetic engineering and its officials’ apparent unwillingness to do the necessary homework, it is understandable that FWS did not respond appropriately to these court rulings. Instead of using its statutory authority to create categorical exemptions, which would have allowed modern farming practices on refuge lands, FWS banned genetically engineered crops for two years and convened a Leadership Team to determine whether such plants were “essential to accomplishing refuge purpose(s).” On July 17, 2014, FWS answered in the negative. Consequently, beginning January 1, 2016, FWS will ban genetically engineered plants from its refuges. Thus, not only did FWS reject science, but it ignored the enhanced resilience and environmental benefits that genetic engineering can foster.

Epilogue

Is there any reason for optimism about the future? Will reasonableness emerge suddenly in agencies’ oversight of recombinant DNA technology? How will the various regulatory agencies approach the newest refinements of genetic engineering? How will they respond to synthetic biology?

The opportunity costs of unnecessary regulatory delays and inflated development expenses are formidable. As David Zilberman, an agricultural economist at the University of California, Berkeley, and his colleagues have observed, “The foregone benefits from these otherwise feasible production technologies are irreversible, both in the sense that past harvests have been lower than they would have been if the technology had been introduced and in the sense that yield growth is a cumulative process of which the onset has been delayed.”

The nation has already foregone significant benefits because of the over-regulation and discriminatory treatment of recombinant DNA technology. If we are to avoid repeating those mistakes for newer genetic modification technologies and synthetic biology, we must have more scientifically defensible and risk-based approaches to oversight. We need and deserve better from governmental regulatory agencies and from their congressional overseers.

Henry I. Miller ([email protected]), a physician, is the Robert Wesson Fellow in Scientific Philosophy and Public Policy at Stanford University’s Hoover Institution. He was the founding director of the Office of Biotechnology at the FDA. Drew L. Kershen is the Earl Sneed Centennial Professor of Law (Emeritus), University of Oklahoma College of Law, in Norman, OK.

An Academic House of Cards

In his concise and charming book Sustainable Knowledge, University of North Texas philosopher Robert Frodeman challenges us to rethink what we are doing in academia. His central argument is that our academic knowledge-production activities, both in science and in the humanities, are currently unsustainable. Frodeman argues that the disciplinary structure of the academy, which encourages asking questions ad infinitum about ever more narrow topics, has generated a disengagement between the academy and the society that supports it. The academic system demands escalating resources, as the depths of disciplines are plumbed ever further, even as academics find it increasingly difficult to provide a clear rationale for their work. In a world with limited resources, the system is unsustainable. Frodeman’s challenge is to provide an alternative.

He begins with an account of disciplinary knowledge in general, of how the idea of contemporary academic disciplines formed historically and how they developed into today’s fairly dysfunctional academic system. Why dysfunctional? We live in an academic milieu of ever-increasing specialization, of ever-increasing article (and book) production, of seemingly ever-increasing distance between the knowledge needs of the society in which we live and the knowledge academics produce. It is also an academy under intensifying bureaucratic pressure, where faculty are measured against increasingly rigid performance criteria in a rather desperate attempt to show the worth of academic knowledge production, against the backdrop of tighter resources and acknowledged overproduction of graduate students (particularly in the humanities). Nobody is happy about this.

The cure for what ails academia is, for many, to be found in interdisciplinarity. Frodeman, editor of the Oxford Handbook of Interdisciplinarity and former director of the Center for the Study of Interdisciplinarity (it has since been predictably defunded), is right at home in this territory. He deftly guides us through the movement, including a discussion of the irony of developing a specialized discipline that studies interdisciplinarity. Frodeman recommends against such a move, but acknowledges that the pressure to create silos of expertise—where one can be neatly evaluated by one’s peers—is difficult to resist. Frodeman argues that we need to push back against the demands for ever-increasing rigor and specialization, and instead seek balance in our knowledge production.

That our knowledge production system is out of balance is hard to dispute. We churn out more and more knowledge disconnected from human problems. The percentage of papers that are rarely cited and little read grows. We produce more students than we can possibly place in jobs that require the expertise we impart. The rubric of infinity, of always having another issue that “needs further research,” the mantra of knowledge for knowledge’s sake, is drowning us.

Sustainable Knowledge Cover Image

Does Frodeman have a lifeline to throw us? Generally, yes. He argues for knowledge-production systems that aim to be sustainable, that are willing to make the hard choices dictated by limited resources of time, attention, and money. If that is what we need to do for the environment, maybe that is what we need to do to change academia, even with the difficulty of defining precisely what is sustainable. In short, Frodeman is recommending a new way to structure our efforts. He would not dismantle the disciplines, but he would make us more aware of the costs of disciplines, the value of working across disciplines, and the need to engage a broader agenda.

How to do this across all of academia is more than Frodeman can tackle. Appropriately, he sets his sights on his home discipline of philosophy. Here, his vision takes on some teeth. Why do most philosophers write just for other philosophers? Why is the discipline so insular? Frodeman acknowledges that some of his colleagues have tried to break out of the ivory tower, particularly in areas of applied ethics, such as bioethics and environmental ethics. Yet these areas have earned little respect among traditional philosophers. They do not fare well under Frodeman’s critical gaze either. He observes that environmental ethics has failed to gain traction in environmental policymaking, which is dominated by economics. Frodeman suggests it has become too “insular and disciplined” to reach beyond its confines. Bioethics has gained wider traction, but in Frodeman’s view, lost its philosophical soul in the process. The principles of beneficence, autonomy, and justice have become almost dogmatic touchstones that provide disciplinary rigor to bolster bioethicists’ expertise, rather than generating critical insight. Meanwhile, most of the discipline of philosophy just talks to itself.

For Frodeman, the situation is tragi-comic, as the heart of philosophy lies in the possibility of disruptive reflection. Philosophy, at its core, concerns challenging much of the status quo, forcing us to see our accepted practices from a new angle. Philosophers cannot do this when talking primarily to each other, pursuing questions of interest only to other philosophers.

What to do? Frodeman is less dogmatic than pluralist here. In keeping with the theme of balance, he does not want to end disciplinary philosophy, but instead to open it up. He thinks philosophers should try their hand in the field, to get out there and see what happens when philosophical acumen meets the real world. It is an intriguing vision of what philosophy could be, and a challenge that other philosophers are starting to take up. The Public Philosophy Network, the Socially Relevant Philosophy of/in Science & Engineering Consortium, the Stone, the Joint Caucus of Socially Engaged Philosophers and Historians of Science, and the American Philosophical Association Public Philosophy Op-Ed Contest all provide testament that some philosophers are already concerned about disciplinary isolation.

Frodeman challenges philosophers to think not just about what makes philosophy good philosophy, but what philosophy could or should be. Philosophers are very happy to think and write about where the discipline should go internally but are less interested in what the field’s relationship should be to the rest of human endeavors, or what responsibilities academia has to the rest of humanity that supports its efforts. It would be downright unphilosophical to ignore the questions he raises.

How might Frodeman’s concerns and attempts at reform play out in the natural and social sciences? Consider that, in the age of the Internet, the information that makes up disciplinary knowledge is widely accessible. In this age of accessibility, what is expertise for? Rather than see it as a repository of knowledge that will grow as disciplines deepen, we could see expertise as essentially synthetic: the role of the expert is to say what all the various studies, taken together, mean for a given question. Under such a view, expertise is no longer a static authority but a dynamic one that demonstrates its usefulness in a process of engaged querying. This is the kind of expertise that could not be replaced by the Internet, that demands long-term cultivation, and that is worth keeping universities around for.

Ventures in sustainable knowledge would thus continue to cultivate expertise, but it would be expertise that moves beyond disciplinary boundaries and the walls of academia. How to do this in practice remains to be seen. Clearly a balance must be struck between the training and development of scholars who have defined expertise and the kind of flexibility that would allow us to pursue what is societally important, given limited resources. Disciplinary expertise is not without value; it is just not the source of limitless value some academics would claim. But such a balance may not be as difficult to find as it first appears, as practically engaged work can provide crucial disciplinary insight as well. As with environmental issues, we should be looking for the win-win solutions.

One might ask what Frodeman thinks he is doing, adding another piece of academic work to the already overwhelming pile. He is well aware of the challenge of trying to make a real contribution to our understanding of the nature of all disciplines. Given the widely scattered literature on disciplinarity, how can we judge the quality of Frodeman’s work? Can he successfully communicate substance to diverse audiences? This is the main triumph of the work: at the same time that it is compact and accessible to any undergraduate student, it is deeply challenging to our conception of what we are doing as academics. We should thank Frodeman for asking these questions in such a pointed way.

Is China a Clean-Energy Model?

Global climate negotiations have long been stymied by disagreement between rich and poor countries over who should take responsibility for mitigating greenhouse gas emissions. Less-developed nations tell developed countries, “You created this problem, you do something about it.” They mean that wealthy nations grew rich from centuries of fossil fuel use and carbon dioxide emissions and should therefore help poor nations pay for initiatives designed to reduce emissions. Developing countries argue that their priority is the urgent need to provide for basic human needs and lift billions of citizens out of grinding poverty, and that developed nations should be willing to give them access to low-emissions energy technology and to finance its deployment. Developed nations counter that the developing countries have very inefficient economies and energy systems and would reap broad benefits from investments in high-efficiency, low-emissions energy technology. Climate negotiators on occasion have agreed to provide large sums to support such an effort, but few are surprised when the funds never materialize. The lack of progress following the contentious 2009 Copenhagen Summit is typical of the climate negotiation process.

In her new book, The Globalization of Clean Energy Technology: Lessons from China, Kelly Sims Gallagher tries to move us beyond this divisive debate by examining the extraordinary example of China’s surge to global clean-energy leadership. Gallagher is well-positioned to consider lessons from China. She has spent over a decade in serious study of Chinese energy development and has generated a solid body of scientific publications on the subject. She has spent time in China and has toiled in the technology transfer proceedings of the Intergovernmental Panel on Climate Change (IPCC).

The Globalization of Clean Energy Cover Image

Gallagher’s book argues that China’s experience in clean-energy technology development and deployment indicates that concerns about barriers to technology diffusion are exaggerated. The notion that China is an example of clean-energy success will surprise some observers. Many readers are familiar with images of impenetrable smog over China’s big cities, which rank among the world’s most polluted, and with the fact that China has surpassed even the United States in annual emissions of carbon dioxide. Yet independent researchers have documented China’s remarkable achievements in energy conservation. Gallagher also applauds that success and, echoing many renewable energy advocates, suggests that China is a model for developing countries to emulate in acquiring the technology needed in the global fight against climate change. China’s recent bilateral agreement with the United States to meet emissions-reduction goals would seem to validate the notion that China is confident in its ability to respond to climate change without aid from the developed world.

Gallagher attempts to make that case by presenting four original case studies of Chinese clean-energy technology transfer and deployment. Through interviews with foreign and domestic players, she explores the incentives for and barriers to the use and production of solar photovoltaics, batteries for electric vehicles, gas-fired turbines, and coal-gasification systems.

She characterizes solar photovoltaic cells as a successful case of tech transfer, gas turbines as unsuccessful, and batteries for electric vehicles and technologies for coal gasification as mixed. She often draws comparisons with relevant case studies by other analysts, in particular a well-regarded book on wind-turbine power systems by Joanna Lewis of Georgetown University. She explores the extent to which intellectual property laws and lack of access to capital frustrate the use of technology in her four cases and in the reports of other experts. She makes an unequivocal and surprising claim that there are no barriers to finance for Chinese firms. She also finds that foreign and domestic firms have more confidence than one might expect that they can protect their intellectual property. Gallagher suggests that China has demonstrated how developing nations can overcome perceived barriers to technology transfer and company development. The key, she argues, is creating market demand and the policy environment to support it.

Is it that sunny?

It would be challenging to argue that she is wrong. China has reduced its energy intensity at an unprecedented rate for more than two decades. The nation leads the world in wind-turbine construction and solar photovoltaic panel manufacture. China’s success in deploying solar water heating, which Gallagher does not emphasize, is unmatched.

Yet, the full picture of China’s energy system is not so rosy. China recently committed to a major expansion of coal use for power generation and gasification in its arid west. China’s wind and solar power contribute very small shares of its large and growing energy economy. The wind system, as others have demonstrated, is underutilized and inefficient. Some analysts believe the wind-power experience is to be expected in a state-owned and largely unregulated power-transmission monopoly that does not welcome power from independent producers tapping intermittent sources. Solar power installations similarly face a hostile environment in which the state-owned grid companies have neither the incentive nor the inclination to integrate their intermittent output. This is not a model to emulate.

The reality is that recent and foreseeable growth of renewable energy in China has been dominated by large-scale hydroelectric power, raising the question of what exactly is meant by clean, renewable energy. To her credit, Gallagher makes clear in a footnote that she does not include large-scale hydro in her own definition of clean energy, and the book includes numerous allusions to negative effects of renewable energy systems. But the overall message of the book is that China’s renewable energy program is a success, even though her understanding of success differs from the perspective of the Chinese government. She more than once quotes former Prime Minister Wen Jiabao boasting that China has achieved great success in this field when in fact the first source he mentions is hydroelectric power. As it stands, Chinese plans call for something like seven times the equivalent of the Three Gorges Dam to be built by 2025. This fact detracts from the case for China as a model for clean energy development.

The unsuccessful gas turbine case supports Gallagher’s assertion that market development is key. She accurately reports that in distributing its limited natural gas supply, China gives a higher priority to its use as a chemical feedstock than as a source of electric power. Although more could have been said about China’s slow acceptance of gas imports and, more problematically, about the development of gas resources independent of the state-owned oil and gas companies, these omissions do not undermine her point. That the chiefs of those companies sit on the Chinese Central Committee suggests that the case studies might have placed “governance” front and center in the “barriers” category. Gallagher does arrive at roughly the same conclusion by stating that it is the failure to create markets that causes the failure of technology transfer and deployment.

The question of whether rich nations should subsidize developing country investments in clean-energy development will probably not be put to rest by China’s success at finding ways to finance hydropower and wind. Chinese climate negotiators have in fact privately acknowledged for several years that they neither need nor expect funds from the West for clean-energy development. But they quickly point out the inequity of the situation caused by the “opportunity cost” of effectively taxing China’s economy to solve a problem created by the rich. Certainly for the many countries in South Asia and Africa that have not been as successful as China in growing their economies, the opportunity cost of developing clean energy would have direct consequences for the well-being of billions of people.

A closer look at how China has financed energy development illustrates the problem. The low cost of finance for Chinese clean-energy firms is the direct result of what the Peterson Institute’s Nicholas Lardy describes as “financial repression.” This term refers to the regulatory depression of interest rates paid to everyday savers who earn low interest on their bank deposits. Those funds, in turn, become sources of low-cost capital for national priorities such as energy development. When bankers, encouraged and guided by local governments, provide that funding to clean-energy startups and growing companies, the temptation to engage in “crony capitalism” must be overwhelming. The inefficiency, corruption, and consequent political backlash have become urgent challenges for the current Chinese leadership.

Is China a clean-energy model for developing nations to emulate? Gallagher does not present a clear answer. She references China’s unprecedented success in energy efficiency. That success, more than anything else, has given China the confidence to set for itself difficult targets to cap and reduce carbon dioxide emissions. But her case studies show China is unprepared for its new, aggressive push into coal gasification and natural gas imports. Her review of renewable energy reveals the troubling reality that the term “clean energy” means different things to different people.

Gallagher is right that China is a good place for continued study of clean energy and what it means and how it might work. Hers is a useful book for updating the argument on climate response between developing and developed countries.

William Chandler ([email protected]) is a former laboratory fellow with the Pacific Northwest National Laboratory. He co-founded and currently serves as an advisor to Dalian East New Energy Development, Inc., a successful private Chinese joint venture. He recently co-authored China’s Future Generation: Assessing the Maximum Potential for Renewable Power Sources in China to 2050 (Annapolis, MD: Energy Transition Research Institute, 2014, www.etransition.org).

From the Hill – Winter 2015

“From the Hill” is adapted from the newsletter Science and Technology in Congress, ­published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

Future uncertain for COMPETES legislation

The America COMPETES Act first became law in 2007 with the goal of promoting innovation and boosting U.S. global competitiveness. It was reauthorized in 2010 and is once again up for reauthorization. Although the 2007 bill had bipartisan support, division along party lines is hurting chances for a comprehensive 2014 reauthorization.

There are currently four COMPETES bills. House Republicans initially split the legislation into two separate bills: the Frontiers in Innovation, Research, Science, and Technology (FIRST) Act and the Enabling Innovation for Science, Technology, and Energy in America (EINSTEIN) Act. Democrats in the House and Senate each proposed their own versions. With four competing proposals on the table, the outlook for the most recent iterations of the bill is uncertain.

The FIRST Act proposed a two-year reauthorization (FY 2014-FY 2015) for the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), with both agencies receiving a 1.5% increase in FY 2015. The EINSTEIN Act reauthorized the Department of Energy’s Office of Science (OSC) but not the Advanced Research Projects Agency-Energy (ARPA-E), which was created under the 2007 COMPETES bill.

The two versions prepared by the Democrats propose four-year reauthorizations (FY 2015-FY 2019) at higher funding levels. However, these bills differ from one another in a few key areas. The House Democrats’ bill (H.R. 4159) reauthorizes NSF, NIST, and DOE OSC, and focuses on four goals: supporting research, fostering innovation, creating jobs, and improving science, technology, engineering, and mathematics (STEM) education. In order to support research and foster innovation, the bill would increase funding for the three agencies by 5% each year, and it would reauthorize the National Nanotechnology Initiative, ARPA-E, a Regional Innovation Program, and the DOE Innovation Hubs. It would also establish the Federal Acceleration of State Technology Commercialization program in order to “advance United States productivity and global competitiveness by accelerating commercialization of innovative technology by leveraging federal support for state commercialization efforts.” Provisions for job creation in H.R. 4159 would include offering federal loan guarantees to small and mid-sized manufacturers to help them stay competitive, improving NIST’s Manufacturing Extension Partnership program, and helping local governments employ more technologies that improve energy efficiency.

Efforts to support and improve STEM education and the STEM workforce would include establishing an ARPA-ED to invest in R&D for educational technology, providing grants for students who receive STEM-related undergraduate degrees, and increasing participation by women and minorities in STEM fields.

The Senate bill (S. 2757) reauthorizes NSF and NIST from FY 2015-FY 2019, but excludes DOE, which is not within the jurisdiction of the Senate Commerce, Science, and Transportation Committee. The bill would provide annual increases of 6.7% for both agencies. Other goals include improving STEM education, supporting NSF’s social, behavioral, and economic sciences (SBE) directorate, reducing administrative burdens for government researchers, maintaining attendance at science conferences, and supporting NSF’s merit review process.

Like the House Democrats’ bill, S. 2757 prioritizes STEM education and the STEM workforce; the bill directs the National Science and Technology Council to collect input from various stakeholders on the five-year STEM education reorganization that was approved in the 2010 COMPETES Act. The bill would also establish a subcommittee to review administrative burdens on federally funded researchers and issue a report containing recommendations for improving efficiency in the grant submission and review processes. This is likely a response to findings of a recent National Science Board report, which concluded that grant applicants often spend more than 40% of their work time on administrative tasks.

Finally, the Senate bill offers support and praise for NSF’s merit review process, but does require a report from the agency detailing steps taken to improve transparency and accountability. This appears to be in response to certain provisions in the FIRST Act, which would have required NSF to write a justification for each grant awarded that certifies that the research in question would accomplish at least one of a few specified national goals.

It is this example of policy-related language coupled with low funding levels that has made it difficult to move a bipartisan bill forward in the House. Although the FIRST bill was voted out of both the subcommittee and the full committee, the votes fell along party lines, and the bill received little support from the scientific community. The EINSTEIN bill received a hearing but was not marked up as a stand-alone bill. That legislation was absorbed into a broader Department of Energy Research and Development Act of 2014, which authorized funding for a range of DOE programs.

In brief

Reps. Rosa DeLauro (D-CT) and Brian Higgins (D-NY) introduced legislation to facilitate funding increases for the National Institutes of Health (NIH). However, the potential for NIH budget growth is currently limited by the tight cap on discretionary spending. The bill, dubbed the Accelerating Biomedical Research Act, would adjust the spending cap to allow for increased NIH appropriations of up to 10% above the current year estimate for two years, and up to 5% thereafter.

The House passed by voice vote the bipartisan Revitalize American Manufacturing and Innovation Act of 2014 (H.R. 2996), introduced by Rep. Tom Reed (R-NY) in partnership with Rep. Joseph Kennedy (D-MA). The legislation would establish a Network for Manufacturing Innovation Program within the National Institute of Standards and Technology with the goal of improving U.S. manufacturing competitiveness.

The House of Representatives passed the American Super Computing Leadership Act of 2014 (H.R. 2495) and the Tsunami Warning, Education, and Research Act (H.R. 5309). The supercomputing bill would require that the Department of Energy develop, through a competitive merit review process, a program for partnerships between national laboratories, industry, and universities for exascale supercomputing research. The tsunami legislation would reauthorize funding for the National Oceanic and Atmospheric Administration’s National Tsunami Hazard Mitigation and Tsunami Research programs.

Agency updates

The Office of Science and Technology Policy released its policy for institutional oversight of life sciences dual-use research of concern (DURC). The policy details the necessary oversight to identify DURC and implement risk-mitigation measures. The policy covers specific types of experiments, such as enhancing the harmful consequences of an agent or toxin for 15 pathogens and toxins, including avian influenza virus. Accompanying the new policy are two complementary documents: A Companion Guide of Tools for the Identification, Assessment, Management, and Responsible Communication of Dual Use Research of Concern and Implementation of the U.S. Government Policy for Institutional Oversight of Life Sciences DURC: Case Studies.

The White House released a National Strategy on Combating Antibiotic-Resistant Bacteria that outlines five goals for combating the spread of resistant bacteria. The goals of the strategy are to: slow the emergence of resistant bacteria and prevent the spread of resistant infections; strengthen efforts to identify cases of antibiotic resistance; advance the development and use of rapid diagnostic tests; accelerate basic and applied research on new antibiotics, therapeutics, and vaccines; and improve international collaboration. President Obama signed an Executive Order directing the enactment of the strategy as well as creating a new Task Force for Combating Antibiotic-Resistant Bacteria to be co-chaired by the secretaries of Defense, Agriculture, and Health and Human Services. As part of the overall strategy, the administration is directing the National Institutes of Health and the Biomedical Advanced Research and Development Authority to co-sponsor a $20-million prize for the development of a rapid point-of-care diagnostic test to assist health-care workers. Timed to coincide with the release of the White House strategy, the President’s Council of Advisors on Science and Technology (PCAST) issued its report on Combating Antibiotic Resistance. The report outlines a series of recommendations for the federal government that parallel many of the actions outlined in the White House national strategy. The PCAST report assesses antibiotic resistance within human health care, including prescription overuse; animal agriculture, including promoting animal growth; drug development; and surveillance and response.

Rep. Trent Franks (R-AZ) introduced the Critical Infrastructure Protection Act (H.R. 3410), which would direct the Department of Homeland Security (DHS) to include the threat of electromagnetic pulse events as part of scenario planning, including the role that research and development can play in strategic planning. The bill passed the House on December 1 by voice vote, and will be considered by the Senate next.

Rep. Eric Swalwell (D-CA) introduced the National Laboratories Mean National Security Act (H.R. 3438), which would permit organizations funded by the DHS Urban Areas Security Initiative—a program to help local communities prepare for and protect against acts of terrorism—to work with the Department of Energy’s national laboratories in their community. The bill passed the House unanimously under a suspension of the rules vote, which requires a 2/3 majority.

On November 26, President Obama signed into law the Traumatic Brain Injury Reauthorization Act of 2014 (S. 2539), introduced by Sen. Orrin Hatch (R-UT), which reauthorizes appropriations for programs and activities at the Department of Health & Human Services relating to the study, prevention, and treatment of traumatic brain injury (TBI). In addition, the bill would direct the agency to improve interagency coordination of federal TBI activities.

Federal budget debate goes down to the wire

With its winter recess approaching and the continuing resolution on the budget about to expire on December 12, Congress continued its practice of just-in-time decisionmaking.

The latest proposal to keep the government from shutting down, while also responding to concerns surrounding the president’s executive action on immigration, is to fund the majority of the federal government via an omnibus bill and extend funding for immigration programs only until next year when the new Congress is in place and able to negotiate with the administration.

No Time for Pessimism about Electric Cars

The national push to adopt electric cars should be sustained until at least 2017, when a review of federal auto policies is scheduled.

A distinctive feature of U.S. energy and environmental policy is a strong push to commercialize electric vehicles (EVs). The push began in the 1990s with California’s Zero Emission Vehicle (ZEV) program, but in 2008 Congress took the push nationwide through the creation of a $7,500 consumer tax credit for qualified EVs. In 2009 the Obama administration transformed a presidential campaign pledge into an official national goal: putting one million plug-in electric vehicles on the road by 2015.

A variety of efforts has promoted commercialization of EVs. Through a joint rulemaking, the Department of Transportation and the Environmental Protection Agency are compelling automakers to surpass a fleet-wide average of 54 miles per gallon for new passenger cars and light trucks by model year 2025. Individual manufacturers, which are considered unlikely to meet the standards without EV offerings, are allowed to count each qualified EV as two vehicles instead of one in near-term compliance calculations.

The U.S. Department of Energy (DOE) is actively funding research, development, and demonstration programs to improve EV-related systems. Loan guarantees and grants are also being used to support the production of battery packs, electric drive-trains, chargers, and the start-up of new plug-in vehicle assembly plants. The absence of a viable business model has slowed the growth of recharging infrastructure, but governments and companies are subsidizing a growing number of public recharging stations in key urban locations and along some major interstate highways. Some states and cities have gone further by offering EV owners additional cash incentives, HOV-lane access, and low-cost city parking.

Private industry has responded to the national EV agenda. Automakers are offering a growing number of plug-in EV models (three in 2010; seventeen in 2014), some fueled entirely by electricity (battery electric vehicles, or BEVs) and others fueled partly by electricity and partly by a back-up gasoline engine (plug-in hybrids, or PHEVs). Coalitions of automakers, car dealers, electric utilities, and local governments are working together in some cities to make it easy for consumers to purchase or lease an EV, to access recharging infrastructure at home, at the office, or in their community, and to obtain proper service for their vehicle when problems occur. Government and corporate fleet purchasers are considering EVs, while cities as diverse as Indianapolis and San Diego are looking into EV-sharing programs for daily vehicle use. Among city planners and utilities, EVs are now seen as playing a central role in “smart” transportation and grid systems.

The recent push for EVs is hardly the market-oriented approach to innovation that would have thrilled Milton Friedman. It resembles somewhat the bold industrial policies in the post-World War II era that achieved some significant successes in South Korea, Japan, and China. Although the U.S. is a market-oriented economy, it is difficult to imagine that the U.S. successes in aerospace, information technology, nuclear power, or even shale gas would have occurred without a supportive hand from government. In this article, we make a pragmatic case for stability in federal EV policies until 2017, when a large body of real-world experience will have been generated and when a midterm review of federal auto policies is scheduled.

Laurence Gartel and Tesla Motors

Digital artist Laurence Gartel collaborated with Tesla Motors to transform the electric Tesla Roadster into a work of art by wrapping the car’s body panels in bold colorful vinyl designed by the artist. The Roadster was displayed and toured around Miami Beach during Miami’s annual Art Basel festival in 2010.

Gartel, an artist who has experimented with digital art since the 1970s, was a logical collaborator with Tesla given his creative uses of technology. He graduated from the School of Visual Arts, New York, in 1977, and has pursued a graphic style of digital art ever since. His experiments with computers, starting in 1975, involved the use of some of the earliest special effects synthesizers and early video paint programs. Since then, his work has been exhibited at the Museum of Modern Art; Long Beach Museum of Art; Princeton University Art Museum; MoMA PS 1, New York City; and the Norton Museum of Art, West Palm Beach, Florida. His work is in the collections of the Smithsonian Institution’s National Museum of American History and the Bibliotheque Nationale, Paris.


Image courtesy of the artist.

Governmental interest in EVs

The federal government’s interest in electric transportation technology is rooted in two key advantages that EVs have over the gasoline- or diesel-powered internal combustion engine. Since the advantages are backed by an extensive literature, we summarize them only briefly here.

First, electrification of transport enhances U.S. energy security by replacing dependence on petroleum with a flexible mixture of electricity sources that can be generated within the United States (e.g. natural gas, coal, nuclear power, and renewables). The U.S. is making rapid progress as an oil producer, which enhances security, but electrification can further advance energy security by curbing the nation’s high rate of consumption in the world oil market. The result: less global dependence on energy from OPEC producers, unstable regimes in the Middle East, and Russia.

Second, electrification of transport is more sustainable on a life-cycle basis because it causes a net reduction in local air pollution and greenhouse gas emissions, an advantage that is expected to grow over time as the U.S. electricity mix shifts toward more climate-friendly sources such as gas, nuclear, and renewables. Contrary to popular belief, an electric car that is powered by coal-fired electricity is still modestly cleaner from a greenhouse gas perspective than a typical gasoline-powered car. And EVs are much cleaner if the coal plant is equipped with modern pollution controls for localized pollutants and if carbon capture and storage (CCS) technology is applied to reduce carbon dioxide emissions. Since the EPA is already moving to require CCS and other environmental controls on coal-fired power plants, the environmental case for plug-in vehicles will only become stronger over time.
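To see why even coal-heavy electricity can leave an EV modestly ahead on greenhouse gases, a back-of-the-envelope calculation helps. The sketch below uses illustrative figures for coal-plant emissions intensity, grid and charging losses, vehicle efficiencies, and upstream fuel emissions; these are assumptions chosen for the example, not data drawn from this article.

```python
# Back-of-the-envelope CO2 comparison: an EV charged on coal power vs. a gasoline car.
# All figures below are illustrative assumptions, not data from the article.

COAL_G_CO2_PER_KWH = 1000      # assumed emissions at a coal plant, g CO2 per kWh generated
GRID_AND_CHARGING_LOSS = 0.15  # assumed share of generated energy lost before it reaches the battery
EV_KWH_PER_MILE = 0.30         # assumed EV consumption at the wheels

GASOLINE_G_CO2_PER_GALLON = 8887  # commonly cited EPA figure for tailpipe CO2 per gallon
UPSTREAM_FUEL_FACTOR = 1.25       # assumed markup for refining and distribution emissions
GASOLINE_MPG = 25                 # assumed fuel economy of a typical gasoline car

# EV: energy drawn at the plant exceeds energy used at the wheels because of losses.
ev_g_per_mile = EV_KWH_PER_MILE / (1 - GRID_AND_CHARGING_LOSS) * COAL_G_CO2_PER_KWH

# Gasoline car: tailpipe CO2 plus upstream fuel-cycle emissions.
gas_g_per_mile = GASOLINE_G_CO2_PER_GALLON * UPSTREAM_FUEL_FACTOR / GASOLINE_MPG

print(f"EV on coal power: ~{ev_g_per_mile:.0f} g CO2/mile")
print(f"Gasoline car:     ~{gas_g_per_mile:.0f} g CO2/mile")
```

Under these assumed numbers the coal-charged EV comes out around 350 g CO2 per mile against roughly 440 g for the gasoline car, which is the sense in which the EV is "modestly cleaner"; the gap widens as the grid mix shifts toward gas, nuclear, and renewables.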

Although the national push to commercialize EVs is less than six years old, there have been widespread claims in the mainstream press, on drive-time radio, and on the Internet that the EV is a commercial failure. Some prominent commentators, including Charles Krauthammer, have suggested that the governmental push for EVs should be reconsidered.

It is true that many mainstream car buyers are unfamiliar with EVs and are not currently inclined to consider them for their next vehicle purchase. Sales of the impressive (and pricey) Tesla Model S have been better than the industry expected, but ambitious early sales goals for the Nissan Leaf (a BEV) and the Chevrolet Volt (a PHEV) have not been met. General Electric Corporation backed off an original pledge to purchase 25,000 EVs. Several companies with commercial stakes in batteries, EVs, or chargers have gone bankrupt, despite assistance from the federal government.

“Early adopters” of plug-in vehicles are generally quite enthusiastic about their experiences, but mainstream car buyers remain hesitant. There is much skepticism in the industry about whether EVs will penetrate the mainstream new-vehicle market or simply serve as “compliance cars” for California regulators or become niche products for taxi and urban delivery fleets.

One of the disadvantages of EVs is that they are currently more costly to produce than comparably sized gasoline- and diesel-powered vehicles. The cost premium today is about $10,000-$15,000 per vehicle, primarily due to the high price of lithium-ion battery packs. The cost disadvantage has been declining over time due to cost-saving innovations in battery-pack design and production techniques, but there is disagreement among experts about how much and how fast production costs will decline in the future.

On the favorable side of the affordability equation, EVs have a large advantage in operating costs: electricity is about 65% cheaper than gasoline on an energy-equivalent basis, and most analysts project that the price of gasoline in the United States will rise more rapidly over time than the price of electricity. Additionally, repair and maintenance costs are projected to be significantly lower for plug-in vehicles than for gasoline vehicles. When all of the private financial factors are taken into account, the total cost of ownership over the lifetime of the EV is comparable to—or even lower than—that of a gasoline vehicle, and that advantage can be expected to grow as EV technology matures.
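The total-cost-of-ownership claim can be made concrete with a simple sketch. The purchase premium and tax credit echo the figures cited in this article; the electricity rate, gasoline price, annual mileage, and maintenance costs are illustrative assumptions, not reported data.

```python
# Illustrative total-cost-of-ownership comparison for an EV and a comparable gasoline car.
# Because the base vehicle is assumed comparable, we compare the EV's purchase premium
# (net of the tax credit) plus its fuel and upkeep against the gasoline car's fuel and upkeep.

YEARS, MILES_PER_YEAR = 10, 12000   # assumed ownership period and mileage

# EV assumptions
ev_price_premium = 12500       # midpoint of the $10,000-$15,000 premium cited above
federal_tax_credit = 7500      # consumer tax credit for qualified EVs
ev_kwh_per_mile = 0.30         # assumed consumption
electricity_per_kwh = 0.12     # assumed residential rate, $/kWh
ev_maintenance_per_year = 400  # assumed

# Gasoline-car assumptions
gasoline_mpg = 30
gasoline_per_gallon = 3.50     # assumed
gas_maintenance_per_year = 900 # assumed

miles = YEARS * MILES_PER_YEAR
ev_extra_cost = (ev_price_premium - federal_tax_credit
                 + miles * ev_kwh_per_mile * electricity_per_kwh
                 + YEARS * ev_maintenance_per_year)
gas_cost = (miles / gasoline_mpg * gasoline_per_gallon
            + YEARS * gas_maintenance_per_year)

print(f"EV premium plus fuel and upkeep over {YEARS} years: ${ev_extra_cost:,.0f}")
print(f"Gasoline fuel and upkeep over {YEARS} years:        ${gas_cost:,.0f}")
```

With these assumed inputs the EV's net premium plus operating costs come in below the gasoline car's fuel and maintenance bill, which is consistent with the "comparable or even lower" claim; different fuel prices or mileage assumptions will shift the result in either direction.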

Trends in EV sales

Despite the financial, environmental, and security advantages of the EV, early sales have not matched initial hopes. Nissan and General Motors led the high-volume manufacturers with EV offerings but have had difficulty generating sales, even though auto sales in the United States were steadily improving from 2010 through 2013, the period when the first EVs were offered. In 2013 EVs accounted for only about 0.2% of the 16 million new passenger vehicles sold in the U.S.

Nissan-Renault has been a leader. At the 2007 Tokyo Motor Show, Nissan shocked the industry with a plan to leapfrog the gasoline-electric hybrid with a new mass-market BEV, called the Fluence in France and the Leaf in the U.S. Nissan’s business plan called for EV sales of 100,000 per year in the U.S. by 2012, and Nissan was awarded a $1.6 billion loan guarantee by DOE to build a new facility in Smyrna, Tennessee to produce batteries and assemble EVs. The company had plans to sell 1.5 million EVs on a global basis by 2016 but, as of late 2013, had sold only 120,000 and acknowledged that it will fall short of its 2016 global goal by more than 1 million vehicles.

General Motors was more cautious than Nissan, planning production in the U.S. of 10,000 Volts in 2011 and 60,000 in 2012. However, neither target was met. GM did “re-launch” the Volt in early 2012 after addressing a fire-safety concern, obtaining HOV-lane access in California for Volt owners, cutting the base price, and offering a generous leasing arrangement of $350 per month for 36 months of use. Volt sales rose from 7,700 in 2011 to 23,461 in 2012 and 23,094 in 2013.

The most recent full-year U.S. sales data (2013) reveal that the Volt is now the top-selling plug-in vehicle in the U.S. (23,094), followed by the Leaf (22,610), the Tesla Model S (18,000), and the Toyota Prius Plug-In (12,088). In the first six months of 2014, EV sales were up 33% over the same period of 2013, led by the Nissan Leaf and an impressive start from the Ford Fusion Energi PHEV. Although Tesla’s sales have slowed a bit, the company has announced plans for a new $5 billion plant in the southwest of the U.S. to produce up to 500,000 vehicles for distribution worldwide.

President Obama, in 2009 and again in his January 2011 State of the Union address, set the ambitious goal of putting one million plug-in vehicles on the road by 2015. Two years after the address, DOE and the administration dropped the national 2015 goal, recognizing that it was overly ambitious and would take longer to achieve. But does this refinement of a federal goal really prove that EVs are a commercial failure? We argue that it does not, pointing to two primary lines of evidence: a historical comparison of EV sales with conventional hybrid sales; and a cross-country comparison of U.S. EV sales with German EV sales.

Comparison with the conventional hybrid

A conventional hybrid, defined as a gasoline-electric vehicle such as the Toyota Prius, is different from a plug-in vehicle. Hybrids vary in their design, but they generally recharge their batteries during the process of braking (“regenerative braking”) or, if the brakes are not in use during highway cruising, from the power of the gasoline engine. Thus, a conventional hybrid cannot be plugged in for charging and does not draw electricity from the grid.

Cars with hybrid engines are also more expensive to produce than gasoline cars, primarily because they have two propulsion systems. For a comparably sized vehicle, the full hybrid costs $4,000 to $7,500 more to produce than a gasoline version. But the hybrid buyer can expect 30% better fuel economy and lower maintenance and repair costs than with a gasoline-only vehicle. According to recent life-cycle and cost-benefit analyses, conventional hybrids compare favorably to the current generation of EVs.

Toyota is currently the top seller of hybrids, offering 22 models globally that feature the gasoline-electric combination. To date, Toyota has sold over 3 million Prius vehicles worldwide, and the Prius has recently expanded into an entire family of models. In 2013, U.S. sales of the Prius were 234,228, of which 30% were registered in the State of California, where the Prius was the top-selling vehicle line in both 2012 and 2013.

The success of the Prius did not occur immediately after introduction. Toyota and Honda built on more than a decade of engineering research funded by DOE and industry. Honda was actually the first company to offer a conventional hybrid in the U.S.—the Insight Hybrid in 1999—but Toyota soon followed in 2000 with the more successful Prius. Ford followed with the Escape Hybrid SUV. The experience with conventional hybrids underscores the long lead times in the auto industry, the multiyear process of commercialization, and the conservative nature of the mainstream U.S. car purchaser.

Fifteen years ago, critics of conventional hybrids argued that the fuel savings would not be enough to justify the cost premium of two propulsion systems, that the batteries would deteriorate rapidly and require expensive replacement, that resale values for hybrids would be discounted, that the batteries might overheat and create safety problems, and that hybrids were practical only for small, lightweight cars. “Early adopters” of the Prius, which carried a hefty price premium for a small car, were often wealthy, highly educated buyers who were attracted to the latest technology or wanted to make a pro-environment statement with their purchase. The process of expanding hybrid sales from early adopters to mainstream consumers took many years, and it continues today.

When the EV and the conventional hybrid are compared according to the pace of market penetration in the United States, the EV appears to be more successful (so far). Figure 1 illustrates this comparison by plotting the cumulative number of vehicles sold—conventional hybrids versus EVs—during the first 43 months of market introduction. At month 25, EV sales were about double the number of hybrid sales; at month 40 the ratio of cumulative EV sales to cumulative hybrid sales was about 2.2. The overall size of the new passenger-vehicle market was roughly equal in the two time periods.

When comparing the penetration rates of hybrids and EVs, it is useful to highlight some of the differences in the technologies, policies, and economic environments. The plug-in aspect of the EV calls for a much larger change in the routine behavior of motorists (e.g., nighttime and community charging) than does the conventional hybrid. The early installations of 220-volt home charging stations, which reduce recharging time from 12-18 hours to 3-4 hours, were overly expensive, time-consuming to set up with proper permits, and an irritation to early adopters of EVs. Moreover, the EV owner is more dependent on the decisions of other actors (e.g., whether employers or shopping malls supply charging stations and whether the local utility offers low electricity rates for nighttime charging) than is the hybrid owner.

The success of the conventional hybrid helped EVs get started by creating an identifiable population of potential early adopters that marketers of the EV have exploited. Now, one of the significant predictors of EV ownership is prior ownership of a conventional hybrid. Some of the early EV owners immediately gained HOV access in California, but Prius owners were not granted HOV lane access until 2004, several years after market introduction. California phased out HOV access for hybrids from 2007 to 2011 and now awards the privilege to qualified EV owners.

From a financial perspective, purchases of the conventional hybrid and the EV were not equally subsidized by the government. EV purchasers were enticed by a $7,500 federal tax credit; the tax deduction—and later credit—for conventional hybrid ownership was much smaller, at less than $3,200. Some states (e.g., California and Colorado) supplemented the $7,500 federal tax credit with $1,000 to $2,500 credits (or rebates) of their own for qualified EVs; few conventional hybrid purchasers were provided a state-level purchase incentive. Nominal fuel prices were around $2 per gallon but rising rapidly in 2000-2003, the period when the hybrid was introduced to the U.S. market; fuel prices were volatile and in the $3-$4 per gallon range from 2010-2013 when EVs were initially offered. The roughly $2,000 cost of a Level 2 (220-volt) home recharging station (equipment plus labor for installation) was for several years subsidized by some employers, utilities, government grants, or tax credits. Overall, financial inducements to purchase an EV from 2010 to 2013 were stronger than the inducements for a conventional hybrid from 2000 to 2003, possibly helping explain why the take-up of EVs has been faster.

Comparison with Germany

Another way to assess the success of EV sales in the United States since 2010 is to compare it with the experience of another country where EV policies are different. Germany is an interesting comparator because it is a prosperous country with a strong “pro-environment” tradition, a large and competitive car industry, and relatively high fuel prices of $6-$8 per gallon due to taxation. Electricity prices are also much higher in Germany than in the U.S. due to an aggressive renewables policy.

Like President Barack Obama, German Chancellor Angela Merkel has set a goal of putting one million plug-in vehicles on the road, but the target date in Germany is 2020 rather than 2015. Germany has also made a large public investment in R&D to enhance battery technology and a more modest investment in community-based demonstrations of EV technology and recharging infrastructure.

On the other hand, Germany has decided against instituting a large consumer tax credit similar to the €10,000 “superbonus” for EVs that is available in France. Small breaks for EV purchasers in Germany are offered on vehicle sales taxes and registration fees. Nothing equivalent to HOV-lane access is offered to German EV users yet. Germany also offers few subsidies for production of batteries and electric drivetrains and no loan guarantees for new plants to assemble EVs.

Since the German car manufacturers are leaders in the diesel engine market, the incentive for German companies to explore radical alternatives to the internal combustion engine may be tempered. Also, German engineers appear to be more confident in the long-term promise of the hydrogen fuel cell than in cars powered by lithium ion battery packs. Even the conventional hybrid engine has been slow to penetrate the German market, though there is some recent interest in diesel-electric hybrid technology. Daimler and Volkswagen have recently begun to offer EVs in small volumes but the advanced EV technology in BMW’s “i” line is particularly impressive.

FIGURE 1


Another key difference between Germany and the U.S. is that Germany has no regulation similar to California’s Zero Emission Vehicle (ZEV) program. The latest version of the ZEV mandate requires each high-volume manufacturer doing business in California to offer at least 15% of its vehicles as EVs or fuel cell vehicles by 2025. Some other states (including New York), which account for almost a quarter of the auto market, have joined the ZEV program. The ZEV program is a key driver of EV offerings in the U.S. In fact, some global vehicle manufacturers have stated publicly that, were it not for the ZEV program, they might not be offering plug-in vehicles to consumers. Since the EU’s policies are less generous to EVs, some big global manufacturers are focusing their EV marketing on the West Coast of the U.S. and giving less emphasis to Europe.

Overall, from 2010 to 2013 Germany experienced less than half the growth in EV market share that occurred in the U.S. The difference is consistent with the view that the policy push in the U.S. has made a difference. The countries in Europe where EVs are spreading rapidly (Norway and the Netherlands) have enacted large financial incentives for consumers coupled with coordinated municipal and utility policies that favor EV purchase and use.

Addressing barriers to adoption of EVs

The EV is not a static technology but a rapidly evolving technological system that links cars with utilities and the electrical grid. Automakers and utilities are addressing many of the barriers to more widespread market diffusion, guided by the reactions of early adopters.

Acquisition cost. The price premium for an EV is declining due to savings in production costs and price competition within the industry. Starting in February 2013, Nissan dropped the base price of the Leaf from $35,200 to $28,800 with only modest decrements to base features (e.g., loss of the telematics system). Ford and General Motors responded by dropping the prices of the Focus Electric and Volt by $4,000 and $5,000, respectively. Toyota chipped in with a $4,620 price cut on the plug-in version of the Prius (now priced under $30,000), though that model is eligible for only a $2,500 federal tax credit. And industry analysts report that the transaction prices for EVs are running even lower than the diminished list prices, in part due to dealer incentives and attractive financing deals.

Dealers now emphasize affordable leasing arrangements, with a majority of EVs in the U.S. acquired under leasing deals. Leasing allays consumer concerns that the batteries may not hold up to wear and tear, that resale values of EVs may plummet after purchase (a legitimate concern), and that the next generation of EVs may be vastly improved compared to current offerings. Leasing deals for under $200 per month are available for the Chevy Spark EV, the Fiat 500e, the Leaf, and Daimler’s Smart ForTwo EV; lease rates for the Honda Fit EV, the Volt, and the Ford Focus EV are between $200 and $300 per month. Some car dealers offer better deals than the nationwide leasing plans provided by vehicle manufacturers.

Driving range. Consumer concerns about limited driving range—80-100 miles for most EVs, though the Tesla achieves 200-300 miles per charge—are being addressed in a variety of ways. PHEVs typically have maximum driving ranges that are equal to (or better than) those of a comparable gasoline car, and a growing body of evidence suggests that PHEVs may attract more retail customers than BEVs. For consumers interested in BEVs, some dealers are also offering free short-term use of gasoline vehicles for long trips when the BEV has insufficient range. The upscale BMW i3 EV is offered with an optional gasoline engine for $3,850 that replenishes the battery as it runs low; the effective driving range of the i3 is thus extended from 80-100 miles to 160-180 miles.

Recharging time. Some consumers believe that the 3-4 hour recharging time with a Level 2 charger is too long. Use of super-fast Level 3 chargers can accomplish an 80% charge in about 30 minutes, although inappropriate use of Level 3 chargers can potentially damage the battery. In the crucial West Coast market, where consumer interest in EVs is the highest, Nissan is subsidizing dealers to make Level 3 chargers available for Leaf owners. BMW is also offering an affordable Level 3 charger. State agencies in California, Oregon, and Washington are expanding the number of Level 2 and Level 3 chargers available along interstate highways, especially Interstate 5, which runs from the Canadian to the Mexican border.
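The charging-time figures cited in this article follow from simple arithmetic: charging time is roughly the energy to be replaced divided by the charger’s power. The sketch below uses an assumed 24 kWh pack (typical of a Leaf-era BEV) and assumed charger power ratings; none of these specific numbers appear in the article, but similar inputs reproduce the 12-18 hour, 3-4 hour, and roughly 30-minute figures mentioned above.

```python
# Rough charging-time arithmetic: hours ≈ energy needed (kWh) / charger power (kW).
# Pack size and charger ratings below are assumptions for illustration only.

BATTERY_KWH = 24.0  # assumed usable pack size for a Leaf-era BEV

chargers_kw = {
    "Level 1 (120-volt outlet)": 1.4,    # assumed
    "Level 2 (220-volt station)": 6.6,   # assumed
    "Level 3 (DC fast, to 80%)": 50.0,   # assumed
}

for name, power_kw in chargers_kw.items():
    fraction = 0.8 if "80%" in name else 1.0   # Level 3 quoted to an 80% charge
    hours = fraction * BATTERY_KWH / power_kw
    print(f"{name}: roughly {hours:.1f} hours")
```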

As of 2013, a total of 6,500 Level 2 and 155 Level 3 charging stations were available to the U.S. public. Some station owners require users to be a member of a paid subscription plan. Tesla has installed 103 proprietary “superchargers” for its Model S that allow drivers to travel across the country or up and down both coasts with only modest recharging times. America’s recharging infrastructure is tiny compared to the 170,000 gasoline stations, but charging opportunities are concentrated in areas where EVs are more prevalent, such as southern California, San Francisco, Seattle, Dallas-Fort Worth, Houston, Phoenix, Chicago, Atlanta, Nashville, Chattanooga, and Knoxville.

Advanced battery and grid systems. R&D efforts to find improved battery systems have intensified. DOE recently set a goal of reducing the costs of battery packs and electric drive systems by 75% by 2022, with an associated 50% reduction in the current size and weight of battery packs. Whether DOE’s goals are realistic is questionable. Toyota’s engineers believe that by 2025 improved solid-state and lithium air batteries will replace lithium ion batteries for EV applications. The result will be a three- to five-fold rise in power at a significantly lower cost of production due to use of fewer expensive rare earths. Lithium-sulfur batteries may also deliver more miles per charge and better longevity than lithium ion batteries.

Researchers are also exploring demand-side management of the electrical grid with “vehicle-to-grid” (V2G) technology. This innovation could enable electric car owners to make money by storing power in their vehicles for later use by utilities on the grid. It might cost an extra $1,500 to fit a V2G-enabled battery and charging system to a vehicle, but the owner might recoup $3,000 per year from a load-balancing contract with the electric utility. It is costly for utilities to add storage capacity, and the motorist already owns the battery for driving, so a V2G contract could put that battery to fuller use.
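A quick payback calculation shows why V2G could be attractive to motorists if such contracts materialize. The sketch below uses only the two figures from the paragraph above (an assumed $1,500 equipment premium and an assumed $3,000 annual contract) and ignores battery wear and transaction costs, which real contracts would need to account for.

```python
# Simple payback sketch for the V2G example in the text (illustrative only;
# battery degradation and contract overhead are ignored).

V2G_PREMIUM = 1_500      # extra cost of a V2G-enabled battery and charger (from text)
ANNUAL_CONTRACT = 3_000  # assumed revenue from a utility load-balancing contract (from text)

payback_years = V2G_PREMIUM / ANNUAL_CONTRACT
ten_year_net = 10 * ANNUAL_CONTRACT - V2G_PREMIUM

print(f"Payback: {payback_years:.1f} years; ten-year net: ${ten_year_net:,}")
```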

Low-price electricity and EV sharing. Utilities and state regulators are also experimenting with innovative charging schemes that will favor EV owners who charge their vehicles at times when electricity demand is low. Mandatory time-of-use pricing has triggered adverse public reactions but utilities are making progress with more modest, incentive-based pricing schemes that favor nighttime and weekend charging. Atlanta is rapidly becoming the EV capital of the southern United States, in part because Georgia’s utilities offer ultra-low electricity prices to EV owners.

A French-based company has launched electric-car sharing programs in Paris and Indianapolis. Modeled after bicycle sharing, the programs let consumers rent an e-car for several hours or an entire day if they need a vehicle for multiple short trips in the city. The vehicle can be accessed with a credit card and returned at any of multiple points in the city. The commercial success of EV sharing is not yet demonstrated, but sharing schemes may play an important role in raising public awareness of the advancing technology.

The EV’s competitors

The future of the EV would be easier to forecast if the only competitor were the current version of the gasoline engine. History suggests, however, that unexpected competitors can emerge that change the direction of consumer purchases.

The EV is certainly not a new idea. In the early twentieth century, the United States was the largest user of electric cars in the world, and for a time more electric than gasoline-powered cars were sold. Steam-powered cars were also among the most popular offerings of that era.

EVs and steam-powered cars lost out to the internal combustion engine for a variety of reasons. Discovery of vast oil supplies made gasoline more affordable. Mass production techniques championed by Henry Ford dropped the price of a gasoline car more rapidly than the price of an electric car. Public investments in new highways connected cities, increased consumer demand for vehicles with long driving range, and therefore reduced the relative appeal of range-limited electric cars, whose value was highest for short trips inside cities. And car engineers devised more convenient ways to start a gasoline-powered vehicle, making such cars more appealing to female as well as male drivers. By the 1930s, the electric car had lost its place in the market and did not return for many decades.

Looking to the future, it is apparent that EVs will confront intensified competition in the global automotive market. The vehicles described in Table 1 are simply an illustration of the competitive environment.

Vehicle manufacturers are already marketing cleaner gasoline engines (e.g., Ford’s “EcoBoost” engines with turbochargers and direct-fuel injection) that raise fuel economy significantly at a price premium that is much less than the price premium for a conventional hybrid or EV. Clean diesel-powered cars, which have already captured 50% of the new-car market in Europe, are beginning to penetrate the U.S. market for cars and pick-up trucks. Toyota argues that an unforeseen breakthrough in battery technology will be required to enable a plug-in vehicle to match the improving cost-effectiveness of a conventional hybrid.

Meanwhile, the significant reduction in natural gas prices due to the North American shale-gas revolution is causing some automakers to offer vehicles that can run on compressed natural gas or gasoline. Proponents of biofuels are also exploring alternatives to corn-based ethanol that can meet environmental goals at a lower cost than an EV. Making ethanol from natural gas is one of the options under consideration. And some automakers believe that hydrogen fuel cells are the most attractive long-term solution, as the cost of producing fuel cell vehicles is declining rapidly.

TABLE 1


As attractive as some of the EV’s competitors may be, it is unlikely that regulators in California and other states will lose interest in EVs. (In theory, the ZEV mandate also gives manufacturers credit for cars with hydrogen fuel cells, but the refueling infrastructure for hydrogen is developing even more slowly than it is for EVs.) A coalition of eight states, including California, recently signed a Memorandum of Understanding aimed at putting 3.3 million EVs on the road by 2025. The states, which account for 23% of the national passenger vehicle market, have agreed to extend California’s ZEV mandate, ideally in ways that will allow manufacturers some compliance flexibility as to exactly where EVs are sold.

ZEV requirements do not necessarily reduce pollution or oil consumption in the near term, since they are not coordinated with national mileage and pollution caps. Thus, when more ZEVs are sold in California and other ZEV states, automakers are freed to sell more fuel-inefficient and polluting vehicles in non-ZEV states. Without better coordination between state and federal policies, the laudable goals of the ZEV mandate could be frustrated.

All things considered, America’s push toward transport electrification is off to a modestly successful start, even though some of the early goals for market penetration were overly ambitious. Automakers were certainly losing money on their early EV models, but that was true of conventional hybrids as well. The second generation of EVs now arriving in showrooms is likely to be more attractive to consumers, since the vehicles have been refined based on the experiences of early adopters. And as more recharging infrastructure is added, cautious consumers with “range anxiety” may become more likely to consider a BEV, or at least a PHEV.

Vehicle manufacturers and dealers are also beginning to focus on how to market the unique performance characteristics of an EV. Instead of touting primarily fuel savings or environmental virtue, marketers are beginning to echo a common sentiment of early adopters: EVs are enjoyable to drive because their relatively high torque delivers quiet yet powerful acceleration and a distinctive driving experience.

Now is not the right time to redo national EV policies. EVs and their charging infrastructure have not been available long enough to draw definitive conclusions. Vehicle manufacturers, suppliers, utilities, and local governments have made large EV investments with an understanding that federal auto-related policies will be stable until 2017, when a national mid-term policy review is scheduled.

It is not too early to frame some of the key issues that will need to be considered between now and 2017. First, are adequate public R&D investments being made in the behavioral as well as technological aspects of transport electrification? We believe that DOE needs to reaffirm the commitment to better battery technology while giving more priority to understanding the behavioral obstacles to all forms of green vehicles. Second, we question whether national policy should continue its primary focus on EVs. It may be advisable to stimulate a more diverse array of green vehicle technologies, including cars fueled by natural gas, hydrogen, advanced ethanol, and clean diesel fuel. Third, federal mileage and carbon standards may need to be refined to ensure cost-effectiveness and to provide a level playing field for the different propulsion systems. Fourth, highway-funding schemes need to shift from gasoline taxes to mileage-based road user fees in order to ensure that adequate funds are raised for road repairs and that owners of green vehicles pay their fair share. Fifth, California’s policies need to be better coordinated with federal policies in ways that accomplish environmental and security objectives and allow vehicle manufacturers some sensible compliance flexibility. Finally, on an international basis, policy makers in the European Union, Japan, Korea, China, California, and the United States should work together to accomplish more regulatory cooperation in this field, since manufacturers of batteries, chargers, and vehicles are moving toward global platforms that can efficiently provide affordable technology to consumers around the world.

Coming to a policy consensus in 2017 will not be easy. In light of the fast pace of change and the many unresolved issues, we recommend that universities and think tanks begin to sponsor conferences, workshops, and white papers on these and related policy issues, with the goal of analyzing the available information to create well-grounded recommendations for action come 2017.

John D. Graham ([email protected]) is dean, Joshua Cisney is a graduate student, Sanya Carley is an associate professor, and John Rupp is a senior research scientist at the School of Public and Environmental Affairs at Indiana University.

Military Innovation and the Prospects for Defense-Led Energy Innovation

EUGENE GHOLZ

Although the Department of Defense has long been the global innovation leader in military hardware, that capability is not easily applied to energy technology

Almost all plans to address climate change depend on innovation, because the alternatives by themselves—reducing greenhouse gas emissions via the more efficient use of current technologies or by simply consuming less of everything—are either insufficient, intolerable, or both. Americans are especially proud of their history of technology leadership, but in most sectors of the economy, they assume that private companies, often led by entrepreneurs and venture capitalists, will furnish the new products and processes. Unfortunately, energy innovation poses exceptionally severe collective action problems that limit the private sector’s promise. Everyone contributes emissions, but no one contributes sufficient emissions that a conscious effort to reduce them will make a material difference in climate change, so few people try hard. Without a carbon tax or emissions cap, most companies have little or no economic incentive to reduce emissions except as a fortuitous byproduct of other investments. And the system of production, distribution, and use of energy creates interdependencies across companies and countries that limit the ability of any one actor to unilaterally make substantial changes.

In principle, governments can overcome these problems through policies to coordinate and coerce, but politicians are ever sensitive to imposing costs on their constituents. They avoid imposing taxes and costly regulations whenever possible. Innovation presents the great hope to solve problems at reduced cost. In the case of climate change, because of the collective action problems, government will have to lead the innovative investment.

Fortunately, the U.S. government has a track record of success with developing technologies to address another public good. Innovation is a hallmark of the U.S. military. The technology that U.S. soldiers, sailors, and airmen bring to war far outclasses adversaries’. Even as Americans complain about challenges of deploying new military equipment, always wishing that technical solutions could do more and would arrive faster to the field, they also take justifiable pride in the U.S. defense industry’s routine exploitation of technological opportunities. Perhaps that industry’s technology savvy could be harnessed to develop low-emissions technologies. And perhaps the Defense Department’s hefty purse could purchase enough to drive the innovations down the learning curve, so that they could then compete in commercial markets as low-cost solutions, too.

That potential has attracted considerable interest in defense-led energy innovation. In fact, in 2008, one of the first prominent proposals to use defense acquisition to reduce energy demand came from the Defense Science Board, a group of expert advisors to the Department of Defense (DOD) itself. The DSB reported, “By addressing its own fuel demand, DoD can serve as a stimulus for new energy efficiency technologies…. If DoD were to invest in technologies that improved efficiency at a level commensurate with the value of those technologies to its forces and warfighting capability, it would probably become a technology incubator and provide mature technologies to the market place for industry to adopt for commercial purposes.” Various think tanks took up the call from there, ranging from the CNA Corporation (which includes the Center for Naval Analyses) to the Pew Charitable Trusts’ Project on National Security, Energy and Climate. Ultimately, the then–Deputy Assistant to the President for Energy and Climate Change, Heather Zichal, proclaimed her hope for defense-led energy innovation on the White House blog in 2013.

These advocates hope not only to use the model of successful military innovation to stimulate innovation for green technologies but to actually use the machinery of defense acquisition to implement their plan. They particularly hope that the DOD will use its substantial procurement budget to pull the development of new energy technologies. Even when the defense budget faces cuts as the government tries to address its debt problem, other kinds of government discretionary investment are even more threatened, making defense ever more attractive to people who hope for new energy technologies.

The U.S. government has in part adopted this agenda. The DOD and Congress have created a series of high-profile positions that include an Assistant Secretary of Defense for Operational Energy Plans and Programs within the Pentagon’s acquisition component. No one in the DOD’s leadership wants to see DOD investment diverted from its primary purpose of providing for American national security, but the opportunity to address two important policy issues at the same time is very appealing.

The appeal of successful military innovation is seductive, but the military’s mixed experience with high-tech investment should restrain some of the exuberance about prospects for energy innovation. We know enough about why some large-scale military innovation has worked, while some has not, to predict which parts of the effort to encourage defense-led energy innovation are likely to be successful; enough to refine our expectations and target our investment strategies. This article carefully reviews the defense innovation process and its implications for major defense-led energy innovation.

Defense innovation works because of a particular relationship between the DOD and the defense industry that channels investment toward specific technology trajectories. Successes on “nice-to-have” trajectories, from DOD’s perspective, are rare, because the leadership’s real interest focuses on national security. Civilians are well aware of the national security and domestic political risks of even the appearance of distraction from core warfighting missions. When it is time to make hard choices, DOD leadership will emphasize performance parameters directly related to the military’s critical warfighting tasks, as essentially everyone agrees it should. Even in the relatively few cases in which investment to solve the challenges of the energy sector might directly contribute to the military component of the U.S. national security strategy, advocates will struggle to harness the defense acquisition apparatus. But a focused understanding of how that apparatus works will make their efforts more likely to succeed.


Jamey Stillings #26, 15 October 2010. Fine art archival print. Aerial view over the future site of the Ivanpah Solar Electric Generating System prior to full commencement of construction, Mojave Desert, CA, USA.

Jamey Stillings

Photographer Jamey Stillings’ fascination with the human-altered landscape and his concerns for environmental sustainability led him to document the development of the Ivanpah Solar Power Facility. Stillings took 18 helicopter flights to photograph the plant, from its groundbreaking in October 2010 through its official opening in February 2014. Located in the Mojave Desert of California, Ivanpah Solar is the world’s largest concentrated solar thermal power plant. It spans nearly 4,000 acres of public land and deploys 173,500 heliostats (347,000 mirrors) to focus the sun’s energy on three towers, generating 392 megawatts of electricity, enough to power 140,000 homes.

The photographs in this series formed the basis for Stillings’ current project, Changing Perspectives on Renewable Energy Development, an aerial and ground-based photographic examination of large-scale renewable energy initiatives in the American West and beyond.

Stillings’ three-decade career spans documentary, fine art, and commissioned projects. Based in Santa Fe, New Mexico, he holds an MFA in photography from Rochester Institute of Technology, New York. His work is in the collections of the Library of Congress, Washington, DC; the Museum of Fine Arts, Houston; and the Nevada Museum of Art, Reno, among others, and has been published in The New York Times Magazine, Smithsonian, and fotoMagazin. His second monograph, The Evolution of Ivanpah Solar, will be published in 2015 by Steidl.

—Alana Quinn


Jamey Stillings #4546, 28 July 2011. Fine art archival print. Aerial overview of Solar Field 1 before heliostat construction, looking northeast toward Primm, NV.

How weapons innovation has succeeded

Defense acquisition is organized by programs, the largest and most important of which are almost always focused on developing a weapons system, although sometimes the key innovations that lead to improved weapons performance come in a particular component. For example, a new aircraft may depend on a better jet engine or avionics suite, but the investment is usually organized as a project to develop a fighter rather than one or more key components. Sometimes the DOD buys major items of infrastructure such as a constellation of navigation satellites, but those systems’ performance metrics are usually closely tied to weapons’ performance; for example, navigation improves missile accuracy, essential for modern warfare’s emphasis on precision strike. Similarly, a major improvement in radar can come as part of a weapons system program built around that new technology, as the Navy’s Aegis battle management system incorporated the SPY-1 phased array radar on a new class of ships. To incorporate energy innovation into defense acquisition, the DOD and the military services would similarly add energy-related performance parameters to their programs, most of which are weapons system programs. The military’s focus links technology to missions. Each project relies on a system of complex interactions of military judgment, congressional politics, and defense industry technical skill.


Jamey Stillings #8704, 27 October 2012. Fine art archival print. Aerial view showing delineation of future solar fields around an existing geologic formation.

Defense innovation has worked best when customers—DOD and the military services—understand the technology trajectory that they are hoping to pull and when progress along that technology trajectory is important to the customer organization’s core mission. Under those circumstances, the customer protects the research effort, provides useful feedback during the process, adequately (or generously) funds the development, and happily buys the end product, often helping the developer appeal to elected leaders for funding. The alliance between the military customer and private firms selling the innovation can overcome the tendency to free ride that plagues investment in public goods such as defense and energy security.

Demand pull to develop major weapons systems is not the only way in which the United States has innovated for defense, but it is the principal route to substantial change. At best, other innovation dynamics, especially technology-push efforts that range from measured investments to support manufacturing scale-up to the Defense Advanced Research Projects Agency’s drive for leap-ahead inventions, tend to yield small improvements in the performance of deployed systems in the military’s inventory. More often, because technological improvement itself is rarely sufficient to create demand, inventions derived from technology-push R&D struggle to find a home on a weapons system: Program offices, which actually buy products and thereby create the demand that justifies building production-scale factories, tend to feel that they would have funded the R&D themselves, if the invention were really needed to meet their performance requirements. Bolting on a new technology developed outside the program also can add technological risk—what if the integration does not work smoothly?—and program managers shun unnecessary risk. The partial exceptions are inventions such as stealth, where the military quickly connected the new technology to high-priority mission performance.

But most technology-push projects that succeed yield small-scale innovations that can matter a great deal at the level of local organizations but do not attract sufficient resources and political attention to change overall national capabilities. In energy innovation, an equivalent example would be a project to develop a small solar panel to contribute to electricity generation at a remote forward operating base, the sort of boon to warfighters that has attracted some attention during the Afghanistan War but that contributes to a relatively low-profile acquisition program (power generation as opposed to, say, a new expeditionary fighting vehicle) and will not even command the highest priority for that project’s program manager (who must remain focused on baseload power generation rather than solar augmentation).

In the more important cases of customer-driven military innovations, military customers are used to making investment decisions based on interests other than the pure profit motive. Defense acquisition requirements derive from leaders’ military judgment about the strategic situation, and the military gets the funding for needed research, development, and procurement from political leaders rather than profit-hungry investors. This process, along with the military’s relatively large purse as compared to even the biggest commercial customers, is precisely what attracts the interest of advocates of defense-led energy innovation: Because of the familiar externalities and collective action problems in the energy system, potential energy innovations often do not promise a rate of return sufficient to justify the financial risk of private R&D spending, but the people who make defense investments do not usually calculate financial rates of return anyway.

A few examples demonstrate the importance of customer preferences in military innovation. When the Navy first started its Fleet Ballistic Missile program, its Special Projects Office had concepts to give the Navy a role in the nuclear deterrence mission but not much money initially to develop and build the Polaris missiles. Lockheed understood that responsiveness was a key trait in the defense industry, so the company used its own funds initially to support development to the customer’s specifications. As a result, Lockheed won a franchise for the Navy’s strategic systems that continues today in Sunnyvale, California, more than 50 years later.

In contrast, at roughly the same time as Lockheed’s decision to emphasize responsiveness, the Curtiss-Wright Corporation, then a huge military aircraft company, attempted to use political channels and promises of great performance to sell its preferred jet engine design. However, Air Force buyers preferred the products of companies that followed the customer’s lead, and Curtiss-Wright fell from the ranks of leading contractors even in a time of robust defense spending. Today, after great effort and years in the wilderness, the company has rebuilt to the stature of a mid-tier defense supplier with a name recognized by most (but not all) defense industry insiders.



Jamey Stillings #9712, 21 March 2013. Fine art archival print. Aerial view of installed heliostats.

The contrasting experiences of Lockheed and Curtiss-Wright show the crucial importance of following the customer’s lead in the U.S. defense market. Entrepreneurs can bring what they think are great ideas to the DOD, including ideas for great new energy technologies, but the department tends to put its money where it wants to, based on its own military judgment.

Although the U.S. military can be a difficult customer if the acquisition executives lose faith in a supplier’s responsiveness, the military can also be a forgiving customer if firms’ good-faith efforts do not yield products that live up to all of the initial hype, at least for programs that are important to the Services’ core missions. A technology occasionally underperforms to such an extent that a program is cancelled (for example, the ill-fated Sergeant York self-propelled antiaircraft gun of the 1980s) but in many cases, the military accepts equipment that does not meet its contractual performance specifications. The Services then either nurture the technology through years of improvements and upgrades or discover that the system is actually terrific despite failing to meet the “required” specs. The B-52 bomber is perhaps the paradigm case: It did not meet its key performance specifications for range, speed, or payload, but it turned out to be such a successful aircraft that it is still in use 50 years after its introduction and is expected to stay in the force for decades to come. The Army similarly stuck with the Bradley Infantry Fighting Vehicle through a difficult development history. Trying hard and staying friendly with the customer is the way to succeed as a defense supplier, and because the military is committed to seeking technological solutions to strategic problems, major defense contractors have many opportunities to innovate.

This pattern stands in marked contrast to private and municipal government investment in energy infrastructure, where underperformance in the short term can sour investors on an idea for decades. The investors may complete the pilot project, because municipal governments are not good at cutting their losses after the first phase of costs are sunk (though corporations may be more ruthless, for example in GM’s telling of the story of the EV-1 electric car). But almost no one else wants to risk repeating the experience, even if project managers can make a reasonable case that the follow-on project would perform better as a result of learning from the first effort.

And it’s the government—so politicians play a role

Of course, military desire for a new technology is not sufficient by itself to get a program funded in the United States. Strong political support from key legislators has also been a prerequisite for technological innovation. Over the years, the military and the defense contractors have learned to combine performance specifications with political logic. The best way to attract political support is to promise heroic feats of technological progress, because the new system should substantially outperform the equipment in the current American arsenal, even if that previous generation of equipment was only recently purchased at great expense. The political logic simply compounds the military’s tendency for technological optimism, creating tremendous technology pull.

In fact, Congress would not spend our tax dollars on the military without some political payoff, because national security poses a classic collective action problem. All citizens benefit from spending on national defense whether they help pay the cost or not, so the government spends tax dollars rather than inviting people to voluntarily contribute. But taxes are not popular, and raising money to provide public goods is a poor choice for a politician unless he can find a specific political benefit from the spending in addition to furthering the diffuse general interest.

Military innovations’ political appeal prevents the United States from underinvesting in technological opportunities. Sometimes that appeal comes from ideology, such as the “religion” that supports missile defense. Sometimes the appeal comes from an idiosyncratic vision: for example, a few politicians like Sen. John Warner contributed to keeping unmanned aerial vehicle (UAV) programs alive before 9/11, before the War on Terror made drone strikes popular. And sometimes the appeal comes from the ability to feed defense dollars to companies in a legislator’s district. In the UAV case, Rep. Norm Dicks, who had many Boeing employees in his Washington State district, led political efforts to continue funding UAV programs after the end of the Cold War.


Jamey Stillings #7626, 4 June 2012. Fine art archival print. Workers install a heliostat on a pylon. Background shows assembled heliostats in “safe” or horizontal mode. Mirrors reflect the nearby mountains.

This need for political appeal presents a major challenge to advocates of defense-led energy innovation, because the political consensus for energy innovation is much weaker than the one for military innovation. Some prominent political leaders, notably Sen. John McCain, have very publicly questioned whether it is appropriate for the DOD to pay attention to energy innovation, which they view as a distraction from the DOD’s primary interest in improved warfighting performance. McCain wrote a letter to the Secretary of the Navy, Ray Mabus, in July 2012, criticizing the Navy’s biofuels initiative by pointedly reminding Secretary Mabus, “You are the Secretary of the Navy, not the Secretary of Energy.” Moreover, although almost all Americans agree that the extreme performance of innovative weapons systems is a good thing (Americans expect to fight with the very best equipment), government support for energy innovation, especially energy innovation intended to reduce greenhouse gas emissions, faces strong political headwinds. In some quarters, ideological opposition to policies intended to reduce climate change is as strong as the historically important ideological support for military investment in areas like missile defense.


Jamey Stillings #10995, 4 September 2013. Fine art archival print. Solar flux testing, Solar Field 1.

The defense industry also provides a key link in assembling the political support for military innovation that may be hard to replicate for defense-led energy innovation. The prime contractors take charge of directly organizing district-level political support for the defense acquisition budget. To be funded, a major defense acquisition project needs to fit into a contractor-led political strategy. The prime contractors, as part of their standard responsiveness to their military customers, almost instantly develop a new set of briefing slides to tout how their products will play an essential role in executing whatever new strategic concept or buzzword comes from the Pentagon. And their lobbyists will make sure that all of the right congressional members and staffers see those slides. But those trusted relationships are built on understanding defense technology and on connections to politicians interested in defense rather than in energy. There may be limits to the defense lobbyists’ ability to redeploy as supporters of energy innovation.


Jamey Stillings #7738, 4 June 2012. Fine art archival print. View of construction of the dry cooling system of Solar Field 1.

Other unusual features of the defense market reinforce the especially strong and insular relationship between military customers and established suppliers. Their relationship is freighted with strategic jargon and security classification. Military suppliers are able to translate the language in which the military describes its vision of future combat into technical requirements for systems engineering, and the military trusts them to temper optimistic hopes with technological realism without undercutting the military’s key objectives. Military leaders feel relatively comfortable informally discussing their half-baked ideas about the future of warfare with established firms, ideas that can flower into viable innovations as the military officers go back and forth with company technologists and financial officers. That iterative process has given the U.S. military the best equipment in the world in the past, but it tends to limit the pool of companies to the usual prime contractors: Lockheed Martin, Boeing, Northrop Grumman, Raytheon, General Dynamics, and BAE Systems. Those companies’ core competency is in dealing with the unique features of the military customer.

Jargon and trust are not the only important features of that customer-supplier relationship. Acquisition regulations also specify high levels of domestic content in defense products, regardless of the cost; that a certain fraction of each product will be built by small businesses and minority- and women-owned companies, regardless of their ability to win subcontracts in fair and open competition; and that defense contractors will comply with an extremely intrusive and costly set of audit procedures to address the threat of perceived or very occasionally real malfeasance. These features of the defense market cannot be wished away by reformers intent on reducing costs: Each part of the acquisition system has its defenders, who think that the social goal or protection from scandal is worth the cost. The defense market differs from the broader commercial market in the United States on purpose, not by chance. Majorities think that the differences are driven by good reasons.

The implication is that the military has to work with companies that are comfortable with the terms and conditions of working for the government. That constraint limits the pool of potential defense-led energy innovators. It would also hamper the ability to transfer any defense-led energy innovations to the commercial market, because successful military innovations have special design features and extra costs built into their value chain.

In addition to their core competency in understanding the military customer, defense firms, like most other companies, also have technological core competencies. In the 1990s and 2000s, it was fashionable in some circles to call the prime contractors’ core competency “systems integration,” as if that task could be performed entirely independently from a particular domain of technological expertise. In one of the more extreme examples, Raytheon won the contract as systems integrator for the LPD-17 class of amphibious ships, despite its lack of experience as a shipbuilder. Although Raytheon had for years led programs to develop highly sophisticated shipboard electronics systems, the company’s efforts to lead the team building the entire ship contributed to an extremely troubled program. In this example, company and customer both got carried away with their technological optimism and their emphasis on contractor responsiveness. In reality, the customer-supplier relationship works best when it calls for the company to develop innovative products that follow an established trajectory of technological performance, where the supplier has experience and core technical capability. Defense companies are likely to struggle if they try to contribute to technological trajectories related to energy efficiency or reduced greenhouse gas emissions, trajectories that have not previously been important in defense acquisition.


Jamey Stillings #11060, 4 September 2013. Fine art archival print. View north of Solar Fields 2 and 3.

That is not to say that the military cannot introduce new technological trajectories into its acquisition plans. In fact, the military’s emphasis on its technological edge has explicitly called for disruptive innovation from time to time, and the defense industry has responded. For example, the electronics revolution involved huge changes in technology, shifting from mechanical to electrical devices and from analog to digital logic, requiring support from companies with very different technical core competencies. Startup companies defined by their intellectual property, though, had little insight into the complex world of defense contracting, and little desire to figure out the military jargon, the trusted relationships, the bureaucratic red tape, and the political byways, so they partnered with established prime contractors. Disruptive innovators became subcontractors, formed joint ventures, or sold themselves to the primes. The trick is for established primes to serve as interfaces and brokers to link the military’s demand pull with the right entrepreneurial companies with skills and processes for the new performance metrics. Recently, some traditional aerospace prime contractors, led by Boeing and Northrop Grumman, have used this approach to compete in the market for unmanned aerial vehicles. Perhaps they could do the same in the area of energy innovation.

What the military customer wants

Given the pattern of customer-driven innovation in defense, the task confronting advocates of defense-driven energy innovation seems relatively simple: Inject energy concerns into the military requirements process. If they succeed, then the military innovation route might directly address key barriers that hamper the normal commercial process of developing energy technologies. With the military’s interest, energy innovations might find markets that promise a high enough rate of return to justify the investment, and energy companies might convince financiers to stick with projects through many lean years and false starts before they reach technological maturity, commercial acceptance, and sufficient scale to earn profits.

The first step is to understand the customers’ priorities. From the perspective of firms that actually develop and sell new defense technologies, potential customers include the military services with their various components, each with a somewhat different level of interest in energy innovation.

Military organizations decide the emphasis in the acquisition budget. They make the case, ideally based on military professional judgment, for the kinds of equipment the military needs most. They also determine the systems’ more detailed requirements, such as the speed needed by a front-line fighter aircraft and the type(s) of fuel that aircraft should use. They may, of course, turn out to be wrong: Strategic threats may suddenly change, some technological advantages may not have the operational benefits that military leaders expected, or other problems could emerge in their forecasts or judgments. Nevertheless, these judgments are extremely influential in defining acquisition requirements. Admitting uncertainty about requirements often delays a program: Projects that address a “known” strategic need get higher priority from military leaders and justify congressional spending more easily.

Not surprisingly, military buyers almost always want a lot of things. When they set the initial requirements, before the budget and technological constraints of actual program execution, the list of specifications can grow very long. Even though the process in principle recognizes the need for tradeoffs, there is little to force hard choices early in the development of a new military technology. Adding an energy-related requirement would not dramatically change the length of the list. But when the real spending starts and programs come up for evaluation milestones, the Services inevitably need to drop some of the features that they genuinely desired. Relevance to the organizations’ critical tasks ultimately determines the emphasis placed on different performance standards during those difficult decisions. Even performance parameters that formally cannot be waived, like those specified in statute, may face informal pressure for weak enforcement. Programs can sometimes get a “Gentleman’s C” that allows them to proceed, subordinating a goal that the buying organization thinks is less important.

Energy technology policy advocates looking for a wealthy investor to transform the global economy probably ask too much of the DOD.

For example, concerns about affordability and interoperability with allies’ systems have traditionally received much more rhetorical emphasis early in programs’ lives than actual emphasis in program execution. When faced with the question of whether to put the marginal dollar into making the F-22 stealthy and fast or into giving the F-22 extensive capability to communicate, especially with allies, the program office not surprisingly emphasized the former key performance parameters rather than the latter nice feature.

Given that military leaders naturally emphasize performance that responds directly to strategic threats, and that they are simultaneously being encouraged by budget austerity to raise the relative importance of affordability in defense acquisition decisions, energy performance seems more likely to end up like interoperability than like stealth in the coming tradeoff deliberations. In a few cases, the energy-related improvements will directly improve combat performance or affordability, too, but those true “win-win” solutions are not very common. If they were, there would be no appeals for special priority for energy innovation.

The recent case of the ADVENT jet engine program shows the difficulty. As the military begins procurement of the F-35 fighter for the Air Force, Navy, and Marine Corps as well as for international sales, everyone agrees that having two options for the engine would be nice. If Pratt & Whitney’s F135 engine runs into unexpected production or operational problems, a second engine would be available as a backup, and competition between the two engines would presumably help control costs and might stimulate further product improvement. However, the military decided that the fixed cost of paying GE to develop and manufacture a second engine would be too high to be justified even for a market as enormous as the F-35 program. The unofficial political compromise was to start a public-private partnership with GE and Rolls-Royce called ADVENT, which would develop the next generation of fighter engine that might compete to get onto F-35 deliveries after 2020. ADVENT’s headline target for performance improvement is a 25% reduction in specific fuel consumption, which would reduce operating costs and, more important, would increase the F-35’s range and its ability to loiter over targets, directly contributing to its warfighting capabilities, especially in the Pacific theater, where distances between bases and potential targets are long. Although this increase in capability seems particularly sensible, given the announced U.S. strategy of “rebalancing” our military toward Asia, the Air Force has struggled to come up with its share of funding for the public-private partnership and has hesitated to prepare for a post-2020 competition between the new engine and the now-established F135. The Air Force may have enough to worry about trying to get the first engine through test and evaluation, and paying the fixed costs of a future competitor still seems like a luxury in a time of budget constraint. Countless potential energy innovations have much weaker strategic logic than the ADVENT engine, and if ADVENT has trouble finding a receptive buyer, the others are likely to have much more trouble.

Of course, military culture also offers some hopeful points for the energy innovation agenda. For example, even if energy innovation adds complexity to military logistics in managing a mix of biofuels, or generating and storing distributed power rather than using standardized large-capacity diesel generators, the military is actually good at dealing with complexity. The Army has always moved tons of consumables and countless spare parts to the front to feed a vast organization of many different communities (infantry, armor, artillery, aviation, etc.). The Navy’s power projection capability is built on a combination of careful planning of what ships need to take with them, flexible purchasing overseas, and underway replenishment. The old saw that the Army would rather plan than fight may be an exaggeration, but it holds more than a grain of truth, because the Army is genuinely good at planning. More than most organizations, the U.S. military is well prepared to deal with the complexity that energy innovation and field experimentation will inject into its routines. Even if the logistics system seems Byzantine and inefficient, the military’s organizational culture does not have antibodies against the complexity that energy innovation might bring.

Jamey Stillings #11590, 5 September 2013. Fine art archival print. Solar flux testing, Solar Field 3.

Who will support military-led innovation?

The potential for linking energy innovation to the DOD’s core mission seems especially important and exciting right now because of the recent experience at war, and even more because the recent wars happen to have involved a type of fighting with troops deployed to isolated outposts far from their home bases, in an extreme geography that stressed the logistics system. But as the U.S. effort in Afghanistan draws down, operational energy use will account for a smaller share of total military energy consumption, meaning that operational energy innovations will have less effect on energy security. More important, operational energy innovations will be of less interest to the military customers, who according to the 2012 Strategic Guidance are not planning for a repeat of such an extreme situation as the war in Afghanistan. Even if reality belies their expectations (after all, they did not expect to deploy to Afghanistan in 2001, either), acquisition investments follow the ex ante plans, not the ex post reality.

Specific military organizations that have an interest in preparing to fight with a light footprint in austere conditions may well continue the operational energy emphasis of the past few years. The good news for advocates of military demand pull for energy innovation is that special operations forces are viewed as the heroes of the recent wars, making them politically popular. They also have their own budget lines that are less likely to be swallowed by more prosaic needs such as paying for infrastructure at a time of declining defense budgets. While the conventional military’s attention moves to preparation against a rising near-peer competitor in China (a possible future, if not the only one, for U.S. strategic planning), special operations may still want lightweight powerful batteries and solar panels to bring power far off the grid. Even if a lot of special operations procurement buys custom jobs for highly unusual missions, the underlying research to make special operations equipment may also contribute to wider commercial uses such as electric cars and distributed electricity generation, if not to other challenges like infrastructure-scale energy storage and grid integration of small-scale generators.

Jamey Stillings #9395, 21 March 2013. Fine art archival print. Sunrise, view to the southeast of Solar Fields 3, 2, and 1.

Working with industry for defense-led energy innovation requires treading a fine line. Advocates need to understand the critical tasks facing specific military organizations, meaning that they have to live in the world of military jargon, strategic thinking, and budget politics. At the same time, the advocates need to be able to reach nontraditional suppliers who have no interest in military culture but are developing technologies that follow performance trajectories totally different from those of the established military systems. More likely, it will not be the advocates who will develop the knowledge to bridge the two groups, their understandings of their critical tasks, and the ways they communicate and contract. It will be the DOD’s prime contractors, if their military customers want them to respond to a demand for energy innovation.

Defense really does need some new energy technologies, ranging from fuel-efficient jet engines to easily rechargeable lightweight batteries, and the DOD is likely to find some money for particular technologies. Those technologies may also make a difference for the broader energy economy. But energy technology policy advocates looking for a wealthy investor to transform the global economy probably ask too much of the DOD. Military innovations that turn out to have huge commercial implications—innovations such as the Internet and the Global Positioning System—do not come along very often, and it takes decades before their civilian relatives are well understood and widely available. The military develops these products because of its own internal needs, driven by military judgment, congressional budget politics, and the core competencies of defense-oriented industry.

In a 2014 report, the Pew Project on National Security, Energy and Climate Change blithely discussed the need to “chang[e] the [military] culture surrounding how energy is generated and used….” Trying to change the way the military works drives into the teeth of military and political resistance to defense-led energy innovation. Changing the culture might also undermine the DOD’s ability to innovate; after all, one of the key reasons why Pew and others are interested in using the defense acquisition apparatus for energy innovation is that mission-focused technology development at the DOD has been so successful in the past. Better to focus defense-led energy innovation efforts on projects that actually align with military missions rather than stretching the boundaries of the concept and weakening the overall effort.

Recommended reading

Thomas P. Ehrhard, Air Force UAVs: The Secret History (Arlington, VA: Mitchell Institute for Airpower Studies, July 2010).

Eugene Gholz, “Eisenhower versus the Spinoff Story: Did the Rise of the Military-Industrial Complex Hurt or Help America’s Commercial Competitiveness?” Enterprise and Society 12, no. 1 (March 2011).

Dwight R. Lee, “Public Goods, Politics, and Two Cheers for the Military-Industrial Complex,” in Robert Higgs, ed., Arms, Politics, and the Economy: Historical and Contemporary Perspectives (New York, NY: Holmes & Meier, 1990), pp. 22–36.

Thomas L. McNaugher, New Weapons, Old Politics: America’s Military Procurement Muddle (Washington, DC: Brookings Institution, 1989).

David C. Mowery, “Defense-related R&D as a model for ‘Grand Challenges’ technology policies,” Research Policy 41, no. 10 (December 2012).

Report of the Defense Science Board Task Force on DoD Energy Strategy: “More Fight–Less Fuel” (Washington, DC: Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics, February 2008).

Harvey M. Sapolsky, Eugene Gholz, and Caitlin Talmadge, US Defense Politics: The Origins of Security Policy (London, UK: Routledge, Revised and Expanded 2nd edition, 2013).

Eugene Gholz ([email protected]) is an associate professor at the LBJ School of Public Affairs of The University of Texas at Austin.

Retire to Boost Research Productivity!

University leaders confront multiple challenges with an aging faculty. Writing in Inside Higher Ed in 2011, longtime education reporter Dan Berrett spotlighted the “Gray Wave” of a growing number of faculty members 60 years of age or older (think baby boomers and increasing lifespans) holding tightly onto their positions, shielded by the lack of mandatory retirement. Many of them have the ability and desire to continue their scholarly work, and they fear multiple losses attendant to retirement. But as they hang on, younger people may be kept off the academic ladder. Might there be “win-win” semi-retirement options to enable faculty to remain productive and engaged, while opening opportunities for new generations?

The answer may be yes, based on one case study—my own. I retired as active faculty in December 2001, at the (rather young) age of 56. I had been jointly appointed as a professor in industrial and systems engineering and in public policy at the Georgia Institute of Technology. Upon my retirement, Georgia Tech indicated that I needed to pick one school to reduce administrative overhead for an emeritus faculty member, so I’m now an emeritus professor, and part-time researcher, in public policy.

Since retirement, my research productivity has escalated. Amused colleagues have kidded that the secret to boosting research output is to retire and that I ought to share this tale. So, I offer this “N = 1 case study” to stimulate thinking about retiree research and to raise some intriguing faculty policy issues.

What retirement did to my research publication activity is captured in Table 1. It compares two five-year post-retirement periods with corresponding pre-retirement periods. The data resulted from searching in Web of Science, skipping my first year of retirement (2002) as ambiguous and leaving out one year in the middle of the overall period, just to facilitate comparison. I also left out four papers published after retirement that reflect research conducted at a small company I joined, to make this a tighter academic “before versus after.”

The data show a sharp increase in research publication. This same phenomenon appeared in examining only my journal articles (the table includes all publications). For this subgroup, there are 14 for 1991–2001 versus 42 for 2003–2013. Aha—retire and publication productivity triples!

One alternative hypothesis to explain this increased productivity is that I’m a slow learner and that my research has been trending upward throughout the pre- and post-retirement periods. The data don’t conflict with that. (So maybe the elixir is simply aging?)

Citations accrued by the papers (also gathered from Web of Science) provide an additional, if again imperfect, measure of research value. The tally of cites to the 1991–2001 papers is 254, versus 727 for the 2003–2013 papers: another near-tripling. And the annualized citation rate (total cites divided by the years elapsed since each period’s average publication year) jumps from 14 before retirement to 121 after.
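
For readers who want to check the arithmetic, here is a minimal sketch in Python of the before-versus-after comparison, using only the journal-article and citation totals reported above. The 2014 reference year for the citation search and the use of each period’s midpoint year are my assumptions for illustration, not details taken from Table 1.

periods = {
    "pre-retirement (1991-2001)":  {"midpoint": 1996, "articles": 14, "citations": 254},
    "post-retirement (2003-2013)": {"midpoint": 2008, "articles": 42, "citations": 727},
}

REFERENCE_YEAR = 2014  # assumed year of the Web of Science search

for label, d in periods.items():
    years = REFERENCE_YEAR - d["midpoint"]   # years since the period's midpoint
    rate = d["citations"] / years            # annualized citation rate
    print(f"{label}: {d['articles']} journal articles, "
          f"{d['citations']} citations, ~{rate:.0f} citations per year")

# Ratios: articles 42/14 = 3.0x; citations 727/254 = 2.9x;
# annualized citation rate roughly 121/14 = 8.6x.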

Behind the numbers

We can argue over which statistics are most meaningful, but what they all show is that my research productivity has gone up. But why? And so what?

Several factors seem to have contributed to the rise. Although cute, and the stimulus for this reflection, “retire to boost productivity” does not convey enough information to account well for the gain. Let’s scan some additional factors worthy of consideration.

To begin, my teaching load before retirement was moderate, averaging two courses per semester, or four per year. Since shortly after retirement, and with the end of teaching, I’ve reduced my workday by roughly 20%. I now spend roughly half of my work time at Georgia Tech, with essentially no teaching duties and much-reduced administrative chores. But the other half of my work time is now devoted to my role as director of R&D for Search Technology Inc., based in Norcross, Georgia. So more time than before is devoted to my role in the business. My colleagues at the company provide invaluable technical support for the text analyses that underlie most of my research, in which I use VantagePoint software to analyze sets of R&D abstract records. Balancing it all out, I’d guesstimate that under the current arrangements, my weekly hours devoted to research increased post-retirement, but not drastically, from 15 before to 20 after.

TABLE 1

The disparity between detailed policies and procedures for the active faculty and the dearth thereof for retired faculty warrants protest and action.

How about university roles in supporting retiree research? Georgia Tech allows me to continue to conduct research and provides essential research infrastructure. Post-retirement, I continued to advise two Ph.D. students through graduation. I cannot advise new ones, although I do serve on Ph.D. dissertation committees and support research assistants from project funding.

I am a technology watcher. My research focuses on science, technology, and innovation intelligence, forecasting, and assessment, so I don’t need laboratory facilities. Shared workspace for graduate students and visiting researchers is a requisite, I’d say. I am usually on campus once weekly for meetings but don’t much use a shared workspace myself. Onsite and remote-access library resources (especially databases such as Web of Science) are essential for my bibliometric analyses.

Georgia Tech provides an institutional base for me to be principal investigator (PI) or participant on funded research (paid on an hourly basis up to a halftime threshold). It also provides regular administrative support for management of my funded research (and charges projects the regular overhead rates, but my fringe benefit rate is very low, as a retiree).

My research gains enormously from ongoing collaboration in Georgia Tech research activities, including through the Program for Science, Technology & Innovation Policy, where I participate in weekly meetings, and through ties with the Technology Policy & Assessment Center. Such access to intellectual stimulus, interchange of ideas, and energetic graduate students eager to do the heavy lifting is, in my view, the major driver of my observed research productivity gains. (My 14 pre-retirement articles included in this analysis averaged 3.2 authors; the 42 post-retirement ones averaged 3.8.) These arrangements counter the potential isolation of retirement.

Tellingly, the National Science Foundation (NSF) accepts proposals from me as PI or participant, with Georgia Tech or Search Technology providing institutional bases. An NSF Center for Nanotechnology in Society award to Arizona State University has supported Georgia Tech through a subcontract to generate and maintain a substantial “nano” publications and patents data set. This has provided key data resources for a series of analyses and resulting papers—at least 17 since 2008—and has been a major factor in my productivity.

NSF also made a Science of Science & Innovation Policy award to Georgia Tech, with me as PI. Ultimately, some of the work proposed under the award did not take place, but NSF allowed us to reallocate the funds to make small targeted sub-awards intended to generate project-related research in critical areas. I am convinced that this flexible support helped boost research collaboration.

There is also an international component to our work. Building on my 20-year collaboration with Donghua Zhu, a professor of management science and engineering at Beijing Institute of Technology, a string of Ph.D. students from his lab, with funding from China, has come to Georgia Tech for year-long visits. I believe both sides gain as the students work on our projects and learn our approaches to science, technology, and innovation analyses to initiate research pointing toward their dissertations. In 2008–2009, two such students, Ying Guo and Lu Huang, set the model for productive research collaboration, thanks to their initiative, English skills, solid analytical backgrounds, and research interests that meshed very well with those of colleagues at our Program for Science, Technology & Innovation Policy. I have continued to collaborate with Ying and Lu since they returned to their Beijing institution and moved into faculty positions, and these efforts have resulted in nine coauthored papers published between 2011 and 2013—more than with any other colleagues in that period. Active collaboration also continues with their successors who visited Georgia Tech. This international exchange has thus been a huge post-retirement boost to my research collaboration and productivity.

Beyond N = 1

What about evidence beyond N = 1? A modest contingent of scholars studies retirees, devoting attention to many facets, such as work, leisure, health, university access, and research activity. I’ll borrow a bit from several of them—with great, if indirectly acknowledged, thanks—in considering the various factors that contribute to research productivity and policy issues.

How many retirees continue their scholarly research? I made a casual sampling of five retired faculty members from each of five organizations: the MIT Sloan School and Department of Mechanical Engineering, the Georgia Tech Departments of Chemical & BioEngineering and Physics, and the Stanford University School of Engineering. A search in the Web of Science for a recent 1.5-year period turned up publications by 20% of them. More broadly, estimates in the literature suggest that up to about half of recently retired faculty remain active in research, teaching, or both. Perhaps not surprisingly, retired faculty tend to be more engaged in academic activities for the first 10 years or so after retirement, tailing off after that.

Here are factors that I believe affect retiree research opportunities:

Areas for exploration

So what policy options does my case of post-retirement research boosterism and a reading of the literature raise for university administrators? Here I identify five areas for further exploration:

1. The word “retirement” conveys the idea of ceasing one’s prior work activity. Should universities allow retirees to continue research? If so, how so? Can they advise Ph.D. students, serve as PIs on grants, maintain lab facilities? I think the answers should generally be “yes.” But some institutions still favor “clear out your desk” retirement.

2. University administrators should consider formally and clearly establishing policies for supporting retiree research. Appointing faculty committees to examine the issues may prove valuable here. Among the questions to be considered: Does the university provide an institutional base for ongoing retiree research? If so, what is provided across the university and what through individual units? With what conditions and restrictions, and for whom? And for how long?

3. Central administration and accounting units should address issues associated with the attendant costs and benefits of having retirees continue to conduct research. For example, who will pay for a retiree’s computer support, and who will accrue overhead on grants received? On the flip side, my case suggests that facilitating retiree research can provide highly favorable benefit/cost ratios. Universities would be well advised to crunch the numbers thoroughly and pay heed to the results.

4. There may be a critical divide between retired faculty who will need physical facilities, such as lab space and equipment, and those who don’t. But even as universities may find it easier to accommodate the needs of faculty who don’t need lots of infrastructure, at least as policies begin to unfold, they can continually look for ways to provide support elements—I prefer not to consider them “privileges”—to make it easier for both camps to remain engaged.

5. Whatever retirement research policies are determined, universities really need to communicate them to everyone, including those faculty considering retirement (early or otherwise).

A key mission of universities is to generate new knowledge. Enabling a great human capital resource—retired faculty and staff—to contribute to that mission seems wise. Not doing so strikes me as wrongheaded. And other faculty appear to agree, because surveys find a significant number of retired faculty lamenting restrictions on their access to university resources needed to continue their scholarship. Rising life expectancies may only amplify the interest and the payoffs for universities, for society, and fundamentally for retirees who still find great life fulfillment through continuing their scholarly pursuits.

Aiding “good luck”

Given the potential rewards from retiree research, what support should universities provide? Returning to my personal experiences, fostering ongoing collegial interaction seems paramount, especially staying connected with potential collaborators. My case touches on several means to enhance collaboration, including having international graduate students visit for a year during the course of their studies. Georgia Tech has been supportive of that by moving to establish policies on background and language proficiency checks and providing support in obtaining visas, among other helpful measures. Ongoing interaction with grad students can benefit both them and the retired professor.

Academic research relies on funding. In my case, NSF is the main supporter, so I’m very appreciative that competition for funding is open to retired faculty. Policy options run the gamut. One possibility to consider would be set-asides for retired faculty, perhaps small grants within programs to support conference presentations, travel, or whatever. Or special funding could be designated to facilitate collaboration between retired and active faculty at different universities. A variant would be to support emeritus faculty who mentor or collaborate (or both) with junior faculty. Drawing on my experiences, modest support to encourage visiting Ph.D. students to spend a year with a retired faculty member as mentor can pay off nicely for both. At the opposite extreme, funders could preclude retired faculty from acting as PIs (but I hope they don’t).

Beyond specific actions, an overarching message is that the future should not be left to chance. My tale contains a happy confluence of factors that has brought me much satisfaction, enabled active research, and returned value to my university (and to the taxpayers who ultimately provided the federal funding dollars). I lucked out; my choices (especially early retirement) were made pretty casually, without careful consideration of ongoing research means and ends. Better for universities to spell out options so that faculty can plan wisely, and I think those options should be weighted to encourage “active retirement.”

More attention should also be paid to faculty and staff retirement issues writ broadly, reaching beyond the research environment. Literature addressing faculty retirement finds a lamentable lack of information for, fairness toward, and sensibility about, faculty retirees who want to stay involved. Much could be offered to make retirement more attractive at modest cost. The disparity between detailed policies and procedures for the active faculty and the dearth thereof for retired faculty warrants protest and action.

A major concern for universities and the research enterprise more broadly is to expand opportunities for young Ph.D.s for research and full faculty positions. Although, as I’ve suggested, the issue certainly requires more exploration and discussion, one obvious way to create those opportunities is to make semi-retirement attractive and rewarding for the graying faculty. Encourage us to retire! Our productivity may even go up, as we take advantage of greater flexibility in pursuing not just our research but life satisfaction, while making more room for faculty positions for younger generations.

The True Grand Challenge for Engineering: Self-Knowledge

In 2003, the National Academy of Engineering (NAE) published A Century of Innovation celebrating “20 engineering achievements that transformed our lives” across the 20th century, from automobiles to the Internet. Five years later, it followed up with 14 Grand Challenges for engineering in the 21st century, including making solar energy affordable, providing energy from fusion, securing cyberspace, and enhancing virtual reality. But only the most cursory mention was made of the greatest challenge of all: cultivating deeper and more critical thinking, among engineers and nonengineers alike, about the ways engineering is transforming how and why we live.

What Percy Bysshe Shelley said about poets two centuries ago applies even more to engineers today: They are the unacknowledged legislators of the world. By designing and constructing new structures, processes, and products, they are influencing how we live as much as any laws enacted by politicians. Would we ever think it appropriate for legislators to pass laws that could transform our lives without critically reflecting on and assessing those laws? Yet neither engineers nor politicians deliberate seriously on the role of engineering in transforming our world. Instead, they limit themselves to celebratory clichés about economic benefit, national defense, and innovation.

Where might we begin to promote more critical reflection in our engineered lives? One natural site would be engineering education. In this respect, it is again revealing to note the role of the NAE Grand Challenges. Not just in the United States, but globally as well, the technical community is concerned about the image of engineering in the public sphere and its limited attractiveness to students. The 2010 United Nations Educational, Scientific and Cultural Organization study Engineering: Issues, Challenges and Opportunities for Development lamented that despite a “growing need for multi-talented engineers, the interest in engineering among young people is waning in so many countries.” The Grand Challenges have thus been deployed in the Grand Challenges Scholars Program as a way to attract more students to the innovative life. But to adapt the title of Vannevar Bush’s Science Is Not Enough, a cultivated enthusiasm for engineering is insufficient. More pointedly, to paraphrase Socrates, “The unexamined engineering life is not worth living.” More than once in dialogue with Greek fellow citizens who boasted of their prowess in meeting challenges, Socrates referenced the words inscribed on the Temple of Apollo at Delphi: Know thyself. It is a motto that engineers—and all of us whose lives are informed by engineering—could well apply to ourselves.

An axial age

In a critical reflection on world history, the German philosopher Karl Jaspers observed how in the first millennium BCE, human cultures in Asia and Europe independently underwent a profound transformation that he named the Axial Age. Thinkers as diverse as Confucius, Laozi, Buddha, Socrates, and the Hebrew prophets began to ask what it means to be human. Humans no longer simply accepted whatever ways of life they were born into; they began to subject their cultures to critical assessment. Today we are entering a new Axial Age, one in which we no longer simply accept the physical world into which we are born. But engineering makes almost no effort to give engineers—or any of the rest of us—the tools to reflect on themselves and their world-transforming enterprise.

Engineering programs like to promote innovation in product creation, and to some extent in pedagogy, yet almost never in critical thinking about what it means to be an engineer. Surely the time has come for engineering schools to become more than glorified trade schools whose graduates can make more money than the hapless English majors whom Garrison Keillor lampoons on A Prairie Home Companion. How about engineers who can think holistically and critically about their own role in making our world and assist their nonengineering fellow citizens as well in thinking that goes beyond superficial promotions of the new? And where might engineers acquire some tools with which to cultivate such abilities? One place to start would be through engagement with the traditions of thought and critical self-reflection that emerged from the original Axial Age: what we now call the humanities.

Two cultures recidivus

To mention engineering and the humanities in the same sentence immediately calls to mind C. P. Snow’s famous criticism of those “natural Luddites” who do not have the foggiest notion about such technical basics as the second law of thermodynamics. Do historians, literary scholars, and philosophers really know anything that can benefit engineers?

Snow’s “two cultures” argument, as well as many discussions since, conflates science and engineering. The powers often attributed to science, such as the ability to overcome poverty through increased production of goods or to send people to the Moon by building spaceships, belong more to engineering. As a result, there are actually two two-culture issues. The tension between two forms of knowledge production (the sciences and the humanities) is arguably less significant than the tension between designing and constructing the world and reflecting on what it means (engineering and the humanities).

Indeed, although there is certainly room for improvement on the humanities side, I venture that a majority of humanities teachers in engineering schools today could pass the test Snow proposed to the literary intellectuals he skewered. Yet in my experience relatively few engineers, when invited to reflect on their professions, can do much more than echo libertarian appeals to the need for unfettered innovation to fuel endless growth. Even the more sophisticated commentators on engineering such as Samuel Florman (The Existential Pleasures of Engineering), Henry Petroski (To Engineer Is Human), and Billy Vaughn Koen (Discussion of the Method: Conducting the Engineer’s Approach to Problem Solving) are largely absent from engineering curricula.

The two-cultures problem in engineering schools is distinctive. It concerns how to infuse into engineering curricula the progressive humanities and qualitative social sciences, as pursued by literary intellectuals who strive to make common cause with that minority of engineers who are themselves critical of the cultural captivity of techno-education. There are, for instance, increasing efforts to develop programs in humanitarian engineering, service learning, and social justice. Nevertheless, having taught in three engineering schools, I—like many humanities scholars who teach engineering students—experience a continuing tension between engineering and the humanities. Such is especially the case today, in an increasingly corporatized environment at an institution oriented toward the efficient throughput of students who can serve as handmaids of an expanding energy industry.

On the one side, engineering faculty (administrators even more so) have a tendency to look on humanities courses as justified only insofar as they provide communication skills. They want to know the cash value of humanities courses for professional success. The engineering curriculum is so full that they feel compelled to limit humanities and social science requirements, commonly to little more than a semester’s worth, spread over an eight-semester degree program crammed with science and engineering.

Unlike professional degrees in medicine or law, which typically require a bachelor’s degree of some sort before professional focus, entry into engineering is via the B.S. degree alone. This has undoubtedly been one feature attracting many students who are the first members of their families to attend college. It is an upward-mobility degree, even if there is not quite the demand for engineers that the engineering community often proclaims.

Why humanities?

On the other side, humanities faculty (there are seldom humanities administrators with any influence in engineering schools) struggle to justify their courses. These justifications are of three unequal types, taking an instrumental, enhanced instrumental, and intrinsic-value approach.

The first, default appeal is to the instrumental value of communication skills. Engineers who cannot write or otherwise communicate their work are at a disadvantage, not only in abilities to garner respect from people outside the engineering community but even within technical work teams. The humanities role in teaching critical thinking is an expanded version of this appeal. All engineers need to be critical thinkers when analyzing and proposing design solutions to technical problems. But why no critical thinking about the continuous push for innovation itself? Too often, the humanities are simply marshalled to provide rhetorical skills for jumping aboard the more-is-better innovation bandwagon—or criticized for failing to do so.

A second, enhanced instrumental appeal stresses how humanities knowledge, broadly construed to include the qualitative social sciences, can help engineers manage allegedly irrational resistance to technological innovation from the nonengineering world. This enhanced instrumental appeal argues that courses in history, political science, sociology, anthropology, psychology, and geography—perhaps even in literature, philosophy, and religion—can locate engineering work in its broader social context. Increasingly, engineers recognize that their work takes place in diverse sociocultural situations that need to be negotiated if engineering projects are to succeed.

In similar ways, engineering practice can itself be conceived as a techno-culture all its own. The interdisciplinary field of science, technology, and society (STS) studies receives special recognition here. Many interdisciplinary STS programs arose inside engineering schools, and even after their transformation to disciplinary science and technology studies, some departments have remained closely connected to engineering faculties.

The enhanced instrumental appeal further satisfies the requirements of ABET (the acronym is now the official name of what was once the Accreditation Board for Engineering and Technology). In order to be ABET-accredited, engineering programs must be structured around 11 student outcomes. Central to these outcomes are appropriate mastery of technical knowledge in mathematics and the sciences, including the engineering sciences, and the practices of engineering design, including abilities “to identify, formulate, and solve engineering problems” and “to function on multidisciplinary teams.” Engineers further need to learn how to design products, processes, and systems “to meet desired needs within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability” and possess “the broad education necessary to understand the impact of engineering solutions in a global, economic, environmental, and societal context.” Finally, engineering students should be taught “an ability to communicate effectively” and “professional and ethical responsibility.” Clearly the humanities need to be enrolled in delivering the fuzzier of these outcomes.

The challenge of professional ethical responsibility deserves highlighting. It is remarkable how, although professional engineering codes of ethics identify the promotion of public safety, health, and welfare as primary obligations, the engineering curriculum shortchanges these key concepts. There exists a field termed safety engineering but none called health or welfare engineering. And even if there were, because the promotion of these values is an obligation for all engineers, their examination would need to be infused across the curriculum. Physicians, who also have a professional commitment to the promotion of health, have to deal with the meaning of this concept in virtually every course they take in medical school.

The 2004 NAE report The Engineer of 2020: Visions of Engineering in the New Century emphasized that engineering education needs to cultivate not just analytic skills and technical creativity but communication skills, management leadership, and ethical professionalism. Meeting almost any of the challenges on the NAE’s subsequent list of Grand Challenges, many engineers admit, will require extensive social context knowledge from the humanities and social sciences. The humanities are accepted as providing legitimate if subordinate service to engineering professionalism even as they are regularly shortchanged in engineering schools.

But it is a third, less instrumental justification for the humanities in engineering education that will be most important for successfully engaging the ultimate Grand Challenge of self-knowledge, that is, of thinking reflectively and critically about the kind of world we wish to design, construct, and inhabit in and through our technologies. The existential pleasures of engineering, not to mention its economic benefits, are limited. Human beings are not only geeks and consumers. They are also poets, artists, religious believers, citizens, friends, and lovers in various degrees all at the same time. The engineering curriculum should be more than an intensified vocational program that assumes students either are, or should become, one-dimensional in their lives. Engineers, like all of us, should be able to think about what it means to be human. Indeed, critical reflection on the meaning of life in a progressively engineered world is a new form of humanism appropriate to our time—a humanities activity in which engineers could lead the way.

Re-envisioning engineering

Primarily aware of requirements for graduation, engineering students are seldom allowed or encouraged to pursue in any depth the kind of humanities that could assist them, and all of us, in thinking about the relationship between engineering and the good life. They sign up for humanities classes on the basis of what fits their schedules, but then sometimes discover classes that not only provide relief from the forced march of technical work but that broaden their sense of themselves and stimulate reflection on what they really want to do with their lives. A few months ago a student in an introduction to philosophy class told me he was tired of engineering physics courses that always had to solve practical problems. He wanted to think about the nature of reality.

If he drops out of engineering, as some of my students have done, the humanities are likely to be blamed, rather than credited with expanding a sense of the world and life. The cost/benefit assessment model in colleges today is progressively coarsening the purpose of higher education. As Clark University psychologist Jeffrey Arnett argues, emerging adulthood is a period of self-discovery during which students can explore different paths in love and work. It took me seven years and three universities to earn my own B.A., years that were in no way cost/benefit-negative. Bernie Machen, president of the University of Florida, has been quoted (in the Chronicle of Higher Education) as telling students that their “time in college remains the single-best opportunity … to explore who you are and your purpose in life.” Engineering programs, because of their rigorous technical requirements, tend to be the worst offenders at cutting intellectual exploration short. This situation needs to be reversed, in the service of both engineering education and of our engineered world. If they really practiced what they preached about innovation, engineering schools would lead the way with expanded curricula and even B.A. degrees in engineering.

In physicist Mark Levinson’s insightful documentary film Particle Fever, the divide between experimentalists and theorists mirrors that between engineering and the humanities. But in the case of the Large Hadron Collider search for the Higgs boson chronicled in the film, the experimentalists and theorists work together, insofar as theorists provide the guidance for experimentation. Ultimately, something similar has to be the case for engineering. Engineering does not provide its own justification for transforming the world, except at the unthinking bottom-line level, or much guidance for what kind of world we should design and construct. We wouldn’t think of allowing our legislators to make laws without our involvement and consent; why are we so complacent about the arguably much more powerful process of technical legislation?

As mentioned, what Jaspers in the mid-20th century identified as an Axial Age in human history—one in which humans began to think about what it means to be human—exists today in a new form: thinking about what it means to live in an engineered world. In this second Axial Age, we are beginning to think about not just the human condition but what has aptly been called the techno-human condition: our responsibility for a world, including ourselves, in which the boundaries dissolve between the natural and the artificial, between the human and the technological. And just as a feature of the original Axial Age was learning to affirm limits to human action—not to murder, not to steal—so we can expect to learn not simply to affirm engineering prowess but to limit and steer our technological actions.

Amid the Grand Challenges articulated by the NAE there must thus be another: The challenge of thinking about what we are doing as we turn the world into an artifact and the appropriate limitations of this engineering power. Such reflection need not be feared; it would add to the nobility of engineering in ways that little else could. It is also an innovation within engineering in which others are leading the way. The Netherlands, for instance (not surprisingly, as the country that, given its dependence on the Deltawerken, comes closest to being an engineered artifact), has the strongest community of philosophers of engineering and technology in the world, based largely at the three technological universities of Delft, Eindhoven, and Twente and associated with the 3TU Centre for Ethics and Technology. China, which is undergoing the most rapid engineering transformation in world history, is also a pioneer in this field. The recent 20th-anniversary celebration of the Chinese Academy of Engineering included extended sessions on the philosophy of engineering and technology. Is it not time for the leaders of the engineering community in the United States, instead of fear-mongering about the production of engineers in China, to learn from China—and to insist on a deepening of our own reflections? The NAE Center for Engineering, Ethics, and Society is a commendable start, but one too little appreciated in the U.S. engineering education world, and its mandate deserves broadening and deepening beyond ethical and social issues.

The true Grand Challenge of engineering is not simply to transform the world. It is to do so with critical reflection on what it means to be an engineer. In the words of the great Spanish philosopher José Ortega y Gasset, in the first philosophical meditation on technology, to be an engineer and only an engineer is to be potentially everything and actually nothing. Our increasing engineering prowess calls upon us all, engineers and nonengineers alike, to reflect more deeply about who we are and what we really want to become.

Carl Mitcham ([email protected]) is professor of Liberal Arts and International Studies at the Colorado School of Mines and a member of the adjunct faculty at the European Graduate School in Saas-Fee, Switzerland.

From the Hill – Fall 2014

Budget discussions inch forward

Congress returned to Washington in September to do a little business before heading home to campaign. As usual at this time of year, there’s still quite a bit of work to do to complete the budget process for fiscal year (FY) 2015, which begins October 1. Senate Appropriations Chair Barbara Mikulski (D-MD) remains interested in a September omnibus bill that would package all or several bills into one, but the odds seem to favor the House Republicans’ preference for a continuing resolution until the new Congress takes office.

With all this uncertainty, it’s hard to say when appropriations will be finalized and what they will be. Nevertheless, enough discussion and preliminary action have taken place to provide a general picture of congressional preferences for R&D funding in FY 2015. The Senate committees have prepared budgets for the six largest R&D spending bills, which account for 97% of all federal R&D, but none of these budgets have cleared the full Senate. House committees have prepared budgets for all the major categories except the Labor, Health and Human Services (HHS), and Education bill [including the National Institutes of Health (NIH)]. The full House has approved the Defense (DOD); Energy and Water; and Commerce, Justice, and Science [which includes the National Science Foundation (NSF), National Aeronautics and Space Administration, National Institute of Standards and Technology, and National Oceanic and Atmospheric Administration] appropriations bills.

So far, according to AAAS estimates, current House R&D appropriations, which do not include NIH, would result in a 0.8% increase from FY 2014 in nominal dollars; current Senate appropriations for the same agencies would result in just a 0.1% increase. With the Labor-HHS bill included, the Senate appropriation would result in a 0.7% increase. All of these figures would be reductions in constant dollars.
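
To make the nominal-versus-constant-dollar point concrete, here is a minimal sketch in Python that converts the AAAS nominal estimates quoted above into real changes under an assumed deflator. The 1.7% deflator is a purely illustrative assumption, not a figure drawn from the AAAS analysis or the budget documents.

ASSUMED_DEFLATOR = 0.017  # hypothetical FY 2014 -> FY 2015 inflation rate

def real_change(nominal_change, deflator=ASSUMED_DEFLATOR):
    """Convert a nominal year-over-year change into a constant-dollar change."""
    return (1 + nominal_change) / (1 + deflator) - 1

estimates = [
    ("House bills (excluding NIH)", 0.008),
    ("Senate bills, same agencies", 0.001),
    ("Senate bills including Labor-HHS", 0.007),
]

for label, nominal in estimates:
    print(f"{label}: nominal {nominal:+.1%}, real {real_change(nominal):+.1%}")

# Under this assumed deflator, every real change comes out negative,
# which is the point made in the paragraph above.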

Most R&D spending has followed essentially the same trajectory in recent years. After a sharp decline with sequestration in FY 2013, budgets experienced at least a partial recovery in FY 2014 and seem likely to have a small inflation-adjusted decline in FY 2015. There has been some notable variation. Funding for health, environmental, and education research has made less progress in returning to pre-sequester levels. Defense science and technology (S&T) spending neared pre-sequester levels in FY 2014 but seems likely to fall short of that mark in FY 2015. Downstream technology development funding at DOD would remain well below FY 2012 levels.

In the aggregate, FY 2015 R&D appropriations are not terribly far apart in the House and Senate. This is a departure from what happened in developing the FY 2014 budget, when the House and Senate differed on overall discretionary spending levels. This difference led to large discrepancies in R&D appropriations. The conflict over discretionary spending was resolved in last December’s Bipartisan Budget Act, and this agreement has led to the relatively similar R&D appropriations being produced by each chamber for FY 2015.

This is consistent with the idea that the primary determinant of the R&D budget is the size of the overall discretionary budget. However, it is also worth noting that the very modest nominal increase in aggregate R&D spending would still be larger than the 0.2% nominal growth projected for the total discretionary budget. Indeed, R&D in the five major nondefense bills listed above would generally beat this pace by a clear margin in both chambers, suggesting that appropriators with limited fiscal flexibility have prioritized science and innovation to some extent.

Under current appropriations, federal R&D would continue to stagnate as a share of the economy, as it would under the president’s original budget request (excluding the proposed but largely ignored Opportunity, Growth, and Security Initiative). Federal R&D, which represented 1.04% of gross domestic product (GDP) in FY 2003 at the end of the NIH budget doubling, is now below 0.8%. Both current appropriations and the president’s request would place it at about 0.75% of GDP in FY 2015. Research alone, excluding development, has declined from 0.47% of GDP in FY 2003 to 0.39% today, and current proposals would take it a bit lower, to about 0.37%.

Even though final decisions for FY 2015 appropriations are still some months away, agencies are already at work on their budget proposals for FY 2016. The administration released a set of memos outlining science and technology (S&T) priorities for the FY 2016 budget, due in February. Priorities include: advanced manufacturing; clean energy; earth observations; global climate change; information technology and high-performance computing; innovation in life sciences, biology, and neuroscience; national and homeland security; and R&D for informed policymaking and management.

Congress tackles administrative burden

In response to a March 2014 National Science Board (NSB) report on how some federal rules and regulations were placing an unnecessary burden on research institutions, the House Science, Space, and Technology Committee’s oversight and research panels held a joint hearing on June 12 on Reducing Administrative Workload for Federally Funded Research. The witnesses, including Arthur Bienenstock, the chairman of the NSB’s Task Force on Administrative Burdens; Susan Wyatt Sedwick, the president of the Federal Demonstration Partnership (FDP) Foundation; Gina Lee-Glauser, the vice president of research at Syracuse University; and Allison Lerner, the inspector general of NSF, represented stakeholders affected by changes in the oversight of federally funded research.

Concern over investigators’ administrative burdens began in 2005 when an FDP report revealed that federally funded investigators spend an average of 42% of their time on administrative tasks, dealing with a panoply of regulations in areas such as conflict of interest, research integrity, human subjects protections, animal care and use, and disposal of hazardous wastes. Despite federal reform efforts, in 2012 the FDP found that the average time spent on “meeting requirements rather than conducting research” remained at 42%. In response, the NSB convened a task force charged with investigating this issue and developing recommendations for reform.

On March 29, 2013, the task force issued a request for information (RFI) in the Federal Register, inviting “principal investigators with Federal research funding … to identify Federal agency and university requirements that contribute most to their administrative workload and to offer recommendations for reducing that workload.” The task force used responses from the RFI and information collected at three roundtables with investigators and administrators to write its report.

During the June hearing, the witnesses discussed the report’s recommendations. The four main recommendations were for policymakers to focus on the science, eliminate or modify ineffective regulations, harmonize and streamline requirements, and increase university efficiency and effectiveness.

Bienenstock of the NSB spoke about the report’s tangible suggestions, which include changing NSF’s proposal guidelines to require in the initial submission only the information necessary to determine whether a research project merits funding, deferring ancillary materials not critical to merit review; adopting a system like the FDP’s pilot project in payroll certification to replace time-consuming and outdated effort reporting; and establishing a permanent high-level interagency committee to address obsolete regulations and discuss new ones.

Sedwick echoed the usefulness of the FDP’s payroll-certification pilot and noted that the FDP is a perfect forum for testing new reporting mechanisms that could lead to a more efficient research enterprise. In her testimony, Lee-Glauser addressed how the ever more competitive funding environment is taking investigators away from their research for increasing periods of time to write grants, and noted that the current framework for regulating research on human subjects is too stringent for the low-risk social and behavioral research being performed at Syracuse University. Inspector General Lerner, championing the auditing process, spoke about the importance of using labor-effort reports to prevent fraud and noted that the Office of Management and Budget is in the process of auditing the FDP’s payroll-certification pilot project to determine its effectiveness and scalability. She also mentioned that even though requiring receipts only for large purchases made with grant money would be less time-consuming, it would not prevent investigators from committing fraud by making many small purchases. Lerner closed by reminding the room that “acceptance of public money brings with it a responsibility to uphold the public’s trust.”

In addition to the payroll-certification pilot, a few other changes are in the works that could implement some of the recommendations in the NSB report. Currently, the NSF Division of Integrative Organismal Systems and Division of Environmental Biology are piloting a pre-proposal program that requires only a one-page summary and five-page project description for review.

On July 8, the House addressed the issue with its passage of the Research and Development Efficiency Act (H.R. 5056), a bipartisan bill introduced by Rep. Larry Bucshon (R-IN), which would establish a working group through the administration’s National Science and Technology Council to make recommendations on streamlining federal regulations affecting research.

– Keizra Mecklai

In brief

The House passed several S&T bills in July. These include the Department of Energy Laboratory Modernization and Technology Transfer Act (H.R. 5120), which would establish a pilot program for commercializing technology; a two-year reauthorization (H.R. 5035) for the National Institute of Standards and Technology (NIST), which would authorize funding for NIST at $856 million for FY 2015; the International Science and Technology Cooperation Act (H.R. 5029), which would establish a body under the National Science and Technology Council to coordinate international science and technology cooperative research and training activities and partnerships; the STEM Education Act (H.R. 5031), which would support existing science, technology, engineering, and mathematics (STEM) education programs at NSF and define STEM to include computer science; and the National Windstorm Impact Reduction Act (H.R. 1786) to reauthorize the National Windstorm Impact Reduction Program. The House rejected a modified version of the Securing Energy Critical Elements and American Jobs Act of 2014 (H.R. 1022), which would authorize $25 million annually from FY 2015 to FY 2019 to support a Department of Energy R&D program for energy-critical elements.

Members of the Senate, led by Sen. John D. Rockefeller (D-WV), chair of the Senate Commerce, Science, and Transportation Committee, have released their own America COMPETES reauthorization bill. The bill would authorize significant multiyear funding increases for NSF and NIST, while avoiding the changes to NSF peer review and the cuts to social science funding proposed by the House Science Committee in the Frontiers in Innovation, Research, Science, and Technology (FIRST) Act. With the short legislative calendar, progress on the bill is unlikely in the near term.

On July 25, the House Science, Space, and Technology Committee approved the Revitalize American Manufacturing and Innovation Act (H.R. 2996), which would establish a network of public/private institutes focusing on innovation in advanced manufacturing, involving both industry and academia. The creation of such a network has long been a goal of the administration, and a handful of pilot institutes have already been established. A companion bill (S. 1468) awaits action in the Senate.

On July 16, eight Senators, including Environment and Public Works Committee Ranking Member John Barrasso (R-WY), introduced a companion bill (S. 2613) to the House Secret Science Reform Act (H.R. 4012), which passed the House Science, Space, and Technology Committee along party lines on June 24. The bill would prohibit the Environmental Protection Agency (EPA) from proposing, finalizing, or disseminating regulations or assessments unless all underlying data were reproducible and made publicly available.

On June 26, Sens. Kirsten Gillibrand (D-NY) and Daniel Coats (R-IN) introduced the Technology and Research Accelerating National Security and Future Economic Resiliency (TRANSFER) Act (S. 2551). The legislation, a companion to a House bill (H.R. 2981) originally introduced last year by Reps. Chris Collins (R-NY) and Derek Kilmer (D-WA), would create a funding program within the Small Business Technology Transfer program, “to accelerate the commercialization of federally-funded research.” The grants would support efforts such as proof of concept of translational research, prototype construction, and market research.

The Department of Energy Research and Development Act (H.R. 4869), introduced in the House by Rep. Cynthia Lummis (R-WY) on June 13, would authorize a 5.1% budget increase over the FY 2014 level for the Office of Science and a 14.3% cut in the Advanced Research Projects Agency–Energy budget. In the subcommittee’s summary, Section 115 “directs the Director to carry out a program on biological systems science prioritizing fundamental research on biological systems and genomics science and requires the Government Accountability Office (GAO) to identify duplicative climate science initiatives across the federal government. Section 115 limits the Director from approving new climate science-related initiatives unless the Director makes a determination that such work is unique and not duplicative of work by other federal agencies. This section also requires the Director to cease all climate science-related initiatives identified as duplicative in the GAO assessment unless the Director determines such work to be critical to achieving American energy independence.”

Executive actions support Obama’s science agenda

In an effort to circumvent a deadlocked Congress, President Obama has issued a number of executive actions to advance his science policy goals. After the DREAM Act immigration bill stalled in Congress, the President in 2012 issued the Deferred Action for Childhood Arrivals policy, which allows undocumented individuals in the United States to become eligible for employment authorization (though not permanent residency) if they were under age 31 on June 15, 2012; arrived in the United States before turning 16 years of age; have lived in the United States since June 15, 2007; and are currently in school or hold a GED or higher degree, among other requirements. This step toward immigration reform may allow undocumented residents with STEM degrees or careers to stay in the country and continue to support the American STEM workforce. Then, in 2013, once again in response to failed legislation and in the wake of the tragic shooting in Newtown, CT, Obama took action on gun control. Among other things, he lifted what amounted to a ban on federally funded research about the causes of gun violence.

Most recently, the EPA released a proposed rule to reduce carbon emissions by 30% below 2005 levels by 2030, as directed by the President’s executive actions contained in his Climate Action Plan. The rule would allow each state to implement a plan that works best for its economy and energy mix, and has been a source of controversy on Capitol Hill; members of Congress and other stakeholders are already engaged in a heated debate as to whether the EPA has authority (through the Clean Air Act) to regulate greenhouse gas emissions.

Agency updates

On July 23, U.S. Department of Agriculture (USDA) Secretary Tom Vilsack announced the creation of the Foundation for Food and Agricultural Research (FFAR) to facilitate the support of agriculture research through both public and private funding. FFAR, authorized in the 2014 Farm Bill, will be funded at $200 million and must receive matching funds from nonfederal sources when making awards for research.

NIH is teaming up with NSF to launch I-Corps at NIH, a pilot program based on NSF’s Innovation Corps. The program will allow researchers with Small Business Innovation Research and Small Business Technology Transfer (SBIR/STTR) Phase 1 awards, which establish feasibility or proof of concept for technologies that could be commercialized, to enroll in a training program that helps them explore potential markets for their innovations.

In response to the June 16 National Academies report on the National Children’s Study, a plan by NIH to study the health of 100,000 U.S. babies up to age 21, NIH Director Francis Collins decided to put the ambitious study, which has already faced more than a decade of costly delays, on hold. The Academies panel indicated that the study’s hypotheses should be more scientifically robust and that the study would benefit from more scientific expertise and management. It also recommended changes to the subject recruitment process.

“From the Hill” is adapted from the newsletter Science and Technology in Congress, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

Forum – Fall 2014

Climate deadlock

In “Breaking the Climate Deadlock” (Issues, Summer 2014), David Garman, Kerry Emanuel, and Bruce Phillips present a thoughtful proposal for greatly expanded public- and private-sector R&D aimed at reducing the costs, increasing the reliability, managing the risks, and expanding the potential to rapidly scale up deployment of a broad suite of low- and zero-carbon energy technologies, from renewables to advanced nuclear reactor technologies to carbon capture and storage. They also encourage dedicated funding of research into potential geoengineering technologies for forced cooling of the climate system. Such an “all-of-the-above” investment strategy, they say, might be accepted across the political spectrum as a pragmatic hedge against uncertain and potentially severe climate risks and hence be not only sensible but feasible to achieve in our nation’s highly polarized climate policy environment.

It is a strong proposal as far as it goes. Even as the costs of wind and solar photovoltaics are declining, and conservative states such as Texas and Kansas are embracing renewable energy technologies and policies, greater investment in research aimed at expanding the portfolio of commercially feasible and socially acceptable low-carbon electricity is needed to accelerate the transition to a fully decarbonized energy economy. And managing the risks of a warming planet requires contingency planning for climate emergencies. As challenging as it may be to contemplate the deployment of most currently proposed geoengineering schemes, our nation has a responsibility to better understand their technical and policy risks and prospects should they ultimately need to be considered.

But it does not go far enough. Garman et al.’s focus on R&D aimed primarily at driving down the “cost premium” of low-carbon energy technologies relative to fossil fuels neglects the practical need and opportunity to also incorporate into the political calculus the economic risks and costs of unmitigated climate change. These risks and costs are substantial, and they are becoming increasingly apparent to local civic and political leaders in red and blue states alike as they face more extensive storm surges and coastal flooding, more frequent and severe episodes of extreme summer heat, and other climate-related damages.

The growing state and local experience of this “cost of inaction premium” for continued reliance on fossil fuels is now running in parallel with the experience of economic benefits resulting from renewable electricity standards and energy efficiency standards in several red states. Together, these state and local experiences may do as much as or more than expanding essential investments in low-carbon energy R&D to break the climate deadlock and rebuild bipartisan support for sensible federal climate policies.

PETER C. FRUMHOFF
Director of Science and Policy
Union of Concerned Scientists
Cambridge, Massachusetts
[email protected]

 

We need a new era of environmentalism to overcome the polarization surrounding climate change issues, one that takes conservative ideas and concerns seriously and ultimately engages ideological conservatives as full partners in efforts to reduce carbon emissions.

Having recently founded a conservative animal and environmental advocacy group called Earth Stewardship Alliance (esalliance.org), I applaud “Breaking the Climate Deadlock.” The authors describe a compelling policy framework for expanding low-carbon technology options in a way that maintains flexibility to manage uncertainties.


The article also demonstrates the most effective approach to begin building conservative support for climate policies in general. The basic elements are to respect conservative concerns about climate science and to promote solutions that are consistent with conservative principles. Although many climate policy advocates see conservatives as a lost cause, relatively little effort has been made to try this approach.

Thoughtful conservatives generally agree that carbon emissions from human activities are increasing global carbon dioxide levels, but they question how serious the effects will be. These conservatives are often criticized for denying the science even though, as noted by “Breaking the Climate Deadlock,” there is considerable scientific uncertainty surrounding the potential effects. This article, however, addresses this legitimate conservative skepticism by describing how a proper risk assessment justifies action to avoid potentially catastrophic impacts even if there is significant uncertainty.

The major climate policies that have been advanced thus far in the United States are also contrary to conservative principles. All of the cap-and-trade bills that Congress seriously considered during the 2000s would have given away emissions allowances, making the legislation equivalent to a tax increase. The rise in prices caused by a cap-and-trade program’s requirement to obtain emissions allowances is comparable to a tax. Giving away the allowances foregoes revenue that could be used to reduce other taxes and thus offset the cap-and-trade tax. Many climate policy advocates wanted the allowances to be auctioned, but that approach could not gain traction in Congress, because the free allowances were needed to secure business support.

After the failure of cap-and-trade, efforts turned to issuing Environmental Protection Agency (EPA) regulations that reduce greenhouse gas emissions. The EPA’s legal authority for the regulations is justified by some very general provisions of the Clean Air Act. Although the courts will probably uphold many of these regulations, the policy decisions involved are too big to be properly made by the administration without more explicit congressional authorization.

Despite the polarization surrounding climate change, there continues to be support in the conservative intelligentsia for carbon policies consistent with their principles: primarily ramping up investment in low-carbon technology research, development, and demonstration and a “revenue-neutral” carbon tax in which the increased revenues are offset by cutting other taxes.

Earth Stewardship Alliance believes the best way to build strong conservative support for these policies is by making the moral case for carbon emissions reductions, emphasizing our obligation to be good stewards. We are hopeful that conservatives will ultimately decide it is the right thing to do.

JIM PRESSWOOD
Executive Director
Earth Stewardship Alliance
Arlington, Virginia
[email protected]

 

David Garman, Kerry Emanuel, and Bruce Phillips lay out a convincing case for the development of real low-carbon technology options. This is not just a theoretical strategy. There are some real opportunities before us right now to do this, ones that may well appeal across the political spectrum:

The newly formed National Enhanced Oil Recovery Initiative (a coalition of environmental groups, utilities, labor, oil companies, coal companies, and environmental and utility regulators) has proposed a way to bring carbon capture and storage projects to scale, spurring in-use innovation and driving costs down. Carbon dioxide captured from power plants has a value—as much as $40 per ton in the Gulf region—because it can be used to recover more oil from existing fields. Capturing carbon dioxide, however, costs about $80 per ton. A tax credit that covers the difference could spur a substantial number of innovative projects. Although oil recovery is not the long-term plan for carbon capture, it will pay for much capital investment and the early innovation that follows in its wake. The initiative’s analysis suggests that the net impact on the U.S. Treasury is likely to be neutral, because tax revenue from domestic oil that displaces imports can equal or exceed the cost of the tax credit.

There are dozens of U.S.-originated designs for advanced nuclear power reactors that could dramatically improve safety, lower costs, and shrink waste volumes while making wastes less harmful. The cost of pushing these designs forward to demonstration is modest, likely in the range of $1 billion to $2 billion per year, or about half a percent of the nation’s electric bill. The United States remains the world’s center of nuclear innovation, but many companies, frustrated by the lack of U.S. government support, are looking to demonstrate their first-of-a-kind designs in Russia and China. This is a growth-generating industry that the United States can recapture.

The production tax credit for conventional wind power has expired, due in part to criticisms that the tax credit was simply subsidizing current technology that has reached the point of diminishing cost reductions. But we can replace that policy with a focused set of incentives for truly innovative wind energy designs that increase capacity and provide grid support, thus enhancing the value of wind energy and bringing it closer to market parity.

Gridlock over climate science needn’t prevent practical movement forward to hedge our risks. A time-limited set of policies such as those above would drive low-carbon technology closer to parity with conventional coal and gas, not subsidize above-market technologies indefinitely. Garman and his colleagues have offered an important bridge-building concept; it is time for policymakers to take notice and act.

ARMOND COHEN
Executive Director
Clean Air Task Force
Boston, Massachusetts
[email protected]

21st Century Inequality: The Declining Significance of Discrimination

Today I want to talk about inequality in the 21st century, in particular the decline in the significance of discrimination and the increase in the significance of human capital.

Let me start with some basic facts about the achievement gap in America. If you listen to NPR or tune into 60 Minutes, you probably get a sense that the United States is lagging behind other countries in student achievement and that there is a disturbing difference in the performance of racial groups.

For example, on average 44% of all students, regardless of race, are proficient in math or reading in 8th grade. That’s disheartening, but far from the worst news. In Detroit, 3% of black 8th graders are considered proficient in math—that’s 3%. In some places, such as Cleveland, the achievement gap between white and black students is relatively small, but the reason is that the white students are not doing well either. In the District of Columbia, roughly 80% of white 8th graders, but only 8% of their black classmates, are proficient in math.

Many people will object that test scores do not measure the whole child. That’s true, but I will argue that they are important.

My early training and research in economics was not linked to education, but I was asked in 2003 to explore the reasons for social inequality in the United States. I began by looking at the National Longitudinal Survey of Youth, focusing on people who were then 40 years old. Compared to their white contemporaries, blacks earned 28% less, were 27% less likely to have attended college, were 190% more likely to be unemployed, and were 141% more likely to have been on public assistance. These grim statistics are well known and are often used to illustrate the power of racial bias in U.S. society.

I decided to trace back through the lives of this cohort to try to identify the source of these disparities. One obvious place to look was educational achievement. I went back to the test scores of this cohort when they were in 8th grade and did some calculations. If one compared blacks and whites who had the same test scores in 8th grade, the picture at age 40 was dramatically different. The difference in wages was 0.6%, the difference in unemployment was 90%, the difference in public assistance was 33%, and blacks were actually 137% more likely to have attended college.
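
To make the logic of that comparison concrete, here is a minimal sketch, in Python, of the kind of calculation involved: regress an adult outcome on race with and without a control for 8th-grade test scores and compare the estimated gaps. The data, variable names, and coefficients below are synthetic and purely illustrative; they are not drawn from the NLSY analysis described above.

```python
# A minimal sketch with synthetic data (not the author's NLSY analysis).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
black = rng.integers(0, 2, n)                      # 1 = black, 0 = white (synthetic)
test_score = rng.normal(-0.8 * black, 1.0, n)      # hypothetical 8th-grade score gap
log_wage = 1.5 + 0.4 * test_score - 0.02 * black + rng.normal(0, 0.5, n)

df = pd.DataFrame({"black": black, "test_score": test_score, "log_wage": log_wage})

raw = smf.ols("log_wage ~ black", data=df).fit()               # unconditional gap
adj = smf.ols("log_wage ~ black + test_score", data=df).fit()  # gap holding scores fixed

print(f"raw black-white log-wage gap:        {raw.params['black']:+.3f}")
print(f"gap conditional on 8th-grade scores: {adj.params['black']:+.3f}")
```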

That was easy. In two weeks I reported back that achievement gaps that were evident at an early age correlated with many of the social disparities that appeared later in life. I thought I was done. But the logical follow-up question was how to explain the achievement gap that was apparent in 8th grade. I’ve been working on that question for the past 10 years.

I am certainly not going to tell you that discrimination has been purged from U.S. culture, but I do believe that these data suggest that differences in student achievement are a critical factor in explaining many of the black-white disparities in our society. It is no longer news that the United States is a lackluster performer on international comparisons of student achievement, ranking about 20th in the world. But the position of U.S. black students is truly alarming. If they were considered a country, they would rank just below Mexico, in last place among all Organization for Economic Cooperation and Development countries.

How did it get this way? When do U.S. black students start falling behind? It turns out that developmental psychologists can begin assessing the cognitive capacity of children when they are only nine months old with the Bayley Scales of Infant Development. We examined data that had been collected on a representative sample of 11,000 children and could find no difference in the performance of racial groups. But by age two, one can detect a gap opening, which becomes larger with each passing year. By age five, black children trail their white peers by eight months in cognitive performance, and by eighth grade the gap has widened to twelve months.

Remember, Horace Mann told us that public education was going to be the great equalizer; it was going to compensate for the inequality caused by differences in income across zip codes. That was the dream.

Unfortunately, what happens is that the inequality that exists when children begin school becomes even greater during schooling. The gap grows not only across schools, but within the same school, even with the same teacher. This means that even for children from the same neighborhood, the same school, and the same teachers, academic performance diverges each year in school.

I spent two or three years trying to figure out what factors could explain this predicament. I looked at whether or not teachers were biased against some kids or groups. I looked at whether or not kids lost ground during the summer. I looked at various measures of school quality. I looked at the results of numerous different types of standardized tests. None of these could explain why certain groups, blacks in particular, were losing ground to their peers.


When I was presenting this finding at a meeting, a woman challenged me to stop focusing on our failures and to let audiences know what works. I said “OK, but what works?” She said more education for teachers, increased funding, smaller class size. I recognized this as the conventional wisdom, but I thought I better examine the data that demonstrate that these strategies are effective.

I discovered that we have actually implemented this approach for many decades. The percentage of teachers with a master’s degree increased from 23% in 1961 to 62% in 2006. The average class size has declined from 22 to 16 students since 1970. Per pupil annual spending grew from $5,000 in 1970 to $12,000 in 2008 in constant dollars. In spite of applying this apparently sound advice, overall student academic achievement has remained essentially flat. Clearly, we need to try something else.

As befits an arrogant economist, my first thought was that this will be easy: We just have to change the incentives. Let’s apply a rational agent model and examine the calculation we are asking students to make. Society is telling them that they will be rewarded for their efforts in school in 15 years when they enter the labor market. As an economist I know that no one has a discount rate that would justify waiting 15 years for a payoff. My solution was to propose that we pay them incentives now to reward good school performance.

Oh my gosh, I wish someone had warned me. No one told me this was going to be so incredibly unpopular. People were picketing me outside my house saying I would destroy students’ love of learning, that I was the worst thing for black people since the Tuskegee experiments. Really? Experimenting with incentives when nothing else seems to work is the equivalent of injecting people with syphilis without informing them?

We decided to try the experiment and raised about $10 million. We provided incentives in Dallas, Houston, Washington, DC, New York, and Chicago. Just for fun, we also added a large experiment with teacher incentives to cover all our bases, to make sure that we had paid everybody for everything.

The question for us was, first of all, could incentives increase achievement? Second, what should we pay for and how should we structure the incentives? The conventional economic theory is that we should pay for outputs. It follows from that—don’t laugh—that kids should borrow money based on their expected future earnings to pay for tutors or make other investments in their learning to improve their performance. We took a more direct approach, conducting randomized trials that primarily paid for inputs.

In Dallas we paid kids $2 for each book they read. They had to take a test to verify that they actually did the reading. In Houston we paid kids to do their math homework. In Washington we paid kids to attend school, complete homework, score well on tests, and avoid activities such as fighting. We also tried incentives for outputs. In New York we paid kids for good test scores so that the emphasis was completely on outputs. In Chicago we paid ninth graders half the money for attendance and the second half for graduation. The amounts were generous for poor kids. A Washington middle schooler could earn as much as $2,000 per year. In New York, fourth graders could make up to $250 and seventh graders up to $500.

Throughout the experiment we were bombarded with complaints from adults, particularly those who did not have children in the experiment. We never had a kid complain. Well, once we did. I came to one Washington school to participate in a ceremony at which checks were distributed. Before the event started, one kid came up to me and said, “Professor, I don’t think we should be paid to come to school. I think we should pay to come to school because school is such a valuable resource. You should not pay us. We should pay you.”

I was blown away by this. I thought this kid really gets it. About 20 minutes later I was distributing checks in the cafeteria. Kids’ names were called, and they ran or danced to the front of the room. I called the kid’s name, and he came up. I put his check in my pocket. He said, “What are you doing?” I told him that just 20 minutes earlier he had told me that he should pay me for the privilege of coming to school. He looked at me in a way that only an 11-year-old can and said, “I never said that.”

We found that incentives, if designed correctly, can have a positive return on investment. However, they are not going to close the big gaps that exist between blacks and whites. We did learn that it is more effective to provide kids with incentives for inputs rather than outputs. This contradicts what I learned in my economics training, but it was very clear when I actually talked to the kids. I asked one kid in Chicago, where they were paid for outputs, Did you stay after school and ask your teacher for extra time? No. Did you borrow against your expected income and hire a tutor? No. What did you do? Basically, I came. I tried harder. School was still hard. At some point, I gave up.

The reality is that most of these kids do not know how to get from point A to point B. The assumption that economists make when designing incentives is that people know how to produce the desired output, that they know the “production function.” When they don’t know that, designing incentives is incredibly difficult.

What we learned through this $10 million, a lot of negative press, and many angry citizens is that kids will respond to incentives—and that incentives for teachers do not have a significant effect on student achievement. Kids will do exactly what you want them to do. By the way, they don’t do anything extra either. I had this idea that they were going to discover that school is great and try harder in all of their subjects, even those that do not provide incentives. No. You offer $2 to read a book, and they read a book. They are going to do exactly what you want them to do. That showed me the power, and the limitations, of incentives for kids. I saw that if you really squinted and designed them perfectly, incentives would have a high return on investment because they are so cheap, but they were never going to close the gap.

Something new and different

At the same time I was writing up my incentives paper, I started doing the analysis of Geoffrey Canada’s work in the Harlem Children’s Zone. This changed my entire research trajectory.

With the help of large philanthropic contributions, Canada had developed a creative and ambitious approach to education. A group of Harlem students was selected by random lottery to attend Canada’s charter schools beginning in 6th grade. A couple of things are important here. First, because admission was decided by lottery, the winners and losers form a fair comparison. Second, the lottery winners and losers were, if anything, slightly below the New York City average. This is significant because the students who enroll in charter schools are often above-average achievers from the start.

The evidence of improvement can be seen in the first year, and the gains are even better in the second year. By year three, these students have essentially reached the level of the average white New York student.

Now, I haven’t controlled for anything. If I were to include factors such as eligibility for free lunch, the black students would be slightly outperforming the white students. Their performance in reading improved but not nearly as much as it did in math. I would summarize the results in these simple terms: After three years in Canada’s Promise Academy Charter Schools, the students were able to erase the achievement gap in math and to cut it by a third in reading.

I had never seen results that came close to this. When I first saw the numbers, I thought my research assistant had made a coding error. This was a reason to get excited about the possibility of making a big difference in children’s lives.

Further research into public charter schools enabled me to see that this is not just about the Harlem Children’s Zone. Although the average charter school is statistically no better than the average regular public school, there are a number of charter schools achieving the type of results we found in the Harlem Children’s Zone. The research challenge is to identify what they are doing that works.

Let me stop for a story. My grandmother makes a fabulous coconut cake, so I asked her for the recipe. She told me what she does with a fingerful of this and a palmful of that. When I tried it, the result was a cement block, so I decided that the only way to learn the recipe was to watch her make it. When she grabbed a palmful of coconut flakes, I made her put it in a measuring cup. For your future reference, a grandmother’s palm is equal to a quarter cup. It took a long time and annoyed my grandmother, but now I have a recipe I can use and pass down to my children.

If you ask Geoffrey Canada what’s in his secret education sauce, he will say a little bit of this, a little bit of that. You will be moved by his powerfully inspirational speeches, but you will not learn how to build a better school. You’ll just wish that you were also a genius.

To help the rest of us who are not geniuses, we assembled a research team that spent two years examining in detail what was happening at charter schools, some good and some not so good. We hung around. We used video cameras. We interviewed the kids. We interviewed the teachers. We interviewed the principals. We spent hours in these schools trying to figure out what the good ones did and what the not-so-good ones didn’t do.

We found a number of practices that were clearly correlated with better student performance. For teachers, it is important that they receive reliable feedback on their classroom performance and that they rigorously apply what they learn from assessments of their students to what they do in the curriculum and the classroom.

Even low-performing schools know that data are important. When I visited a middling school, staff would be eager to show me their data room. What I typically found was wall charts with an array of green, yellow, and red stickers that represented high-, mid-, and low-performing students, respectively. And when I asked what this had led them to do for the red kids, they would say that they hadn’t reached that step yet, but at least they knew how many there were.

When I asked the same question in the data rooms of high-performing schools, they would say that they had their teaching calibrated for the three groups. They would not only identify which students were trailing behind, but would identify the pattern of specific deficiencies and then provide remediation for two or three days on the problem areas. They would also note the need to approach these areas more diligently in future editions of the course.

The third effective practice was what I call tutoring, but which those in the know call small learning communities. It is tutoring. Basically, what they do is work with kids in groups of six or fewer at least four days per week.

The fourth ingredient was instructional time. Simple. Effective schools just spent more time on tasks. I think of it as the basic physics of education. If your students are falling behind, you have two choices: spend more time in school or convince the high-performing schools to give their kids four-day weekends. The key is to change the ratio.

The icing on the cake was that effective schools had very, very high expectations of achievement for all students, regardless of their social or economic background. My father went to prison when I was a kid. I didn’t meet my mother until I was in my twenties. Fortunately, I had a grandmother who didn’t know the meaning of the word excuse. A high school counselor who was aware of my situation tried to help me by saying that I could be part of a special program that would require only a half day of school and reduce my workload. I knew my grandmother wouldn’t buy that, so I refused.

The essential finding is that kids will live up or down to our expectations. Of course they are dealing with poverty. Of course 90% of the kids come from households with a single female head. They all have that. That wasn’t news. The question is how are we going to educate them?

We met incredible educators who not only understood the big picture but sweated all the details. One principal had developed a very clever and efficient method for distributing worksheets, exams, and other handouts in class. I’ve never worried about that, so I asked what was the point. She said that every teacher does this in every class many times a day. If we can save 30 seconds each time, we will add several days of productive class time over the course of a year, and these kids need every minute we can give them.

Testing the thesis

I believe there is real value in analyzing the data that provides the evidence that these five strategies work, but there is nothing very surprising or counterintuitive in the findings. The question is why so few schools are implementing these practices.

We set out to discover if there was any reason that public schools could not implement these practices and achieve the expected results. We approached a number of school districts to ask if we could conduct an experiment applying these techniques in some of their schools. I won’t belabor all the reasons we heard for why it was impossible, but suffice it to say that we were not welcomed with open arms. Apparently, it is not practical to increase time in school, provide tutoring, give teachers regular feedback and guidance, use data to inform instructional practice, and increase expectations.

We did eventually find a willing partner in the Houston school district, where the superintendent and the school board were willing to give it a try. We began to work in 20 schools, including four high schools, with a total of 16,000 students. These are traditional public schools. There is no waiting list. There is no sign up. There is no Superman. Nothing complicated. These are just ordinary neighborhood public schools.

All of the schools were performing below expectations and were in line to be taken over by the state. They qualified for the federal dollars for turning schools around. As part of that program, all of the principals and about half the teachers were replaced.

We increased the school day by one hour. We lengthened the school year by two weeks. We also cut down on some curious non-instructional activities. We discovered, for example, that 20 minutes is set aside each day for bathroom breaks. For no additional cost you can increase instructional time just by making kids pee more quickly. How cool is that?


Second, small group tutoring. We hired more than 400 full-time tutors. They worked with ten kids a day, two at a time during five of the day’s six periods. We offered a $20,000 salary even though we were told that no one would do the job for that amount. In five weeks, we had 1,200 applications. Some were young Teach for America types. Others were retirees from the Johnson Space Center. We decided to focus on math tutoring in what we had found were the critical fourth, sixth, and ninth grades.

For data-driven instruction, we worked with the existing requirements for the Houston schools. For example, Houston sets 212 objectives that fifth graders are expected to achieve. We designed a schedule that would make it possible to reach all the objectives while also including remediation for students and professional development for teachers. A feedback system was designed that resulted in teachers receiving ten times as much feedback as teachers in other Houston schools.

To reinforce high expectations, we aimed to create an environment that reflected seriousness. We eliminated graffiti and removed the barbed wire that surrounded some of the schools. We regularly repeated the goals that we expected students to achieve.

The experiment had a couple of potential fault lines. One, we were taking best practices out of charter schools and trying to implement them in traditional public schools. It could be that those best practices work only with a set of highly motivated teachers and parents. We weren’t sure about that. Second, we had to face all the political realities of a traditional public school. During the three-year experiment, I aged about 24 years. I will never be the same.

But the results made it worth the effort. When we began, the black/white achievement gap in the elementary schools was about 0.4 standard deviations, which is equivalent to about 5 months. Over the three years, our elementary schools essentially eliminated the gap in math and made some progress in reading. In secondary schools, math scores rose at a rate that would close the gap in roughly four to five years, but there was no improvement in reading. One other significant result was that 100% of the high school graduates were accepted to a two- or four-year college.

Let me put it in context for you. The improvement in student achievement in the Houston schools where we worked was roughly equivalent to the results in the Harlem Children’s Zone and in the average KIPP charter school. But we did this with 16,000 kids in traditional public schools. We are now repeating the experiment in Denver, Colorado, and Springfield, Massachusetts. We actually do know what to do, especially for math. The question is whether or not we have the courage to do it.

The last thing I will show you is a return on investment calculation for a variety of interventions. We calculated what a given level of improvement in achievement would mean for a student’s lifetime earnings and what that would mean for government income tax revenue. Reducing class size costs about $3,500 per kid and results in an ROI of about 6.2%, which is better than the long-term stock market return of about 5%. Expanded early childhood education has an ROI of 7.6%, an even better investment.

“No excuses” charter schools cost about $2,500 per kid and have an ROI of 18.5%. Using the same methodology, we calculated that the investment in our Houston schools had an ROI of 13.4% in the secondary schools and 26.7% in the elementary schools. But that was based on the implementation cost, which I raised from private sources. Houston did not spend anything more per student, so its ROI was infinite.
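
To show the mechanics behind a figure like this, here is a rough back-of-the-envelope sketch of how such a return on investment might be computed: treat the per-student cost as an up-front outlay, convert the resulting gain in lifetime earnings into extra income tax revenue, and solve for the break-even discount rate. The cost, earnings gain, tax rate, and time horizons are my own illustrative assumptions, not the figures used in the calculations described here.

```python
# A back-of-the-envelope sketch; all numbers are illustrative assumptions.
def breakeven_roi(cost, extra_earnings_per_year, tax_rate=0.25,
                  years_until_work=10, years_working=40):
    """Discount rate at which extra tax revenue exactly repays the cost."""
    extra_tax = extra_earnings_per_year * tax_rate

    def npv(rate):
        return sum(extra_tax / (1 + rate) ** t
                   for t in range(years_until_work, years_until_work + years_working)) - cost

    lo, hi = 1e-6, 1.0          # bracket the rate between ~0% and 100%
    for _ in range(100):        # simple bisection; npv falls as the rate rises
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return mid

# Hypothetical example: a $3,000-per-student intervention that raises
# earnings by $1,500 per year over a 40-year career starting in 10 years.
print(f"break-even annual return: {breakeven_roi(3000, 1500):.1%}")
```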

My journey into education has been similar to that of many other people. I was frustrated with the data, frustrated that we didn’t know which of the scores of innovations were most effective. We took the simple approach of looking closely at the schools that were producing the results we all want to see.

We found five actions that explain roughly 50% of the variation among charter schools. We then conducted an experiment to see if those same five actions would have the same result in a typical urban public school system. The results are truly encouraging. In three years these public school students made remarkable progress in math achievement and some improvement in reading. That’s not everything, but it is far more than what was achieved in decades with the conventional wisdom of smaller classes, more teacher certification, and increased spending.

It is not rocket science. It is not magic. There is nothing special about it. When the film Waiting for Superman came out, people complained that the nation is undersupplied with supermen. But an ordinary nerd like me was able to uncover a simple and readily repeated recipe for progress. Anyone can do this stuff.

One last story. During the experiment in Houston, an education commissioner from another state came to tour Robinson elementary school, one of the toughest in the city. He knew Houston and was familiar with Robinson. At the end of the tour, he pulled me aside. He had one question: “Where did you move the kids who used to go to school here?” I said that these are all the same kids, but they behave a lot differently when we do our jobs properly. They are listening. They are learning. They will live up to the expectations that we have for them.

I was a kid who went to broken schools. Thanks to my grandmother and some good luck, I beat the odds. But one success story is not what we want. What we want are rigorously evaluated, replicable, systematic educational practices that will change the odds.

Science: Too Big for Its Britches?

Science ain’t what it used to be, except perhaps in the systems we have for managing it. The changes taking place are widely recognized. The enterprise is becoming larger and more international, research projects are becoming more complex and research teams larger, university-industry collaboration is increasing, the number of scientific journals and research papers published is growing steadily, computer modeling and statistical analysis are playing a growing role in many fields, interdisciplinary teams are becoming more numerous and more heterogeneous, and the competition for finite resources and for prime research jobs is intensifying.

Many of these trends are the inevitable result of scientific progress, and many of them are actually very desirable. We want to see more research done around the world, larger and more challenging problems studied, more science-enabled innovation, more sharing of scientific knowledge, more interaction among disciplines, better use of computers, and enough competition to motivate scientists to work hard. But this growth and diversification of activities is straining the existing management systems and institutional mechanisms responsible for maintaining the quality and social responsiveness of the research enterprise. One undesirable trend has been the growth of attention in the popular press to falsified research results, abuse of human and animal research subjects, conflict of interest, the appearance of irresponsible journals, and complaints about overbuilt research infrastructure and unemployed PhDs. One factor that might link these diverse developments is the failure of the management system to keep pace with the changes and growth in the enterprise.

The pioneering open access journal PLOS ONE announced in June 2014 that after seven and a half years of operation it had published 100,000 articles. There are now tens of thousands of scientific journals, and more than 1 million scientific papers will be published in 2014. Maintaining a rigorous review system and finding qualified scientists to serve as reviewers is an obvious challenge, particularly when senior researchers are spending more time writing proposals because constrained government spending has caused rates of successful funding to plummet in the United States.

Craig Mundie, the former chief research and strategy officer at Microsoft and a member of the President’s Council of Advisors on Science and Technology (PCAST), has voiced his concern that the current review system is not designed to meet the demands of today’s data-intensive science. Reviewers are selected on the basis of their disciplinary expertise in, say, particle physics or molecular biology, when the quality of the research actually hinges on the design and use of the computer models. He says that we cannot expect scholars in those areas to have the requisite computer science and statistics expertise to judge the quality of the data analysis.

Data-intensive research introduces questions about transparency and the need to publish results of every experiment. Is it necessary to publish all the code of the software used to conduct a big data search and analysis? If a software program makes it possible to quickly conduct thousands of runs with different variables, is it necessary to make the results of each run available? Who is responsible for maintaining archives of all data generated in modeling experiments? Many scientists are aware of these issues and have been meeting to address them, but they are still playing catch-up with fast-moving developments.

In the past several decades the federal government’s share of total research funding fell from roughly two-thirds to one-third, and industry now provides about two-thirds. In this environment it is not surprising that university researchers seek industry support. It is well understood that researchers working in industry do not publish most of their work because it has proprietary value to the company, but the ethos of university researchers is based on openness. In working with industry funders, university researchers and administrators need the knowledge and capacity to negotiate agreements that preserve this principle.

About one-third of the articles being published by U.S. scientists have a coauthor from another country, which raises questions about inconsistencies in research and publishing procedures. Countries differ in practices such as citing references in proposals, attributing paraphrased text to its original source, and listing lab directors as authors whether or not they participated in the research. Failure to understand these differences can lead to inadequate review and oversight. Similar differences in practice exist across disciplines, which can lead to problems in interdisciplinary research.

Globalization is also evident in the movement of students. The fastest-growing segment of the postdoctoral population is people who earned their PhDs in other countries. Although they now make up more than half of all postdocs, the National Science Foundation tracks the career progress only of people who earned their PhDs in the United States. We thus know little about the career trajectories of the majority of postdocs. It would be very useful to know why they come to the United States, how they evaluate their postdoctoral experience, and what role they ultimately play in research. This could help us answer the pressing question of whether the postdoctoral appointment is serving as a useful career-development step or whether its primary function is to provide low-cost research help to principal investigators.

The scientific community has fought long and hard to preserve the power to manage its own affairs. It wants scientists to decide which proposals deserve to be funded, what the rules for transparency and authorship should be in publishing, what behavior constitutes scientific misconduct and how it should be punished, and who should be hired and promoted. In general it has used this power wisely and effectively. Public trust is higher in science than in almost any other profession. Although science funding has suffered in the recent period of federal budget constraint, it has fared better than most areas of discretionary spending.

Still, there are signs of concern. The October 19, 2013, Economist carried a cover story on “How science goes wrong,” identifying a range of problems with the current scientific enterprise. Scientists themselves have published articles that question the reproducibility of much research and that note worrisome trends in the number of articles that are retracted. A much-discussed article in the Proceedings of the National Academy of Sciences by scientific superstars Bruce Alberts, Marc Kirschner, Shirley Tilghman, and Harold Varmus highlighted serious problems in biomedical research and worried about the overproduction of PhDs. Members of Congress are making a concerted effort to influence NSF funding of the social sciences, and climate change deniers would jump at the opportunity to influence that portfolio. And PCAST held a hearing at the National Academies to further explore problems of scientific reproducibility.

Because its management structure and systems have served science well for so long, the community is understandably reluctant to make dramatic changes. But we have to recognize that these systems were designed for a smaller, simpler, and less competitive research enterprise. We should not be surprised if they struggle to meet the demands of a very different and more challenging environment. For research to thrive, it requires public trust. Maintaining that trust will require that the scale and nature of management match the scale and nature of operations.

We all take pride in the increasingly prominent place that science holds in society, but that prominence also brings closer scrutiny and responsibility. The Internet has vastly expanded our capacity to disseminate scientific knowledge, and that has led many people to know more about how research is done and decisions are made. In rethinking how science is managed and preserving its quality, the goal is not to isolate science from society. We build trust by letting people see how rigorously the system operates and by listening to their ideas about what they want and expect from science. The challenge is to craft a management system that is adequate to deal with the complexities of the evolving research enterprise and also sufficiently transparent and responsive to build public trust.

Saturday Night Live once did a mock commercial for a product called Shimmer. The wife exclaimed, “It’s a floor wax.” The husband bristled, “No, it’s a dessert topping.” After a couple of rounds, the announcer interceded: “You’re both right. It’s a floor wax and a dessert topping.” Fortunately, the combination of scientific rigor and social responsiveness is not such an unlikely merger.

Imagining the Future City

A rich blend of engaging narrative and rigorous analysis can provide decisionmakers with the various perspectives they need when making choices with long-range consequences for cities around the world.

An ashen sky gives way to streaks of magenta and lilac across the Phoenix cityscape in 2050. L’yan, one of millions of late-night Creators, walks slowly through the fields of grass growing in the elevated honeycomb transportation network on her way back from the late-night block party. L’yan has only a short trip to her pad in downtown Phoenix. She, along with 10,000,000 fellow Creators, has just beaten the challenge posted on the PATHWAY (Privileged Access-The Hacker WAY) challenge board. L’yan shivers, a cool breeze and the feeling of success washing over her. She had gained PATHWAY access during her ninth year in the online Academy of Critically Adaptive trans-Disciplinary Engineering, Mathematics, Informatics, & Arts (ACADEMIA). She dropped out after achieving Creator status. Who needs a doctorate if you have access to PATHWAY challenges? Research funds are no longer tied up in disciplinary colleges and universities. In Phoenix, as in many innovation centers around the world, social stratification is no longer determined by race, gender, or family wealth; instead, it is based on each person’s skills in problem-solving and adaptive learning, and on their ability to construct and shape materials and to write and decipher code. Phoenix embraces the ideals of individual freedom and creativity, and it amended its zoning in 2035 to allow pads (building sites) on which Creators build towers. Pads are the basis of innovation and are the foundation blocks for the complex network of interconnected corridors that hover above the aging city streets. Today, in 2050, the non-Creators, the squares, live in relics, detached houses, off-pad in the old (2010 era) suburbs at the periphery of the city center.

Science fiction uses personal narratives and vivid images to create immersive experiences for the audience. Scientific scenarios, on the other hand, most often rely on predictive models that capture the key variables of the system being projected into the future. These two forms of foresight—and the people who practice them—typically don’t engage with one another, but they should.

Scientific scenarios are typically illustrated by an array of lines on a graph representing a range of possible futures; for example, possible changes in greenhouse gas emissions and atmospheric temperatures over the next several decades. Although such a spectrum of lines may reflect the results of sophisticated climate models, it is unlikely to communicate the information decisionmakers need for strategizing and planning for the future. Even the most sophisticated models are simplifications of the forces influencing future outcomes. They present abstract findings, disconnected from local cultural, economic, or environmental conditions. A limited number of continuous lines on a graph also communicate a sense of control and order, suggesting that today’s choices lead to predictable outcomes.

Science fiction stories, in contrast, can use rich and complex narratives to envision scenarios that are tangible and feel “real.” Yet science fiction also has its obvious limits as a foresight tool. To be effective, it must be driven by narrative, not by science or the concerns of policymakers. Scenarios constructed through collaborations that draw from the strengths of science and science fiction can help decisionmakers and citizens envision, reflect, and plan for the future. Such rich and embedded scenarios can reveal assumptions, insights, and questions about societal values. They can explore a society’s dependence on technology, its attitudes about the market, or its capacity to effect social change through policy choices. Scenarios can challenge linear cause-effect thinking or assumptions about rigid path dependencies. People are often ready for more complexity and have a greater appreciation of the intertwined forces shaping society after engaging with such scenarios. To illustrate this, we describe a recent project we directed aimed at helping decisionmakers think through the implications of emerging nanoscale science, technology, and innovation for cities.

Constructing scenarios

Sustainability science develops solution options for complex problems with social, economic, and environmental elements, reaching from local to global scales. Design thinking synthesizes information from disparate sources to arrive at design concepts that help solve such complex problems and advance human aspirations, from the scale of the body to the scale of the city. In this project we used both sustainability science and design thinking to map, model, and visualize alternative socio-technical futures that respond to the mounting sustainability challenges facing Phoenix, Arizona.

Currently, science policy in the United States and across the globe is justifying significant investments in nanotechnology by promising, for example, improved public health, water quality, food productivity, public safety, and transportation efficiency. In Phoenix, regional efforts are under way in each of these sectors. The nanotechnologies envisioned by researchers, investors, and entrepreneurs promise to reshape the buildings, infrastructures, and networks that affect the lives of the city’s residents. Furthermore, Phoenix, like many urban centers, is committed to diversifying the regional economy through investments in high-tech clusters and recruiting research-intensive companies. It is already home to companies such as Intel, Honeywell, Orbital Sciences, and Translational Genomics. These companies promise jobs, economic growth, and the benefits of novel technologies to make life easier, not only for Phoenix residents but for consumers everywhere.

We consulted with diverse stakeholders including “promoters” (such as entrepreneurs, funding agencies, staffers, and consultants), less enthusiastic “cautious optimists” (members of the media, city officials, and investors), and downright “skeptics” (staff at social justice organizations, regulatory agencies, and insurance companies). These urban stakeholders have rival objectives and values that highlight the interwoven and competing interests affecting the city’s social, technological, and environmental characteristics. Repeated interactions between the research team and stakeholders led to relationships that were maintained for the duration of the two-year study.

A mixed-method approach to foresight

In collaboration with these diverse stakeholders, we constructed scenarios that explore the following questions: In Phoenix in 2050, who is doing what in nanotechnology innovation, why are they doing it, and with what outcomes (intended and unintended)? How conducive are different models of nanotechnology innovation to mitigating the sustainability challenges Phoenix faces in 2050? We used 2050 as the reference year because it is beyond the near-term planning horizon, yet still within the horizon of responsibility to today’s children.

In the initial stages of research, we collected elements for the scenarios directly from stakeholders through interviews, workshops, local media reports, and public events, and from documents published by academic, industry, government, and nonprofit organizations. That review process yielded a set of scenario elements (variables) in four relevant domains: models of innovation, societal drivers, nanotechnology applications, and sustainability challenges.

(1) Models of innovation represent distinctly different patterns of technological change: market-pull innovation is the conventional procedure of product development and commercialization; social entrepreneurship innovation aligns the interests of private entrepreneurs with the challenges facing society through diverse public-private partnerships; closed collaboration innovation is based on public-private partnerships restricted to a limited number of elite decisionmakers; and open-source innovation leverages the skills of individuals and collectives to generate intellectual property without retaining exclusive rights to it.

(2) Societal drivers enable and constrain people’s actions in the innovation process: entrepreneurial attitudes; public (and private) funding; academic capacities; risk-mitigating regulations (public policy) and liability protection (private activity); and capacity for civic engagement.

(3) Nanotechnology applications result from the innovation process and range from “blue sky” (very early development) to “ubiquitously available.” The applications used in our study include multifunctional surface coatings; energy production, transmission, and storage systems; urban security applications; and nano-enhanced construction materials. All applications are profiled in an online database (http://nice.asu.edu).

(4) Sustainability challenges—mitigated or aggravated through innovation processes—include economic instabilities due to boom-bust cycles of land development and consumer behavior; emerging problems with the reliability of electricity and water systems due to population shifts, aging infrastructure, and future drought conditions; overinvestment in energy- and emission-intensive automobile transportation infrastructure; increasing rates of childhood obesity and other behavioral diseases; social fragmentation along lines of socioeconomic and nationality status; and limited investments and poor performance in public education. The Phoenix region faces each of these challenges today. How (or if) they are addressed will affect the city’s future.

We vetted this set of scenario elements through interviews and a workshop that included a total of 50 experts in high-risk insurance, venture capital, media, urban economic development, regulations, patent law and technology transfer, nanoscale science and engineering, and sustainability challenges. We analyzed the consistency among all scenario elements, and generated 226,748,160 computer-based combinations of the scenario elements. Inconsistent scenarios were eliminated and a cluster analysis yielded a final set of four scenarios (based on the four innovation models). Technical descriptions summarized the key features of each scenario. Finally, a narrative was written for each scenario (such as the one for the open-source innovation scenario at the beginning of this article). Each narrative starts at sunrise to depict a day in the life of a person in Phoenix in 2050.
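
For readers curious about the mechanics of this step, the sketch below illustrates the general approach in Python. The element options and the consistency judgments are hypothetical placeholders rather than the ones used in the project; the sketch simply shows how a full cross-product of scenario elements can be generated and then pruned to the internally consistent combinations that feed the cluster analysis.

```python
from itertools import product

# Hypothetical scenario elements drawn loosely from the four domains
# described above. The real study used far more options per variable,
# which is how the full cross-product reached roughly 227 million
# raw combinations.
elements = {
    "innovation_model": ["market-pull", "social entrepreneurship",
                         "closed collaboration", "open-source"],
    "public_funding": ["low", "moderate", "high"],
    "application": ["surface coatings", "energy systems",
                    "urban security", "construction materials"],
    "challenge": ["water reliability", "childhood obesity",
                  "social fragmentation", "public education"],
}

# Hypothetical pairwise consistency judgments: pairs of element values
# that expert reviewers might flag as implausible in the same scenario.
inconsistent_pairs = {
    ("closed collaboration", "low"),      # assumes large public investment
    ("open-source", "urban security"),    # assumes centralized control
}

def is_consistent(combo):
    """Keep a combination only if none of its element values were
    flagged as mutually inconsistent during expert vetting."""
    return not any(
        (a, b) in inconsistent_pairs or (b, a) in inconsistent_pairs
        for i, a in enumerate(combo)
        for b in combo[i + 1:]
    )

# Generate every raw combination, then keep only the consistent ones;
# the surviving set is what a cluster analysis would group into a small
# number of distinct scenario skeletons.
consistent = [c for c in product(*elements.values()) if is_consistent(c)]
print(f"{len(consistent)} consistent combinations remain for clustering")
```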

The narratives were used as the basis for a graduate course that we taught at Arizona State University’s Design School. Students were asked to develop urban designs from the scenario narratives. The challenge for the students was that the narratives were neither architectural design specifications nor articulations of typical design problems. One student joked, “We are working with material too small to see, in a future that doesn’t exist, at a physical scale bigger than any other design studio project.” (In contrast, the graduate design studio next door was designing a 10-story law school for an existing site in downtown Phoenix.)

Students first converted the scenario narratives into visual storyboards, from which they developed initial urban design proposals. The proposals were reviewed by a panel of experts, including engineers, real estate developers, social scientists, and community advocates. Students formulated suppositions, for example, in the social entrepreneurship innovation scenario, that boundaries between public and private property are blurred, or, in the open-source innovation scenario, that restrictive building codes are eased exclusively for Creators in exchange for the benefits offered to the city. The suppositions served as a point of departure for the final urban design proposals. Ideas poured forth throughout the process, as students generated thousands of sketches, drawings, and illustrative boards to test their urban design proposals.

Each student dedicated 60 or more hours per week to the project. In turn, the Design School offered abundant technical and social resources to enable their productivity. Students were given a budget to build their lab and create an environment suitable for the project. Every Friday they participated in group coaching led by a clinical psychologist, a faculty member at the Design School. A filmmaker worked with the students on illustrating the final urban design proposals in short videos.

By the end of the semester, the students had created four videos—one for each scenario—offering a guided tour of a nano-enhanced Phoenix in 2050. The videos were reviewed by a panel of experts, including land developers, technology specialists, architects, sustainability scholars, urban designers, and social scientists. Over the summer, a group of six students incorporated feedback from the end-of-semester review and condensed the four scenarios into two. They produced three-dimensional models and polished the final video, entitled PHX 2050 (http://vimeo.com/88092568). The 15-minute video exposes audiences to distinctly different futures of nanotechnology in the city—from drivers to impacts. It has been used in high-school classrooms in Phoenix; science policy workshops in Washington, DC; and seminars, including one hosted by the U.S. Green Building Council with professionals from the construction sector. The video sparked new conversations and stimulated people to consider, simultaneously, the social and physical elements of the city, the role of technology, and divergent future outcomes.

The nano-enhanced city of the future: Phoenix in 2050

In addition to the movie and the four “day-in-the-life” vignettes, the students prepared graphic images that visually capture the essence of the scenarios and general descriptions of the key underlying elements. Samples of each of these are provided here.

Market-driven innovation: Suppositions

“Market pull” is the dominant mode of innovation and problem-solving to meet user demands. Market mechanisms efficiently meet the demand for low-cost goods, such as personal electronics, provided by private corporations and entrepreneurs alike. Product competition affords comfort and convenience-based products that ensure the “good life.”

Citizens hope to become wealthy and famous entrepreneurs. Government funding agencies focus on small business research grants, as a means to privatize and market technologies created in university and federal labs. Venture capitalists host regional and national conferences and invite researchers, budding entrepreneurs, and program managers. These forums offer critical feedback to technology developers and funding agencies on how to get technologies closer to market before private investments are made.

Advances in nanotechnology support legacy energy and transportation infrastructure, which gains just enough efficiency to stave off collapse. Battery efficiency allows cars to run exclusively on electric motors, yet the existing electrical power supply remains fossil-fuel dependent. Nano-enabled materials coat the glass facades of buildings and are embedded in their electrical systems.

Society is divided between the rich and minimum-wage earners, the middle class having disappeared decades ago. Pressing urban sustainability challenges amplify stress among people, the economy, and the environment.

Market-driven innovation: Will the sun rise in Arizona?

Rays of sunlight break across Nancy’s bed. The window’s tinting melts away as the night sky transforms into a grayish-purple aurora in anticipation of morning. Nancy awakens. Another day to fight for solar energy has begun, and the aroma of freshly brewed coffee greets her. She sips her coffee and reviews her notes for the upcoming 2050 Arizona Town Hall. She scoffs. These meetings have been going on for more than a half-century, since before 2010.

And where are they today? No different from 2010: maybe a notch hotter at night, and water restrictions are being imposed, but the real lack of change is in the energy sector, the lifeblood of any city. The market price of solar has never quite caught up with the marginally decreasing price of nuclear, coal, and natural gas. There are a hundred reasons, a thousand little incremental changes in technology and policy that have advantaged legacy energy providers and continuously crippled the solar industry. Many point to the little-known Arizona Corporation Commission—the decisionmaking body that sets Renewable Energy Standards for state-regulated electrical utilities in Arizona, a state with 360 days of full sun every year. A political action group has supported candidates who have undermined the solar industry and quietly propped up the legacy energy sources relied on by the centralized utilities.

Closed collaboration: A world under control

Ja’Qra awakes to the morning rays gently easing their way through the blinds. The “Desert Sunrise” is programmed into the Home Intelligence System, which syncs every second with the Community Health Management system. Those systems are responsible for Ja’Qra’s health and security. They update the Maricopa Sheriff’s office every two seconds, ensuring almost real-time security updates. Since the Arizonians for Citizen Transparency Act came into effect in 2024, all children have been encoded with their social security numbers embedded within eighty-one discrete codons using synthetic G-A-C-T sequences. Ja’Qra validates her status as awake. Her routine is soothing. She presses her hands into a semi-solid gel that fills the bathroom sink monitoring station. It massages her hands, lightly scrubs the skin, applies a novel daily nail polish pattern, and painlessly extracts 10 to 20 dead skin cells to verify Ja’Qra’s identity. A fully integrated personalized medicine program in Arizona requires full participation by all residents to populate the database of genetic diseases. Full citizen participation also provides the baseline health information from which illnesses can be identified as anomalies and treated in a preventative manner. Ja’Qra dutifully reviews the prescribed daily health reports and consumes her breakfast MEAL: Medically Effective And Lovable.

Closed collaboration innovation: Suppositions

Mission-oriented government agencies, like the Department of Defense and National Institutes of Health, collaborate with private contractors to create novel technological solutions to social problems. By concentrating power in large administrative units, solutions are implemented with controlled technologies to address infrastructure, security, and public health challenges.

Citizens demand economic stability, security and universal health care. Clean water and air also garner unquestioned public support. A few privileged decisionmakers direct public funding for nanotechnology innovation. This ensures that highly educated experts in the field design technological solutions that align with each federal agency’s mission.

Future success is expected to mirror historic feats of science and engineering, exemplified by the atomic bomb and penicillin. Federal agencies react swiftly to identified threats and challenges. This has led to the containment of threats and has mitigated many stressors of urban life, the economy, and the environment. Urban challenges are addressed with the orderly deployment of nanotechnology, such as ensuring universal health care by monitoring everyone’s health with real-time analytics and precise pharmacological treatments.

The city is reminiscent of Singapore—all clean and shiny with buildings and infrastructure protected by integrated security systems. Federal programs provide energy, water, state security, and health care. Public schools rely on memorization-style curricula and are seldom capable of producing adaptive learners.

However, the narrow perspective of the homogeneous decisionmakers leads to unforeseen outcomes, including the collapse of the creative class. Societal hierarchies persist as privileged families remove their children from public schools in favor of elite educational institutions that enhance a child’s problem-solving skills and thus improve their future employment opportunities.

Social entrepreneurship: How communities solve problems

Dark clouds give way to the morning’s rays. Jermaine awakes to the pungent aroma of creosote oils mixed with ozone—a smell of rain and the promise of wildflowers in the Southwest. The open window lets in light, fresh air, and the sounds of friends and neighbors. Jermaine worked late at the CORE (Collective Of Researchers and Entrepreneurs) facility yesterday. CORE is helping the City of Phoenix address the contaminated groundwater just north of Sky Harbor Airport. The plume had been contained in the 1990s and just left there. The effects of drought in the Salt, Verde, and Colorado Rivers have prompted the city to revisit this long-abandoned water reserve. Jermaine’s formal education and leadership qualities have made him an obvious choice to lead this project. CORE comprises financiers, lawyers, citizens, scientists, engineers, city water planners, and a rotating set of college professors and local high school teachers. CORE takes on challenges and enters problem-oriented competitions formally organized by federal, tribal, state, county, and city governments. Jermaine is not going to “make it big.” Then again, Jermaine didn’t study hydrogeology to get rich. Back in 2010 Jermaine heard that nZVI (nanoscale Zero Valent Iron) could solve the problem, but testing stalled and nZVI was abandoned. Today, in 2050, he aims to renew decontamination efforts in Phoenix.

Social entrepreneurship innovation: Suppositions

Social entrepreneurship innovation attempts to bring civil society together to solve challenges. City, state, federal, and international governments work to identify problems that demand technical and social change. This practice of collectively addressing societal challenges is enabled by large-scale and continuous collaboration between different sectors of society.

Citizens and civic organizations partner with researchers to discover the root causes of persistent challenges. Strategic plans are drafted to ameliorate the symptoms, while targeting the underlying causes. The science policy agenda is attuned to directly addressing societal challenges via funding priorities and awards. Risk mitigation relies on clear roles, which are transparent to everyone. For example, cities incentivize construction firms to cut down on urban heat island effects.

Coordinated efforts in tight-knit urban neighborhoods allow pedestrians, carbon fiber bicycles, ultra-lightweight cars, trains, and buses to move along segmented streets shaded with native vegetation and overhanging building facades. Concerted efforts by citizens, city leaders, and corporate partners slowly address historical groundwater contamination, aging highways, and underinvestment in public education. The pursuit of healthy, vibrant, just, and diverse communities unites the city and its citizens.

Yet the challenge of long-term collaboration creates burnout among stakeholders. Retaining citizen buy-in and maintaining the city infrastructure are not trivial. Cultural expectations for immediacy and simplicity confront a thorough process of problem analysis, solution evaluation, and program implementation that takes decades.

Open-source innovation: Suppositions

The scenario narrative at the beginning of this article and its corresponding image depict Phoenix in 2050 with open-source innovation as the organizing force for urban life. Individuals are incentivized through competitions that rely on problem-solving and creative-thinking skills. Public organizations and private companies both derive valuable new ideas by rewarding people with those skills.

Children and adults of all ages learn from a personalized, skills-based education system. This education model supports a competitive, creative population attuned to individual rewards. Government agencies post small daily challenges and larger collective problems on challenge boards. Individuals advance based on their ability to solve more and more “wicked” problems. Reports on the accomplishments of top-tier “Creators” bombard social media with opportunities to reap the rewards offered by public challenges. Corporate R&D relies on collective open forums that reward success and offer smaller incentives for lesser contributions, such as product feedback.

There are almost no rules or restrictions on innovation. Individuals are responsible for the objects they make and release into the world. The city is awash in nanotechnological applications, built atom-by-atom with 3D printers to specified tolerances at a moment’s notice. 3D printers are widely available, allowing people to construct most of the products they desire at home, including bicycles, cars, small airplanes, weapons, and solar panels. Individuals just need the time, materials, and understanding to make what they want.

The electrical energy grid, once thought vulnerable to solar power’s variable loading rates, no longer relies on centralized distribution of electricity. Hyperlocalized solar and geothermal energy sources are ubiquitous across the city. The aging grid slowly rusts in the desert air. Yet the city continues to experience stress. Balancing water use and natural recharge rates is still an unrealized goal.

Open-source innovation is not without societal inequities, as preoccupation with individual achievement and meritocracy enforces social hierarchies. The urban footprint expands, covering the desert with single-story residences and perpetuating the reliance on personal automobiles and highways.

Shaping innovation

Scenarios need to be treated as a bundle, not in isolation: the power of scenarios is in what can be learned by comparing them. The scenarios presented here differ significantly in the role of public participation, public funding, risk mitigation, and the distribution of goods and services for the development of cities worldwide.

Public participation shapes innovation. The role the public plays in technological innovation varies across the scenarios and affects the development of the city. In the market-pull scenario, citizens are viewed as consumers of innovative technologies; public participation is limited to the later stages of innovation. Social entrepreneurship innovation offers the public opportunities to engage at key points throughout the innovation process, from problem identification to testing and ultimately implementation of solutions. Closed collaboration innovation retains power within an elite decisionmaking body, typically a government-industry partnership. The public is subjected to its decisions. Open-source innovation provides skilled people (Creators) with opportunities to reshape the city, while people without the requisite skills or desire are bystanders. The scenarios show how the public is engaged in, or subjected to, innovation, and explore the implications for urban development.

Responsiveness to societal demands by public funding agencies informs outputs. Government funding is often analyzed in terms of return on investment and knowledge creation. Levels of public investments in science, technology, and innovation are supposed to correspond to the extent of resulting public benefit. Our scenarios highlight stark differences in the relationship between investments and how outputs from those investments serve the public interest. In the market pull scenario, there is little direct connection to the public interest; success is exclusively measured by market returns, with limited regard for externalities or negative consequences. Social entrepreneurship innovation demands that government funding be highly attuned to solving problems to serve the public interest. Closed collaboration innovation prioritizes large-scale national investments to satisfy the public interest in areas such as national defense, reliable and constant electricity, and affordable health care. Such a one-size-fits-all approach does not readily adapt to challenges unique to specific geographies, so subpopulations are often overlooked. Open-source innovation attempts to address legacy issues by incentivizing talented individuals with innovation awards offered by government agencies. These are four very different ways in which the public interest is served by public investments in science, technology, and innovation.

Anticipation and risk mitigation enable innovation. Vehicles can safely travel at higher speeds if mechanisms are in place to stop them before collisions occur. Investors (public and private) in technological innovation should explore this metaphor. Proper brakes calibrated by advances in technology assessment and with the power to halt dangerous advances could revolutionize the speed at which problems are solved. The scenarios each address risk in different ways. Market-pull innovation addresses risks reactively. Negative effects on people and the environment are identified after the problems are observed and deemed unacceptable. This is like driving forward while looking in the rearview mirror. Social entrepreneurship innovation attempts to delineate clear and transparent roles for risk mitigation. Potential solutions are tested iteratively as a means to anticipate foreseeable risks and assess outcomes before full-scale implementation. This approach is slow and methodical. Closed collaboration innovation takes known hazards (such as terrorism or climate change) as the starting point and attempts to mitigate the risks through innovation, but seems to lack the adaptability to address future outcomes. Open-source innovation presupposes that the Creators are responsible for their own actions. This assumption links risk mitigation to the individual, and thus to each Creator’s capacity to foresee the outcomes of the technology she or he creates. These risk mitigation and adaptation approaches are not the same as the four models of innovation, but the connections were strongly consistent throughout the scenario development process. Innovation policy needs to address risk mitigation not as slowing down progress, but as a means to allow faster development if proper brakes are in place to halt dangerous developments.

Distribution: Pathways to realize innovation benefits. The benefits of innovation vary from personal consumer products (well suited to market pull with high levels of competition) to universal goods such as water that are delivered through large-scale infrastructure (well suited to closed collaboration). Social entrepreneurship innovation delivers nanotechnologies to address societal challenges that lend themselves to a technological solution. Closed collaboration innovation is primarily organized to integrate nanotechnology into large systems, especially if the technology increases system control and efficiency. Thus, public infrastructures, such as traffic sensors, electricity monitoring and distribution networks, and large public health data systems, would be amenable to a closed collaboration approach. Open-source innovation provides benefits personalized to the needs of the creator. Programmable machines that print 3D structures and functional objects could make nanotechnology ubiquitous for the creator class. The public interest is well served by a diversity of delivery mechanisms for different products and services. An overreliance on a single mechanism such as open-source innovation will prove ineffective in delivering goods and services to society.

Integrated foresight

Albert Einstein’s oft-quoted aphorism, “We can’t solve problems by using the same kind of thinking we used when we created them,” calls out the need for alternative innovation models. Each scenario depicts a range of outcomes that reflect a connection between the mode of innovation and society’s ability to address its urban sustainability challenges. The market-pull scenario explores the implications of focusing singularly on economic development. This seems to perpetuate negative externalities, including the continued segregation of socioeconomic classes and dependence on carbon-intensive transportation and energy systems. Social entrepreneurship innovation takes sustainability challenges as its starting point and solves problems collaboratively, albeit slowly. It relies on social and behavioral changes as well as technological solutions. Closed collaboration innovation addresses urban sustainability challenges through the centralized management of infrastructure. Open-source innovation addresses certain urban sustainability challenges through the collective efforts of skilled individuals, while other challenges remain unaddressed or worsen. As a set, the four scenarios allow decisionmakers to appreciate the benefits and challenges associated with each innovation approach—and the need for diverse strategies to apply emerging technologies to the design of our cities.

Our integrated approach to foresight, with its strong connections to places and people, suggests changes in science, technology, and innovation policy. Can the scenarios trigger any of those changes? We have presented them in a variety of settings, from high school and university classrooms to academic conferences. The film has been used in deliberation among professionals and policymakers. To date, however, there is no evidence that the scenarios are leading to constructive strategy-building exercises that shape science, technology, and innovation policies toward a sustainable future for Phoenix. Nevertheless, our efforts have led to reflections among stakeholders and afforded them the opportunity to consider value-laden questions such as: What future does our society want to create? This project was not commissioned directly by policy or business stakeholders. Therefore, the primary outcomes may well rest in the newly developed capacities of the design students, stakeholder partners, and faculty to consider the complex yet often invisible interconnections between our technological future and the choices that we make at every level of society. Our hope is that such insights will influence the way the project participants pursue their professional efforts and careers, and thereby contribute to innovation processes that yield sustainable outcomes for cities around the world.

Recommended readings

R. W. Foley and A. Wiek, “Patterns of Nanotechnology Innovation and Governance within a Metropolitan Area,” Technology in Society 35, no. 4 (2014): 233–247.

A. Wiek and R. W. Foley, “The Shiny City and Its Dark Secrets: Nanotechnology and Urban Development,” Curb Magazine 4, no. 3 (2013): 26–27.

A. Wiek, R. W. Foley, and D. H. Guston, “Nanotechnology for Sustainability: What Does Nanotechnology Offer to Address Complex Sustainability Problems?” Journal of Nanoparticle Research 14 (2012): 1093.

A. Wiek, D. H. Guston, S. van der Leeuw, C. Selin, and P. Shapira, “Nanotechnology in the City: Sustainability Challenges and Anticipatory Governance,” Journal of Urban Technology 20, no. 2 (2013): 45–62.

Rider W. Foley ([email protected]) is an assistant professor in the Engineering and Society Department at the School of Engineering and Applied Science at the University of Virginia and affiliated with the Center for Nanotechnology in Society, Consortium for Science, Policy, and Outcomes at Arizona State University. Darren Petrucci is a professor at the School of Design at Arizona State University. Arnim Wiek is an associate professor at the School of Sustainability and affiliated with the Center for Nanotechnology in Society, Consortium for Science, Policy, and Outcomes at Arizona State University.

Exposing Fracking to Sunlight

The public needs access to reliable information about the effects of unconventional oil and gas development in order for it to trust that local communities’ concerns won’t be ignored in favor of national and global interests.

The recent expansion of oil and natural gas extraction from shale and other tight geological formations—so-called unconventional oil and gas resources—has marked one of the most significant changes to the U.S. and global economy so far in the 21st century. In the past decade, U.S. production of natural gas from shale has increased more than 10-fold and production of “tight oil” from shale has grown 16-fold. As a result, natural gas wholesale prices have declined, making gas-fired power plants far more competitive than other fuel sources such as coal and nuclear power.

Oil and gas extraction enabled by hydraulic fracturing has contributed to a switch away from coal to natural gas in the U.S. power sector. Although that switch has been an important driver for reducing U.S. carbon emissions during combustion for electricity generation and industrial processes, carbon emissions from natural gas do contribute substantially to global warming. Thus, from a climate standpoint, natural gas is less attractive than lower- and zero-carbon alternatives, such as greater energy efficiency and switching to renewable energy. In addition, the drilling, extraction, and pipeline transportation of oil and natural gas result in the leakage of methane, a potent greenhouse gas roughly 25 times stronger than carbon dioxide as a warming agent over a 100-year period.
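
As a rough illustration of why leakage matters in this comparison, the back-of-the-envelope calculation below converts a hypothetical methane leak into carbon dioxide equivalents using the factor of roughly 25 cited above. The throughput and leakage rate are invented numbers chosen only to show the arithmetic, not measured values.

```python
# Back-of-the-envelope CO2-equivalent calculation for methane leakage.
# The warming factor of roughly 25 (per unit mass, 100-year horizon)
# comes from the comparison in the text; the throughput and leakage
# rate below are hypothetical values used only for illustration.

GWP_METHANE = 25                # kg CO2-equivalent per kg of methane
gas_delivered_kg = 1_000_000    # hypothetical mass of natural gas delivered
leakage_rate = 0.02             # hypothetical 2% of throughput leaked

leaked_methane_kg = gas_delivered_kg * leakage_rate
co2_equivalent_kg = leaked_methane_kg * GWP_METHANE

print(f"{leaked_methane_kg:,.0f} kg of leaked methane is equivalent to "
      f"{co2_equivalent_kg:,.0f} kg of CO2 in warming terms")
```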

Domestic energy demand and supply changes are also beginning to shift U.S. geopolitical dynamics with large fossil fuel producers such as Russia and the Middle Eastern states. Although much of the rhetoric—including a significant industry advertising campaign by U.S. gas producers—focuses on the benefits to the nation of a domestic supply of energy, natural gas and oil produced in the United States are part of a global marketplace. For example, just a few years ago, terminals were being built both onshore and offshore in U.S. waters to import liquefied natural gas (LNG) for energy in the New England region. Now a major public policy debate is under way about whether the United States should export natural gas. As a consequence some of these same terminals are being dismantled and others may be redeveloped for the export of LNG.

Meanwhile, competing desires for less-expensive energy and associated chemical raw materials for plastics, iron, and steel products manufacturing in the United States have created political pushback against allowing exports. But with uncertainty about supply from Russia for major markets in Europe due to political turmoil, and rapidly growing energy needs in China and India, among others, upward price pressure on natural gas as well as oil is almost certain to follow, keeping the debate over the geopolitics of these issues alive for years to come.

What is certain though is that a consistent supply of domestic energy, and derived chemicals that serve as raw materials feedstock for manufacturing, will support a 20th-century–style economy with fossil fuels as its base. But what does that mean for the development of renewable energy sources, or alternatives to plastics, industrial chemicals, or natural resources in the United States? What does large-scale investment in these resources mean for our mitigation of carbon emissions and adaptation to climate change impacts?

At the same time that much of the attention is focused on these national and global implications, it can be forgotten that considerable uncertainty persists about the local implications of fracking for communities and the environment. Whereas the larger-scale global questions may be harder to answer, the proper application of federal, state, and local laws and better public information can go a long way toward answering critical questions on the local level.

Examining production

Despite the rapid pace of development of unconventional oil and gas resources enabled by fracking across the United States, and its influence on domestic and international energy markets, there is remarkably little independent information available to the public on the effects, both positive and negative, of such an undertaking. And because fuller analysis to answer these questions is not available, the American people and their elected representatives have not had a chance to make informed choices about whether and how unconventional oil and gas development occurs.

This is, in part, due to the lack of comprehensive regulation of unconventional oil and gas development at the federal level. Because the oil and gas industry secured many exceptions to our major environmental laws, oversight of this new, fast-paced development has fallen primarily to the jurisdiction of the states, which often lack the resources to require and enforce data collection and sharing. So while discussion of risks and concerns associated with unconventional oil and gas development has taken place in the press, in academic literature, at federal agencies, and among various special interest and advocacy groups, such conversations have occurred largely outside of any clear, overarching policy framework.

At the same time, concerted actions by industry severely limit regulation and disclosure, leaving citizens, communities, and policymakers without access to the information on the full range of consequences of shale resource development that they need to make fact-based decisions. Compounding this problem is the fact that much of the scientific discourse on the technical dimensions of unconventional oil and gas development, including the engineering of fuel extraction, production, transportation, refining, and waste disposal, not to mention the economic, environmental, and social impacts, has failed to adequately inform the public conversation.

In the absence of comprehensive and credible information, readily available to the public, conversations and decisions on unconventional oil and gas development in the United States have been marred by an extremely polarized debate over the risks, benefits, and costs of development. Development has expanded in many communities with little clear requirement for state and local jurisdictions to collect the information needed to inform the public, adequately regulate the industry, and ensure public health and safety. Worse still, most sites have been developed without baseline studies of environmental conditions before drilling and without any ongoing monitoring of changes to air and water quality during and after development, perpetuating the cycle of insufficient data collection.

Science needs to be part of the choices we make in a democratic society. In order to reach decisions with the direct involvement of the citizenry, scientific information that is independent, credible, and timely must be accessible to the public and play an important role in informing decisions.

Hydraulic fracturing involves risks that are both similar to and different from those of conventional oil and gas development (Table 1). Risks that are qualitatively different include the volume, composition, use, and disposal of water, sand, and chemicals in the hydraulic fracturing process; the size of well pads; and the scale of fracking-related development. Importantly, the advent of hydraulic fracturing and horizontal drilling has brought development to new and more-populated areas, increasing development’s intersection with communities. These factors can contribute to rapid social disruption as well as environmental damage, particularly to regions that have not previously been exposed to the oil and gas industry.

TABLE 1

Unfortunately, the social costs of unconventional oil and gas development have not been analyzed in nearly the same detail as the geopolitics of energy. These social costs include public health and environmental effects of fossil fuel production and the manufacturing of products enabled by this boom (Table 1). And these social costs range from local effects on communities to implications for global warming. In addition, environmental and socioeconomic concerns around oil and gas development can be different for different communities. For example, western states and localities tend to be more concerned about effects on water availability, whereas eastern states and localities tend to focus more on the impact on water quality. Communities with existing oil and gas facilities may worry about expanded development, whereas those that have not previously hosted the industry are often concerned about potential new environmental and socioeconomic effects, such as strain on public services, new pipelines, and heavy truck traffic.

Because the data on these effects are either lacking or incomplete, at least some states (e.g., Maryland, New York, and California) and localities have responded by enacting moratoriums or outright bans on development. Fixed-duration moratoriums are usually intended to allow time for either the assessment of environmental and public health impacts or for the formulation of an adequate regulatory structure for development. To mitigate many of the risks associated with unconventional oil and gas development, there is a fundamental need for comprehensive baseline analysis followed by monitoring of effects. The resultant information must be publicly available to the greatest extent practicable, so that citizens and elected officials have open access to the scientific information in order to decide if and how to regulate development in their communities.

The government role

Given the dramatic impact of unconventional oil and gas development on the U.S. economy, energy future, and industrialization of rural landscapes, it is more than a little surprising that there is no comprehensive governance system in place to safeguard the public trust and to facilitate information collection and sharing. As development has proceeded, there has been a concerted push by industry to reduce the federal government’s role in management and relegate any regulatory oversight to the state level. This push has resulted in a long list of special exemptions for the oil and gas industry from existing major environmental federal laws (Table 2).

TABLE 2

Despite this failure to manage the impacts of unconventional oil and gas production, agencies like the Environmental Protection Agency (EPA) have in the past been effective at environmental regulation. Federal environmental laws and the accompanying regulatory systems for most types of industrial development are well articulated. They are largely implemented by the states with federal support and oversight, and most importantly have resulted in major improvements in the quality of air and water, toxic waste cleanups, and public health over the past half century. In addition to setting national standards for many industrial activities, the U.S. system of environmental laws provides extensive opportunity for informing the public and seeking their input to the policymaking process. This open process certainly requires time and effort and entails some cost, but citizens in a democratic society have a right to be informed and to voice their views. And the government, as well as industry, has an obligation to listen and be as responsive as possible.

Although states have environmental protection statutes that are often in parallel to the federal mandates, there is substantial inconsistency in their application and often a limited capability at the state level to assess, monitor, and enforce requirements. State regulation often relegates public input to notice and comment on permit applications. Public meetings may or may not be required. There is no clear requirement for alternatives to be considered, nor for a broader analysis of public health or environmental effects as there would be under federal authority. Therefore, exemptions from key federal statutes such as the Clean Air Act, Clean Water Act, and CERCLA (Superfund) for oil and gas development are a major concern. They also result in inconsistency in standards and management, lack of coordination with federal agencies, and the loss of basic protections for the public, including the opportunity to have greater levels of input. Together, all of these legal exemptions limit the gathering of critical scientific information on the effects of fracking on air and water quality, and consequently undermine public trust.

Earning public trust

In July 2013, the Center for Science and Democracy at the Union of Concerned Scientists held a forum in Los Angeles on Science, Democracy, and Community Decisions on Fracking. The forum brought together a diverse collection of stakeholders, including scientists, policy specialists, industry, local government officials, and community groups. One of the oft-repeated points during the forum was the importance of communities developing trust in both industry and government. Community stakeholders who participated expressed the need to be included in the process, for their voices and concerns to be heard, and for their health and well-being to be considered a priority.

Open access to scientific information can help earn the public trust. Unfortunately, efforts to manipulate or otherwise impede the information flow to both the public and the scientific community have significantly undermined the public’s trust that risks are being minimized and competently managed. These efforts include the failure to fully disclose the chemicals used in fracking, the blocking of access to drilling sites for independent scientists, the lack of disclosure of industry involvement in academic studies of fracking, and legal settlements that prevent the release of industry-collected data. In fact, too many cases in which incidents of pollution or other problems have occurred have been met by concerted efforts by industry to quickly contain the information, block access to well sites, and impose legal confidentiality requirements as part of compensation for losses. The resulting lack of access to information makes it more difficult to document cases of air and water contamination and develop risk reduction strategies, further diminishing public trust in industry and government.

An integrated system of data collection, baseline testing, monitoring, and reporting is needed in order for scientists and decisionmakers to better understand and manage risks. The coordination and provisioning of such comprehensive data in a format that is easily available and accessible to health care and emergency workers as well as the affected communities are equally desirable.
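
As a concrete sketch of what such an integrated record might look like in machine-readable form, the example below defines a minimal data structure for a single measurement. The fields are hypothetical and illustrative, not a schema proposed in this article or drawn from any agency; the point is only that baseline and follow-up measurements need a common, queryable structure if health care workers, emergency responders, and residents are to use them.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MonitoringRecord:
    """Hypothetical minimal record for one air- or water-quality
    measurement at an unconventional oil and gas site. Field names
    are illustrative, not taken from any existing regulatory schema."""
    site_id: str                 # well pad or facility identifier
    latitude: float
    longitude: float
    medium: str                  # "air", "surface water", or "groundwater"
    analyte: str                 # e.g., "methane", "benzene", "chloride"
    value: float
    unit: str                    # e.g., "ppm" or "mg/L"
    sample_date: date
    phase: str                   # "baseline", "during development", "post-development"
    lab: Optional[str] = None    # analyzing laboratory, if independent

# Example: a hypothetical baseline groundwater sample taken before drilling.
baseline = MonitoringRecord(
    site_id="EXAMPLE-001", latitude=40.0, longitude=-79.0,
    medium="groundwater", analyte="methane", value=0.3, unit="mg/L",
    sample_date=date(2013, 5, 1), phase="baseline",
)
```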

Importantly, public trust is not just a concern for politicians or affected communities but must also be earned by industry. Greater trust benefits companies by building a better relationship with the communities where they operate. An open and responsive company has the potential to gain greater public support and mitigate future risks to business. Instead of pushing back against regulatory controls, the oil and gas industry can gain greater consistency and certainty by allowing the already well-developed system of federal laws to fulfill its charge of protecting public health and the environment. Part of the value of these laws is that they level the playing field so that all businesses work to the same standards. Working with, rather than against, the system of governance will result in greater sustainability of the industry itself and help guard against the risk that even a single accident or bad actor could provoke a public and regulatory backlash against the entire industry.

To overcome the gridlock and suspicions in public conversations on fracking, decisionmakers should immediately enact federal policies that would require states to implement comprehensive baseline analysis and monitoring programs for air and water at all well sites. The collected information must be made publicly available and accessible to provide communities with trustworthy information about environmental quality and potential impacts on public health. This need is so fundamental that any delay will continue to add to the ill will toward and distrust of corporate actors. Plus, the costs of such programs are relatively modest as compared to either societal costs or industry profits.

In the important discussion of the national and global political, economic, and climate implications of fracking, we should not forget the need to understand and address its local impacts. Given the potential costs and benefits of unconventional oil and gas resources development on the world and the United States, debates over the proper course for energy development will certainly continue. But comprehensive and independent air and water quality data collection, before, during, and after fracking, made publicly accessible, along with a governance structure for monitoring, enforcement, and managing risks, will go a long way in informing the debate, building public trust, and securing better outcomes for industry and our democratic system.

Recommended reading

Energy Information Administration (EIA), Annual Energy Outlook 2013 with Projections to 2040 (Washington, DC: U.S. Department of Energy, 2013); available online at www.eia.gov/forecasts/aeo/pdf/0383%282013%29.pdf

IHS, America’s New Energy Future: The Unconventional Oil and Gas Revolution and the U.S. Economy, Vol. 3: A Manufacturing Renaissance—Executive Summary (Englewood, CO: IHS, 2013).

M. Levi, The Power Surge: Energy, Opportunity, and the Battle for America’s Energy Future (Oxford, UK: Oxford University Press, 2013).

R.V. Percival, C. H. Schroeder, A. S. Miller, and J. P. Leape, Environmental Regulation: Law, Science and Policy (New York, NY: Aspen Publishers, 2003).

Resources for the Future, State of State Shale Gas Regulation (Washington, DC: Resources for the Future, 2013); available online at www.rff.org/rff/documents/RFF-Rpt-StateofStateRegs_Report.pdf

Union of Concerned Scientists, Toward An Evidence-based Fracking Debate: Science, Democracy, and Community Right to Know in Unconventional Oil and Gas Development (Cambridge, MA: UCS, 2013); available online at www.ucsusa.org/assets/documents/center-for-science-and-democracy/fracking-report-full.pdf

Union of Concerned Scientists, Gas Ceiling: Assessing the Climate Risks of An Overreliance on Natural Gas (Cambridge, MA: UCS, 2013); available online at www.ucsusa.org/assets/documents/clean_energy/climate-risks-natural-gas.pdf

Andrew A. Rosenberg ([email protected]) is director, Pallavi Phartiyal is program manager and senior analyst, and Gretchen Goldman is lead analyst at the Center for Science and Democracy, Union of Concerned Scientists, Cambridge, MA. Lewis M. Branscomb is professor emeritus of public policy and corporate management at Harvard University’s Kennedy School of Government, and adjunct professor at the University of California, San Diego, in the School of International Relations and Pacific Studies.

Streamlining the Visa and Immigration Systems for Scientists and Engineers

Alena Shkumatava leads a research group at the Curie Institute in Paris studying how an unusual class of genetic material called noncoding RNA affects embryonic development, using zebrafish as a model system. She began this promising line of research as a postdoctoral fellow at the Massachusetts Institute of Technology’s Whitehead Institute. She might still be pursuing it there or at another institution in the United States had it not been for her desire to visit her family in Belarus in late 2008. What should have been a short and routine trip “turned into a three-month nightmare of bureaucratic snafus, lost documents and frustrating encounters with embassy employees,” she told the New York Times. Discouraged by the difficulties she encountered in leaving and reentering the United States, she left MIT at the end of her appointment to take a position at the Curie Institute.

Shkumatava’s experience, along with numerous variations, has become increasingly familiar—and troublesome for the nation. For the past 60 years, the United States has been a magnet for top science and engineering talent from every corner of the world. The contributions of hundreds of thousands of international students and immigrants have helped the country build a uniquely powerful, productive, and creative science and technology enterprise that leads the world in many fields and is responsible for much of the growth of the U.S. economy and the creation of millions of high-value jobs. A few statistics suggest just how important foreign-born talent is to U.S. science and technology:

But the world is changing. The United States today is in a worldwide competition for the best scientific and engineering talent. Countries that were minor players in science and technology a few years ago are rapidly entering the major leagues and actively pursuing scientific and technical talent in the global marketplace. The advent of rapid and inexpensive global communication, and of air travel within easy reach of researchers in many countries, has fostered the growth of global networks of collaboration and is changing the way research is done. The U.S. visa and immigration systems need to change, too. Regulations and procedures have failed to keep pace with today’s increasingly globalized science and technology. Rather than facilitating international commerce in talent and ideas, they too often inhibit it, discouraging talented scientific visitors, students, and potential immigrants from coming to and remaining in the United States. They cost the nation the goodwill of friends and allies and the competitive advantage it could gain from their participation in the U.S. research system and from increased international collaboration in cutting-edge research efforts.

It is easy to blame the problems that foreign scientists, engineers, and STEM (science, technology, engineering, and mathematics) students encounter in navigating the U.S. visa and immigration system on the more intense scrutiny imposed on visitors and immigrants in the aftermath of 9/11. Indeed, there is no question that the reaction to the attacks of 9/11 caused serious problems for foreign students and scientific visitors and major disruptions to many universities and other scientific institutions. But many of the security-related issues have been remedied in the past several years. Yet hurdles remain, derived from a more fundamental structural mismatch between current visa and immigration policies and procedures and today’s global patterns of science and engineering education, research, and collaboration. If the United States is going to fix the visa and immigration system for scientists, engineers, and STEM students, it must address these underlying issues as well as those left over from the enhanced security regime of the post-9/11 era.

Many elements of the system need attention. Some of them involve visa categories developed years ago that do not apply easily to today’s researchers. Others derive from obsolescent immigration policies aimed at determining the true intent of foreigners seeking to enter the United States. Still others are tied to concerns about security and terrorism, both pre- and post-9/11. And many arise from the pace at which bureaucracies and legislative bodies adapt to changing circumstances. Here I offer a set of proposals to address these issues. Implementing some of the proposals would necessitate legislative action. Others could be implemented administratively. Most would not require additional resources. All are achievable without compromising U.S. security. Major components of these proposals include:

Simplify complex J-1 exchange visitor visa regulations and remove impediments to bona fide exchange. The J-1 visa is the most widely used type for visitors coming temporarily to the United States to conduct research or teach at U.S. institutions. Their stays may be as brief as a few weeks or as long as five years. The regulations governing the J-1 visa and its various subcategories, however, are complex and often pose significant problems for universities, research laboratories, and the scientific community, as illustrated by the following examples.

A young German researcher, having earned a Ph.D. in civil and environmental engineering in his home country, accepted an invitation to spend 17 months as a postdoctoral associate in J-1 Research Scholar status at a prestigious U.S. research university. He subsequently returned to Germany. A year later, he applied for and was awarded a two-year fellowship from the German government to further his research. Although he had a U.S. university eager to host him for the postdoctoral fellowship, a stipulation in the J-1 exchange visitor regulations that disallows returns within 24 months prevented the university from bringing him back in the Research Scholar category. There was no other visa for such a stay, and the researcher ultimately took his talent and his fellowship elsewhere.

A tenured professor in an Asian country was granted a nine-month sabbatical, which he spent at a U.S. university, facilitated by a J-1 visa in the Professor category. He subsequently returned to his country of residence, his family, and his position. An outstanding scholar, described by a colleague as a future Nobel laureate, he was appointed a permanent visiting professor at the U.S. university the following year. Because of the J-1 regulations, however, unless he comes for periods of six months or less when he visits, he cannot return on the J-1 exchange visitor visa. And if he does return for six months or less multiple times, he must seek a new J-1 program document, be assigned a new ID number in the Student and Exchange Visitor Information System (SEVIS), pay a $180 SEVIS fee, and seek a new entry visa at a U.S. consulate before each individual visit. The current J-1 regulations also stipulate that he must be entering the United States for a new “purpose” each time, which could pose additional problems.

The J-1 is one of three visa categories used by most STEM students and professional visitors in scientific and engineering fields coming to the United States: F-1 (nonimmigrant student), J-1 (cultural or educational exchange visitor), or H-1B (temporary worker in a specialty occupation). B-1/B-2 visas (visits for business, including conferences, or pleasure or a combination of the two) are also used in some instances. Each of these categories applies to a broad range of applicants. The F-1 visa, for example, is required not just for STEM students but for full-time university and college students in all fields, elementary and secondary school students, seminary students, and students in a conservatory, as well as in a language school (but not a vocational school). Similarly, the J-1 covers exchange visitors ranging from au pairs, corporate trainees, student “interns,” and camp counselors to physicians and teachers as well as professors and research scholars. Another J-1 category is for college and university students who are financed by the United States or their own governments or those participating in true “exchange” programs. The J-1 exchange visitor visa for research scholars and professors is, however, entangled in a maze of rules and regulations that impede rather than facilitate exchange.

In 2006, the maximum period of participation for J-1 exchange visitors in the Professor and Researcher categories was raised from three years to five years. That regulatory change was welcomed by the research community, where grant funding for a research project or a foreign fellowship may run longer than three years; previously, there was no way to extend the researcher’s J-1 visa to match.

However, the new regulations simultaneously instituted new prohibitions on repeat exchange visitor program participation. In particular, the regulations prohibit an exchange visitor student who came to the United States to do research toward a Ph.D. (and any member of his family who accompanied him) from going home and then returning to the United States for postdoctoral training or other teaching or research in the Professor or Research Scholar category until 12 months have passed since the end of the previous J program.

A 24-month bar prohibits a former Professor or Researcher (and any member of her family who accompanied her) from engaging in another program in the Professor or Researcher category until 24 months have passed since the end date of the J-1 program. The exception to the bars is for professors or researchers who are hosted by their J program sponsor in the Short-Term Scholar category. This category has a limit of six months with no possibility of extension. The regulations governing this category indicate that such a visitor cannot participate in another stay as a Short-Term Scholar unless it is for a different purpose than the previous visit.

There are valid reasons for rules and regulations intended to prevent exchange visitors from completing one program and immediately applying for another. In other words, the rules should ensure that exchanges are really exchanges and not just a mechanism for the recruitment of temporary or permanent workers. It appears that the regulation was initially conceived to count J-1 program participation toward the five-year maximum in the aggregate. However, as written, the current regulations have had the effect of imposing the 24-month bar on visitors in the Professor and Researcher categories who have spent any period of participation (one month, seven months, or two years), most far shorter than the five-year maximum. Unless such a visitor is brought in under the Short-Term Scholar category (the category exempt from the bars) for six months or less only, the 24-month bar applies. Similarly, spouses of former J-1 exchange visitors in the Professor or Researcher categories who are also researchers in their own right and have spent any period as a J-2 “dependent” while accompanying a J-1 spouse are also barred from returning to the United States to engage in their own J-1 program as a Professor or Researcher until 24 months have passed. This applies whether or not that person worked while in the United States as a J-2. In addition, spouses subject to the two-year home residency requirement (a different, statutory bar based on a reciprocal agreement between the United States and foreign governments) cannot change to J-1 status inside the United States or seek a future J-1 program on their own.

U.S. universities are increasingly engaging in longer-term international research projects with dedicated resources from foreign governments, private industry, and international consortia, and are helping to build capacity at foreign universities, innovation centers, and tech hubs around the world. International researchers travel to the United States to consult, conduct research, observe, and teach the next generation of STEM students. The concept of “exchange,” born in the shadow of the Cold War, must be expanded to include the contemporary realities of worldwide collaboration and facilitate rather than inhibit frequent and repeat stays for varying periods.

In practice, this means rationalizing and simplifying J-1 exchange visitor regulations. Although an immigration reform bill developed in the Senate (S.744) makes several changes in the J-1 program that are primarily aimed at reducing abuses by employers who bring in international students for summer jobs, it does not address issues affecting research scholars or professors.

It may be possible, however, to make the needed changes by administrative means. In December 2008, the Department of State released a draft of revised regulations governing the J-1 exchange visitor visa with a request for comment. Included in the draft rule were changes to program administration, insurance requirements, SEVIS reporting requirements, and other proposed modifications. Although many comments were submitted, until recently there did not appear to be any movement on the provisions of most concern to the research community. However, the department is reported to have taken up the issue again, and a new version of the regulations is anticipated. This may prove to be a particularly opportune time to craft a regulatory fix to the impediments inherent in the 12- and 24-month bars.

Reconsider the requirement that STEM students demonstrate intent to return home. Under current immigration law, all persons applying for a U.S. visa are presumed to be intending to immigrate. Section 214(b) of the Immigration and Nationality Act, which has survived unchanged since the act was passed in 1952, states, “Every alien shall be presumed to be an immigrant until he establishes to the satisfaction of the consular officer, at the time of application for admission, that he is entitled to a nonimmigrant status…”

In practice, this provision means that a person being interviewed for a nonimmigrant visa, such as a student (F-1) visa, must persuade the consular officer that he or she does not intend to remain permanently in the United States. Simply stating the intent to return home after completion of one’s educational program is not enough. The applicant must present evidence to support that assertion, generally by showing strong ties to the home country. Such evidence may include connections to family members, a bank account, a job or other steady source of income, or a house or other property. For students, especially those from developing nations, this is often not a straightforward matter, and even though U.S. consular officers are instructed to take a realistic view of these young people’s future plans and ties, many visa applicants fail to meet this subjective standard. It is not surprising, therefore, that the vast majority of visa denials, including denials of student visas, stem from 214(b): failure to overcome the presumption of immigrant intent.

The Immigration and Nationality Act was written in an era when foreign students in the United States were relatively rare. In 1954–1955, for example, according to the Institute of International Education, there were about 34,000 foreign students studying in higher education institutions in the United States. In contrast, in 2012–2013 there were more than 819,000 international students in U.S. higher education institutions, nearly two-thirds of them at doctorate-granting universities. In the early post–World War II years, the presence of foreign students was regarded as a form of international cultural exchange. Today, especially in STEM fields, foreign graduate students and postdocs make up a large and increasingly essential element of U.S. higher education. According to recent (2010) data from the National Science Foundation, over 70% of full-time graduate students (master’s and Ph.D.) in electrical engineering and 63% in computer science in U.S. universities are international students. In addition, non-U.S. citizens (not including legal permanent residents) make up a majority of graduate students nationwide in chemical, materials, and mechanical engineering.

It might be argued that 214(b) serves its intended purpose insofar as it prevents prospective immigrants from using student visas as a “back door” for entering the United States (that is, when permanent immigrant status is the main, but unstated, purpose of seeking a student visa). The problem, however, is the dilemma it creates for legitimate students, who must demonstrate the intent to return home despite a real and understandable uncertainty about their future plans.

Interestingly, despite the obstacles that the U.S. immigration system poses, many students, especially those who complete a Ph.D. in a STEM field, do manage to remain in the country legally after finishing their degrees. This is possible because employment-based visa categories are often available to them and permanent residence, if they qualify, is also a viable option. The regulations allow F-1 visa holders a 60-day grace period after graduation. In addition, graduating students may receive a one-year extension for what is termed Optional Practical Training (OPT), so long as they obtain a job, which may be a paying position or an unpaid internship. Those who receive a bachelor’s, master’s, or doctorate in a STEM field at a U.S. institution may be granted a one-time 17-month extension of their OPT status if they remain employed.

While on F-1 OPT status, an individual may change status to an H-1B (temporary worker) visa. Unlike the F-1 visa, the H-1B visa does allow for dual intent. This means that the holder of an H-1B visa may apply for permanent resident status—that is, a green card—if highly qualified. This path from student status to a green card, circuitous though it may be, is evidently a popular one, especially among those who receive doctorates, as is shown by the data on “stay rates” for foreign doctorate recipients from U.S. universities.

Michael G. Finn of the Oak Ridge Institute for Science and Education has long tracked stay rates of foreign citizens who receive STEM doctorates in the United States. His 2009 report (the most recent available) indicates that of 9,223 foreign nationals who received science and engineering doctorates at U.S. universities in 1999, two-thirds were still in the United States 10 years later. Indeed, among those whose degrees were in physical and life sciences, the proportion remaining in the United States was about three-quarters.

Reform of 214(b) poses something of a dilemma. Although State Department officials understandably prefer not to discuss it in these terms, they evidently value the broad discretion it provides consular officers to exclude individuals who they suspect, based on their application or demeanor, pose a serious risk of absconding and/or overstaying their visa, but without having to provide specific reasons. One might argue that it is important to give consular officers such discretion, since they are, in most cases, the only officials from either the federal government or the relevant academic institution who actually meet the applicant face-to-face.

On the other hand, 214(b) may also serve to deter many otherwise well-qualified potential students from applying, especially those from developing nations, who could become valuable assets for the United States or their home countries with a U.S. STEM education.

What is needed is a more flexible policy that provides the opportunity for qualified international students who graduate with bachelor’s, master’s, or Ph.D. STEM degrees to remain in the United States if they choose to do so, without allowing the student visa to become an easy way to subvert regulations on permanent immigration. It makes no sense to try to draw such a distinction by denying that someone applying to study in the United States may be uncertain about his or her plans four (or more) years later.

Because 214(b) is part of the Immigration and Nationality Act, this problem requires a legislative fix. The immigration reform bill that passed the Senate in June 2013 (S.744) contains a provision that would allow dual intent for nonimmigrant students seeking bachelor’s or graduate degrees. [The provision applies to students in all fields, not just STEM fields. A related bill under consideration in the House of Representatives (H.R.2131) provides dual intent only for STEM students. However, no action has been taken on it to date.] Some version of this approach, which provides for discretion on the part of the consular officer without forcing the student visa applicant to make a choice that he or she is not really capable of making, is a more rational way to deal with this difficult problem.

Speed up the Visas Mantis clearance process and make it more transparent. A major irritant in the visa and immigration system for scientists, engineers, and STEM students over the past decade has been the delays in visa processing for some applicants. A key reason for these delays is the security review process known as Visas Mantis, which the federal government put in place in 1998 and which applies to all categories of nonimmigrant visas. Although reforms over the past several years have eased the situation, additional reforms could further improve the process.

Initially intended to prevent transfers of sensitive technologies to hostile nations or groups, Visas Mantis was used at first in a relatively small number of cases. It gained new prominence, however, in the wake of 9/11 and the heightened concern over terrorism and homeland security that followed. The number of visa applicants in scientific and engineering fields subject to Mantis reviews took a sudden jump in 2002 and 2003, causing a logjam of applications and no end of headaches for the science, engineering, and higher education communities. The number of Mantis reviews leapt from 1,000 cases per year in 2000 to 14,000 in 2002 and an estimated 20,000 in 2003. The State Department and the other federal agencies involved were generally unprepared for the increased workload and were slow to expand their processing capacity. The result was a huge backlog of visa applications and lengthy delays for many foreign students and scientists and engineers seeking to come to the United States. The situation has improved since then, although there have been occasional slowdowns, most likely resulting from variations in workload or staffing issues.

The Mantis process is triggered when a consular officer believes that an applicant might not be eligible for a visa for reasons related to security. If the consular officer determines that security concerns exist, he or she then requests a “security advisory opinion” (SAO), a process coordinated through an office in the State Department in which a number of federal agencies review the application. (The federal government does not provide the names of the agencies involved in an SAO, but the MIT International Scholars Office lists the FBI, CIA, Drug Enforcement Administration, Department of Commerce, Office of Foreign Assets Control, the State Department Bureau of International Security and Nonproliferation, and others, which seems like a plausible list.) Consideration of the application is held up pending approval by all of the agencies. The applicant is not informed of the details of the process, only that the application is undergoing “administrative processing.”

In most cases, the decision to refer an application for an SAO is not mandatory but is a matter of judgment on the part of the consular officer. Because most consular officers do not have scientific or technical training, they generally refer to the State Department’s Technology Alert List (TAL) to determine whether an application raises security concerns. The current TAL is classified, but the 2002 version is believed to be similar and is widely available on the Internet. It contains such obviously sensitive areas as nuclear technology and ballistic missile systems, as well as “dual-use” areas such as fermentation technology and pharmacology, the applications of which are generally regarded as benign but can also raise security concerns. According to the department’s Foreign Affairs Manual, “Officers are not expected to be versed in all the fields on the list. Rather, [they] should shoot for familiarization and listen for key words or phrases from the list in applicants’ answers to interview questions.” It is also suggested that the officers consult with the Defense and Homeland Security attachés at their station. The manual notes that an SAO “is mandatory in all cases of applicants bearing passports of or employed by states designated as state sponsors of terrorism” (currently Cuba, Iran, Sudan, and Syria) engaged in commercial or academic activities in one of the fields included in the TAL. As an aside, it is worth noting that although there are few if any students from Cuba, Sudan, and Syria in the United States, Iran is 15th among countries of origin of international students, ahead of such countries as France, Spain, and Indonesia, and a majority of Iranian students (55%) are majoring in engineering fields.

In the near-term aftermath of 9/11, there were months when the average time to clear a Mantis SAO reached nearly 80 days. Within a year, however, it had declined to less than 21 days, and more recently the average processing time has been two to three weeks, even though, according to State Department data, the percentage of F-1, J-1, and H-1B applications subject to Mantis SAO processing reached 10% in 2010. Nevertheless, cases in which visas are reported to be in “administrative processing” for several months or even longer are not uncommon. In fact, the State Department tells applicants to wait at least 60 days from the date of their interview or submission of supplementary documents before inquiring about the status of an application under administrative processing.

In most cases, Mantis clearances for students traveling under F visas are valid for the length of their educational programs up to four years, as long as they do not change programs. However, students from certain countries (e.g., Iran) require new clearances whenever they leave the United States and seek to reenter. Visas Mantis clearances for students and exchange visitors under J visas and temporary workers under H visas are valid for up to two years, unless the nature of their activity in the United States changes. And B visa clearances are good for a year with similar restrictions.

The lack of technical expertise among consular officers is a concern often expressed among scientists who deal with visa and immigration issues. The fact that most such officers are limited in their ability to make independent judgments (for example, on the need for a Mantis review of a researcher applying for a J-1 exchange visitor visa) may well increase the cost of processing the visa as well as lead to unnecessary delays. The National Academy of Sciences report Beyond Fortress America, released in 2009, suggested that the State Department “include expert vouching by qualified U.S. scientists in the non-immigrant visa process for well-known scholars and researchers.” This idea, attractive as it sounds to the science community, seems unlikely to be acceptable to the State Department. Although “qualified U.S. scientists” could attest to the scientific qualifications and reputations of the applicants, they would not be able to make informed judgments on potential security risks and therefore could not substitute for Mantis reviews.

An alternative that might be more acceptable would be to use scientifically trained staff within the State Department—for example, current and former American Association for the Advancement of Science (AAAS) Science and Technology Policy Fellows or Jefferson Science Fellows sponsored by the National Academies—as advisers to consular officers. Since 1980, AAAS has placed over 250 Ph.D. scientists and engineers from a wide range of backgrounds in the State Department as S&T Policy Fellows. Over 100 are still working there. In the 2013–2014 fellowship year, there were 31. In addition, there were 13 Jefferson Science Fellows—tenured senior faculty in science, engineering, or medicine—at the State Department or the Agency for International Development, a number that has grown steadily each year since the program was started in 2004. These highly qualified individuals, a few of whom are already stationed at embassies and consulates, should be available on an occasional basis to augment consular officers’ resources. They, and other Foreign Service Officers with technical backgrounds, would be especially useful in countries that send large numbers of STEM students and visitors to the United States, such as China, India, and South Korea.

Measures that enhance the capacity of the State Department to make technical judgments could be implemented administratively, without the need for legislative action. A policy that would limit the time available for the agencies involved in an SAO to review an application could also be helpful. Improving the transparency of the Mantis process poses a dilemma. If a visa applicant poses a potential security risk, the government can hardly be expected to inform the applicant about the details of the review process. Nevertheless, since the vast majority of Mantis reviews result in clearing the applicant, it might be beneficial to both the applicant and the government to provide periodic updates on the status of the review without providing details, making the process at least seem a little less Kafkaesque.

Allow scientists and scholars to apply to renew their visas in the United States. Many students, scholars, and scientists are in the United States on long-term programs of study, research, or teaching that may keep them in the country beyond the period of validity of their visas. Although U.S. Citizenship and Immigration Services (USCIS) can extend immigration status as necessary to cover these programs, an extension of status approved by USCIS is not the same thing as a valid visa that permits international travel. Often, because they need to attend international conferences, attend to personal business, or simply visit family, students and scholars find that they have temporarily departed the United States but cannot return without lengthy delays while a visa renewal is processed abroad. Because consular sections may be reluctant to approve visa applications from those outside their home country, applicants are frequently asked to travel from a third country back to their country of origin for visa processing, resulting in even greater expense and delay.

Until June 2004, the Department of State allowed many holders of E, H-1B, L, and O visas to apply for visa renewal by mail. This program was discontinued in the wake of 9/11 because of a mixture of concerns over security, resource availability, and the implementation of the then-new biometric visa program. Now, however, every nonimmigrant visa holder in the United States has already had electronic fingerprints collected as part of their visa record. Security screening measures have been greatly improved in the past decade. In addition, the Omnibus Spending Bill passed in early 2014 included language directing the State Department to implement a pilot program for the use of videoconferencing technology to conduct visa interviews. The time is right not only to reinstitute the practice of allowing applications for visa renewal inside the United States for those categories previously allowed, but also to expand the pool of those eligible for domestic renewal to include F-1 students and J-1 academic exchange visitors.

Reform the H-1B visa to distinguish R&D scientists and engineers from IT outsourcers. Discussion of scientists, engineers, and STEM students has received relatively little attention in the current debate on immigration policy, with one significant exception: the H-1B visa category. This category covers temporary workers in specialty occupations, including scientists and engineers in R&D (as well as, interestingly enough, fashion models of “distinguished merit and ability”). An H-1B visa is valid for three years, extendable for another three. The program is capped at 65,000 each fiscal year, but an additional 20,000 foreign nationals with advanced degrees from U.S. universities are exempt from this ceiling, and all H-1B visa holders who work at universities and university- and government-affiliated nonprofits, including national laboratories, are also exempt.

Controversy has swirled about the H-1B program for the past several years as advocates of the program, citing shortages of domestic talent in several fields, have sought to expand it, while critics, denying the existence of shortages, express concern instead about unemployment and underemployment among domestically trained technical personnel and have fought expansion. Moreover, although the H-1B visa is often discussed as if it were a means of strengthening U.S. innovation by bringing more scientists and engineers to the United States or retaining foreign scientists and engineers who have gained a degree in this country, the program increasingly seems to serve a rather different purpose. Currently, the overwhelming majority of H-1B recipients work in computer programming, software, and IT. In fact, the top H-1B visa job title submitted by U.S. employers in fiscal 2013 was programmer analyst, followed by software engineer, computer programmer, and systems analyst. At least 21 of the top 50 job titles were in the fields of computer programming, software development, and related areas. The top three company sponsors of H-1B visa recipients were IT firms (Infosys Limited, Wipro, and Tata Consultancy Services, all based in India) as were a majority of the top 25. Many of these firms provide outsourcing of IT capabilities to U.S. firms with foreign (mainly Indian) staff working under H-1Bs. This practice has come under increasing scrutiny recently as the largest H-1B sponsor, Infosys, paid a record $34 million to settle claims of visa abuse brought by the federal government. Visa abuse aside, it is difficult to see how these firms and the H-1B recipients they sponsor contribute to strengthening innovation in the United States.

Reform of the H-1B program has been proposed for years, and although little action has been taken so far, this may change soon as the program is under active discussion as part of the current immigration debate. Modifications included in the Senate bill (S.744) would affect several important provisions of the program. The annual cap on H-1B visas would be increased from 65,000 to a minimum of 115,000, which could be raised to 180,000. The exemption for advanced degree graduates would be increased from 20,000 to 25,000 and would be limited to STEM graduates only. Even more important, the bill would create a new merit-based point system for awarding permanent residency permits (green cards). Under it, applicants would receive points for education, the number increasing from bachelor’s to doctoral degrees. Although there would be a quota for these green cards, advanced degree recipients from U.S. universities would be exempt, provided the recipient received his or her degree from an institution with a Carnegie classification of “very high” or “high” research activity, has an employment offer from a U.S. employer, and received the degree no more than five years before applying. This would be tantamount to “stapling a green card to the diploma”—terminology suggested by some advocates—and would bypass the H-1B program entirely.

The Senate bill retains the exemption of visa holders who work at universities and university- and government-affiliated nonprofits from the H-1B cap. Expanding this exemption to include all Ph.D. scientists and engineers engaged in R&D is also worth considering, although it does not appear to be part of either the Senate or the House bills. This would put Ph.D. researchers and their employers in a separate class from the firms that use the program for outsourcing of IT personnel. It would remove the issues relating to H-1B scientists and engineers from the debate over outsourcing and allow them to be discussed on their own merits—namely, their contribution to strengthening R&D and innovation in the United States.

Expand the Visa Waiver Program to additional countries. The Visa Waiver Program (VWP) allows citizens of a limited number of countries (currently 37) to travel to the United States for certain purposes without visas. Although it does not apply to students and exchange visitors under F and J visas, it does include scientists and engineers attending conferences and conventions who would otherwise travel under a B visa, as well as individuals participating in short-term training (less than 90 days) and consulting with business associates.

There is little doubt that the ability to travel without going through the visa process—application, interview, security check—greatly facilitates a visit to the United States for those eligible. The eligible countries include mainly the European Union nations plus Australia, New Zealand, South Korea, Singapore, and Taiwan. Advocates of reforming visa policy make a convincing argument that expanding the program to other countries would increase U.S. security. Edward Alden and Liam Schwartz of the Council on Foreign Relations suggest just that in a 2012 paper on modernizing the U.S. visa system. They note that travelers under the VWP are still subject to the Electronic System for Travel Authorization (ESTA), a security screening system that vets individuals planning to come to the United States with the same intelligence information that is used in visa screening. Security would be enhanced rather than diminished by expanding the VWP, they argue, because governments of the countries that participate in the program are required to share security and criminal intelligence information with the U.S. government.

Visa-free travel to conferences and for short-term professional visits by scientific and engineering researchers from the 37 countries in the VWP makes collaboration with U.S. colleagues much easier than it would otherwise be. And it would undoubtedly be welcomed by those in countries that are likely candidates for admission to the program. Complicating matters, however, is legislation that requires the Department of Homeland Security (DHS) to implement a biometric exit system (i.e., one based on taking fingerprints of visitors as they leave the country and matching them with those taken on entry) before it can expand the VWP. The federal government currently has a “biographic” system that matches names on outbound manifests provided by the airlines with passport information obtained by U.S. Customs and Border Protection on a person’s entry. A biometric exit system would provide enhanced security, but the several-billion-dollar cost and the logistics of implementing a control system pose formidable barriers. Congress and the Executive Branch have engaged in a tug of war over the planning and development of such a system for over a decade. (The Intelligence Reform and Terrorism Prevention Act of 2004 called for DHS to develop plans for accelerating implementation of such a system, but the department has missed several deadlines and stated in mid-2013 that it was intending to incorporate these plans in its budget for fiscal year 2016.) Should DHS get to the point of actually implementing a biometric exit system, it could pave the way for expanding the VWP. In the meantime, a better solution would be to decouple the two initiatives. S.744 does just that by authorizing the Secretary of Homeland Security to designate any country as a member of the VWP so long as it meets certain conditions. Expansion of the VWP is also included in the House immigration reform bill known as the JOLT Act. These are hopeful signs, although the comprehensive immigration reform logjam continues to block further action.

Action in several other areas can also help to improve the visa process. The federal government, for example, can encourage consulates to use their recently expanded authority to waive personal interviews. In response to an executive order issued by President Obama in January 2012, the State Department initiated a two-year visa interview waiver pilot program. Under the program, visa-processing posts in 28 countries were authorized to waive interviews with certain visa applicants, especially repeat visitors in a number of visa classes. Brazil and China, which have large numbers of visa applicants, were among the initial countries involved in this experimental program. U.S. consulates in India joined the program a few months later. The initiative was welcomed in these countries and regarded as successful by the Departments of State and Homeland Security. The program was made permanent in January 2014. Currently, consular officers can waive interviews for applicants for renewal of any nonimmigrant visa as long as they are applying for a visa in the same classification within 12 months of the expiration of the initial visa (48 months in some visa classes).

Although the interview waiver program was not specifically aimed at scientists, and statistics regarding their participation in the program are not available, it seems likely that they were and will continue to be among the beneficiaries now that the program has been made permanent. The initiative employs a risk-based approach, focusing more attention on individuals who are judged to be high-risk travelers and less on low-risk persons. Since it allows for considerable discretion on the part of the consulate, its ultimate value to the scientific and educational communities will depend on how that discretion is used.

The government can also step up its efforts to increase visa-processing capacity. In response to the 2012 executive order, the State Department and DHS launched an initiative to increase visa-processing capacity in high-demand countries and reduce interview wait times. In a report issued in August 2012 on progress during the first 180 days of activity under the initiative, the two agencies projected that by the end of 2012, “State will have created 50 new visa adjudicator positions in China and 60 in Brazil.” Furthermore, the State Department deployed 220 consular officers to Brazil on temporary duty and 48 to China. The consulates also increased working hours, and in Brazil they remained open on occasional Saturdays and holidays. These moves resulted in sharp decreases in processing time.

These initiatives have been bright spots in an otherwise difficult budget environment for the State Department. That budget environment, exacerbated by sequestration, increases the difficulty of making these gains permanent and extending them to consular posts in other countries with high visa demand. This is a relatively easy area to neglect, but one in which modest investments, especially in personnel and training, could significantly improve the face that the United States presents to the world, including the global scientific, engineering, and educational communities.

Looking at U.S. universities and laboratories today, one might well ask whether there really is a problem with the nation’s visa and immigration policies. After all, the diversity of nationalities among scientists, engineers, and students in U.S. scientific institutions is striking. At the National Institutes of Health, over 60% of the approximately 4,000 postdocs are neither U.S. citizens nor permanent residents. They come from China, India, Korea, and Japan, as well as Europe and many other countries around the world. The Massachusetts Institute of Technology had over 3,100 international students in 2013, about 85% of them graduate students, representing some 90 countries. The numbers are similar at Stanford, Berkeley, and other top research universities.

So how serious are the obstacles for international scientists and students who really want to come to the United States? Does the system really need to be streamlined? How urgent are the fixes that I have proposed here?

The answers to these questions lie not in the present and within the United States, but in the future and in the initiatives of the nations with which we compete and cooperate. Whereas the U.S. system creates barriers, other countries, many with R&D expenditures rising much more rapidly than in the United States, are creating incentives to attract talented scientists to their universities and laboratories. China, India, Korea, and other countries with substantial scientific diasporas have developed programs to encourage engagement with their expatriate scientists and potentially draw them back home.

In the long run, the reputations of U.S. institutions alone will not be sufficient to maintain the nation’s current advantage. The decline in enrollments among international students after 9/11 shows how visa delays and immigration restrictions can affect students and researchers. As long as the United States continues to make international travel difficult for promising young scholars such as Alena Shkumatava, it is handicapping the future of U.S. science and the participation of U.S. researchers in international collaborations. Streamlining visa and immigration policies can make a vital contribution to ensuring the continued preeminence of U.S. science and technology in a globalized world. We should not allow that preeminence to be held hostage to the nation’s inability to enact comprehensive immigration reform.

Profiteering or pragmatism?

Windfall: The Booming Business of Global Warming

by McKenzie Funk. New York, NY: Penguin Press, 2014, 310 pp.

Jason Lloyd

In the epilogue of his book, Windfall: The Booming Business of Global Warming, McKenzie Funk finally outlines an argument that had thrummed away in the background of the preceding twelve chapters, present but muffled under globe-trotting reportage and profiles of men seeking profit on a warming planet. It’s not a groundbreaking argument, but it provides a sense of Funk’s framing. “The hardest truth about climate change is that it is not equally bad for everyone,” he writes. “Some people—the rich, the northern—will find ways to thrive while others cannot … The imbalance between rich and north and poor and south—inherited from history and geography, accelerated by warming—is becoming even more entrenched.”

The phrasing here confuses an important distinction. Is it climate change that is exacerbating global inequities? Or is it our response to climate change? To varying degrees it is both, of course, but differentiating them is necessary because our response will have significantly greater consequences for vulnerable populations than climate change itself. Funk largely conflates the two because he views climate change and global inequalities as stemming from the same source: “The people most responsible for historic greenhouse gas emissions are also the most likely to succeed in this new reality and the least likely to feel a mortal threat from continued warming.” This is as facile a perspective as the claim on the previous page that climate change is “essentially a problem of basic physics: Add carbon, get heat.”

The problem is not that these statements are untrue. It’s that they are so simplistic that they obscure any effective way to deal with the enormous complexity of climate change and inequality. To be fair, Funk notes elsewhere that how we respond to climate change may magnify existing power and economic imbalances. But he means the response that is the subject of Windfall: people in affluent countries discovering opportunities to profit off the impacts of climate change. It does not seem to have occurred to him that the conventional climate strategy—to mitigate rather than adapt, to minimize energy consumption rather than innovate, to inhibit fossil fuel use in even the poorest countries—may entrench global inequalities much more effectively than petroleum exploration in the Arctic or genetically modified mosquitos.

It is tempting to agree with Funk’s framing. There is “a more perfect moral clarity” in the idea that the rich world must cease carbon dioxide emissions for the good of all or risk an environmental disaster that will burden the poor the most, and that those seeking financial gain in this man-made catastrophe are simply profiteers. But it is a clarity premised on an unfounded faith in our current ability to radically cut carbon emissions, and it ignores some destabilizing questions: Where does China fall in this schema, for example? What about Greenland, which, as Funk notes, stands to gain hugely from climate change without having contributed anything to the problem? Don’t drought-resistant crops with higher, more predictable yields benefit both seed companies and poor farmers?

Funk elides these questions and stresses that he is not identifying bad guys but illuminating “the landscape in which they live,” by which he means a global society consumed by “techno-lust and hyper-individualism, conflation of growth with progress, [and] unflagging faith in unfettered markets.” If this is what he sees when he looks out at the global landscape, he is using an extraordinarily narrow beam for illumination.

The people that Funk spotlights in this landscape are the hedge funders, entrepreneurs, and other businessmen (apparently no women are profiting from climate change) who are finding ways to thrive on the real, perceived, and anticipated effects of global warming. These effects are divided into three categories: melt, drought, and deluge. There are upsides—for some, at least—to all three. A melting Arctic means previously inaccessible mineral and petroleum deposits become exploitable, and newly ice-free shipping lanes benefit global trade. Drought offers opportunities for investing in water rights, water being a commodity that will likely increase in price as it becomes scarcer in places like the U.S. West and Australia. And rising sea levels allow Dutch engineers to sell their expertise in water management to low-lying communities worldwide.

The issues raised in the two best chapters, about private firefighting services in California and an investor’s purchase of thousands of acres of farmland in newly independent South Sudan, are not new and arguably have less to do with climate change than with social and economic dynamics. But these chapters stand out because of the men profiled in them. Funk has a terrific eye for the vanities of a certain type of person: the good old boy who believes himself a straight-talker, rejecting social niceties and political correctness to tell it how it is, but is mostly full of hot air, pettiness, and self-interest.

The wasabi-pea-munching Chief Sam DiGiovanna, for example, leads a team of for-profit firefighters employed by insurance giant AIG to protect homes from forest fires. He calls media outlets to see if they’d like to interview him on his way to fight fires in affluent neighborhoods in the San Fernando Valley. (Their protection efforts are mostly useless, as it turns out, because of a combination of incompetence on the part of his Oregon-based dispatchers and the effectiveness of public firefighters.) It is genuinely appalling to read that because Chief Sam’s team mimics public firefighters—uniforms, red fire-emblazoned SUVs with sirens, pump trucks—a neighbor of one of their clients mistakenly believes they are in the neighborhood to fight the blaze, not protect individual client homes. As she points out where the team can access the fire, Chief Sam lamely stands around and says that more resources are coming, unwilling to abandon the illusion that they are acting in the public interest.

Funk travels with investor Phil Heilberg to South Sudan to finalize Heilberg’s leasing of a million acres of the country’s farmland, a deal that would make him one of the largest private landholders in Africa. Attempting to acquire the signatures of Sudanese officials in order to legitimate his land deal and pacify investors in the scheme, Heilberg, who compares himself to Ayn Rand’s protagonists and witlessly psychoanalyzes the warlords who keep blowing him off, seems mostly out of his element. He leaves South Sudan amid the chaos of its fight for independence without getting his signatures. Other nations pursuing land deals seem to have had more luck; countries ranging from India to Qatar have leased or purchased vast tracts of farmland in poorer countries.

Fun as it is to watch Funk puncture the petty vanities of these men, mostly by simply quoting them, it is impossible to grasp the bigger picture from these chapters. At one point Funk compares public firefighting to mitigation, or “cutting emissions for the good of all,” and Chief Sam’s private firefighting to adaptation efforts in which “individual cities or countries endeavor to protect their own patches.” (The failure of a mitigation-dominated approach to cutting global emissions goes unmentioned.) A libertarian abandonment of public goods such as firefighting would indeed be calamitous, but we don’t seem to be in any danger of that occurring. If Chief Sam’s outfit is anything more than an apparently ineffectual experiment on the part of insurance companies, Funk does not say what it is.

The same is true of his Wall Street farmland investor. Heilberg appears feckless rather than indicative of some trend of colonizing climate profiteers. Funk illustrates why working with warlords is a bad idea from both a moral and business perspective, but he never articulates what the effect of Heilberg’s farming plan, if successful, would be. Funk ominously notes that private militias had ravaged South Sudan during the civil war of the 1990s, but he doesn’t make the connection to current foreign land purchases. Heilberg, for his part, planned to farm his land and sell crops in Sudan before selling the food abroad. Nor is it obvious what countries like China or Egypt plan to do with the land they have acquired in places such as Sudan and Ethiopia, or how leasing farmland is different from other forms of foreign direct investment.

Furthermore, it’s sometimes difficult to figure out who, exactly, is profiting. Funk devotes half a chapter to Nigeria’s construction of a “Great Green Wall,” a line of trees intended to slow desertification in the country. But desertification results mostly from unsustainable agricultural methods. How climate change may impact the process is unknown, especially since climate models for sub-Saharan Africa are notably variable. Few people seem to think that the green wall will slow the Sahara’s expansion. The profit-generating capacity of a tree-planting scheme dominated by a Japanese spiritual group (one of the weirder details of the project) is left unexplained.

Geoengineering is another example. Although Intellectual Ventures (IV), an investment firm headed by Microsoft entrepreneur, cookbook writer, and alleged patent troll Nathan Myhrvold, may hold patents on speculative geoengineering technologies, how the company could profit from them is not clear. Distasteful as IV’s practices may be, is it necessarily a bad thing that some entities might profit from technologies that allow people to adapt and thrive in a climate-changed world, whether through solar radiation management, improved mosquito control, or better seawalls?

Funk clearly sees this idea and what he calls “techno-fixes” as opportunism and as relinquishing our duty to mitigate climate change through significantly cutting carbon emissions or consumption. Despite peevish asides, such as the observation that the “Gates Foundation has notably spent not a penny on helping the world cut carbon emissions” (quite possibly because emissions reductions have little to do with helping poor people), Funk does not outline what radical emissions reductions would entail.

Presumably, though, an effective approach to lowering carbon emissions requires both the public and private sectors, and private sector involvement means that someone sees an opportunity to profit. The notion that corporations will respond to incentives that erode their bottom lines—or, for that matter, that governments will enact tax or energy policies to the detriment of their citizens—does not correspond to what we have learned from thirty years of failure to adequately address climate change and reduce carbon emissions. The task, then, is to rethink our strategy for transitioning to a low-carbon global society and, as importantly, equitably adapting to an unavoidably warming climate. Where are the opportunities for achieving these goals, and how do we design our strategies to benefit as many people as possible? Stuck in the conventional climate framework, Windfall does not provide any useful answers.

Funk adopts the position that he is unearthing some uncomfortable truths: “Environmental campaigners shy away from the fact that some people will see upsides to climate change.” Environmental campaigners who have chosen to ignore the blindingly obvious may indeed not want to acknowledge that climate change will produce winners and losers. But for everyone else, Funk provides a narrative of familiar villains—Royal Dutch Shell, Monsanto, Wall Street bankers, African warlords, genetically modified organisms. To those firmly entrenched in a particular view of the world, Windfall is the validating story of profit-seekers in the rich world that have brought us to the brink of environmental catastrophe and will now find a way to make money off it. If only it were this straightforward.

It is not just the rapaciousness of corporations, the selfish behavior of billions of unthinking consumers, or even the resource-intensive economies of what neo-Marxists always optimistically call “late capitalism” that is ushering in the Anthropocene. Climate change results from the fact that every facet of modern life—the necessities and comforts the vast majority of us enjoy, demand, or aspire to—contributes to the emissions that are warming the planet. If we are going to manage this condition in a pragmatic and ethical way, it will take a great deal of imagination to find the opportunities that climate change presents, including financial opportunities, for making the world a more prosperous, more resilient, and more equitable place.

Jason Lloyd ([email protected]) is a project coordinator at Arizona State University’s Consortium for Science, Policy, and Outcomes in Washington, DC.

An Archaeology of Knowledge

In recent years, much thought and research have been devoted to the visualization of information and “big data.” This has fostered more interactions with artists in an attempt to uncover innovative and creative ways of presenting and interpreting complex information.

How meaning and knowledge are structured and how they are communicated through objects have long interested artist Mark Dion. In Dion’s installations, everyday objects and artifacts are elevated to an iconic status by association with other objects. Visual meaning is established in much the same way that a natural history collection might reveal information about the specimens it contains. Indeed, Dion’s work harks back to the seventeenth-century “cabinet of curiosities,” where objects were collected to consider their meaning in relationship to other artifacts.

An Archaeology of Knowledge, a permanent art installation for the Brody Learning Commons, the Sheridan Libraries & University Museums, The Johns Hopkins University.

Below is a selection of artifacts from the cases and drawers: (a) “Big Road to Lake Ahmic,” 1921, etching on the underside of a tree fungus by Max Brödel (1870–1941), first director of the Department of Art as Applied to Medicine; (b) Glaucoma demonstration model, ca. 1970s; (c) Nurse dolls, undated; (d) Medical field kit, twentieth century; (e) Dog’s skull, undated; (f) Collection of miniature books, sixteenth to twentieth centuries. Photo by John Dean.

In 2011, Johns Hopkins University (JHU) commissioned Dion to create an installation for its Brody Learning Commons, Sheridan Libraries, and university museums in Baltimore that would convey the institution’s diverse and expansive history. The installation featured here, titled An Archaeology of Knowledge, sought to document and communicate information regarding hundreds of historic artifacts, works of art, and scientific instruments from across the collections and units of JHU and Johns Hopkins Medical Institutions. Elizabeth Rodini, director of the Program in Museums and Society at JHU, wrote that this work “reveals the layers of meaning embedded in an academic culture…. Although some of us … work regularly with objects, even we often fail to consider how these objects are accumulated and brought into meaningful assemblages.”

Mark Dion, concept drawing for An Archaeology of Knowledge, 2011. Courtesy Mark Dion Studio, New York, NY.

Some of the artifacts were gathered from intentional collections and archives from across JHU’s disciplinary divisions. Others were found by the artist, curator Jackie O’Regan, and other collaborators through an extensive search of storage vaults, attics, broom closets, and basements, as well as through encounters with individuals on campus who collected and even hoarded the “stuff” of knowledge that makes up the material fabric of JHU. Even the cabinets themselves were a part of JHU history, repurposed from the Roseman Laboratory.

Dion writes, “This artwork hearkens back to the infancy of our culture’s collaborations across the arts and sciences, as each artifact takes on a more poetic, subjective, and perhaps allegorical meaning, all the while maintaining its original status as a tool for learning…. An Archaeology of Knowledge provides us with an awesome, expansive visual impression that evokes wonder, stimulates curiosity, and produces knowledge through a direct and variegated encounter with the physical world.” Dion’s work reminds us of the power of objects to convey meaning and to preserve history.

Below is a selection of artifacts from the cases and drawers, including: an early X-ray tube, a sixteenth-century Mesoamerican stone face plaque, a first-century Roman pedestal with inscription, early-twentieth-century lacrosse balls, an anesthesia kit, and assorted pressure gauges and light bulbs.

Below: pipet bulbs, diagnostic eyeglasses, an early-twentieth-century X-ray tube, and a late-nineteenth-century egg collection.

Drawer photos by John Dean.

A selection of artifacts from the cases and drawers, including: (below) a late-eighteenth-century English linen press, an early-twentieth-century practice clavier, and an 1832 portrait of “Mrs. Samuel Hopkins” by Alfred Jacob Miller that was commissioned by her son Johns Hopkins.

Below: various trophies and awards.

Mark Dion has exhibited his artwork internationally including at the Tate Gallery, London, and the Museum of Modern Art, New York. He is featured in the PBS series Art: 21. He teaches in the Visual Arts Department of Columbia University.

—JD Talasek

Archives – Fall 2014

Ruben Nieto, Thor Came Down to Help Mighty Mouse Against Magneto. Oil on canvas, 32 x 48 inches, 2013.

Artist Ruben Nieto grew up in Veracruz, Mexico, reading U.S. comic books, a memory that plays an important role in his creative process today. His paintings recast the formal visual elements of comic books with a strong influence of Abstract Expressionism and Pop Art. Using computer software to transform and alter the structure of the original comic book drawings, Nieto proceeds to make oil paintings based on the new decontextualized imagery. In his own words, “Forms and shapes coincide and drift on planes of varying depth, resulting in comic abstractions with a contemporary ‘pop’ look.”

Nieto received his Master of Fine Arts degree in Arts and Technology from the University of Texas at Dallas in 2008 and has since exhibited throughout the United States and Mexico.

Image courtesy of the artist.