How Hurricane Sandy Tamed the Bureaucracy

ADAM PARRIS

A practical story of making science useful for society, with lessons destined to grow in importance.

Remember Hurricane Irene? It pushed across New England in August 2011, leaving a trail of at least 45 deaths and $7 billion in damages. But just over a year later, even before the last rural bridge had been rebuilt, Hurricane Sandy plowed into the New Jersey–New York coast, grabbing the national spotlight with its even greater toll of death and destruction. And once again, the region—and the nation—swung into rebuild mode.

Certainly, some rebuilding after such storms will always be necessary. However, this one-two punch underscored a pervasive and corrosive aspect of our society: We have rarely taken the time to reflect on how best to rebuild developed areas before the next crisis occurs, instead committing to a disaster-by-disaster approach to rebuilding.

Yet Sandy seems to have been enough of a shock to stimulate some creative thinking at both the federal and regional levels about how to break the cycle of response and recovery that developed communities have adopted as their default survival strategy. I have witnessed this firsthand as part of a team that designed a decision tool called the Sea Level Rise Tool for Sandy Recovery, to support not just recovery from Sandy but preparedness for future events. The story that has emerged from this experience may contain some useful lessons about how science and research can best support important social decisions about our built environment. Such lessons are likely to grow in importance as predicted climate change makes future extreme weather events all but inevitable.

A story of cooperation

In the wake of Sandy, pressure mounted at all levels, from local to federal, to address one question: How would we rebuild? This question obviously has many dimensions, but one policy context cuts across them all. The National Flood Insurance Program provides information on flood risk that developers, property owners, and city and state governments are required to use in determining how to build and rebuild. Run by the Federal Emergency Management Agency (FEMA), the program provides information on the height of floodwaters, known as flood elevations, that can be used to delineate on a map where it is more or less risky to build. Flood elevations are calculated based on analysis of how water moves over land during storms of varying intensity, essentially comparing the expected elevation of the water surface to that of dry land. FEMA then uses this information to create flood insurance rate maps, and insurers use the maps to determine the cost of insurance in flood-prone areas. The cost of insurance and the risk of flooding are major factors for individuals and communities in determining how high to build structures and where to locate them to avoid serious damage during floods.
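
To make that comparison concrete, here is a minimal sketch, in Python, of the logic described above: a location whose ground elevation sits below the flood elevation falls within the mapped flood zone, and the difference approximates how deep the water could get there. The function and numbers are illustrative only, not FEMA’s actual mapping procedure.

```python
# Illustrative sketch only -- not FEMA's actual mapping methodology.
# A parcel whose ground elevation sits below the flood elevation is in
# the mapped flood zone; the difference approximates expected flood depth.

def flood_exposure(ground_elevation_ft: float, flood_elevation_ft: float) -> dict:
    """Compare land-surface elevation to a flood elevation at one location."""
    depth_ft = flood_elevation_ft - ground_elevation_ft
    return {
        "in_flood_zone": depth_ft > 0,
        "expected_flood_depth_ft": max(depth_ft, 0.0),
    }

# Example: a lot 8 feet above sea level where the flood elevation is 11 feet
print(flood_exposure(ground_elevation_ft=8.0, flood_elevation_ft=11.0))
# -> {'in_flood_zone': True, 'expected_flood_depth_ft': 3.0}
```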

But here’s the challenge that our team faced after Sandy. The flood insurance program provided information on flood risk based only on conditions in past events, and not on conditions that may occur tomorrow. Yet coastlines are dynamic. Beaches, wetlands, and barrier islands all change in response to waves and tides. These natural features shift, even as the seawalls and levees that society builds to keep communities safe are designed to stay in place. In fact, seawalls and levees add to the complexity of the coastal environment and lead to new and different changes in coastal features. The U.S. Army Corps of Engineers implements major capital works, including flood protection and beach nourishment, to manage these dynamic features. The National Oceanic and Atmospheric Administration (NOAA) helps communities manage the coastal zone to preserve the amenities we have come to value on the coast: commerce, transportation, recreation, and healthy ecosystems, among others. And both agencies have long been doing research on another major factor of change for coastlines around the world: sea-level rise.

Any amount of sea-level rise, even an inch or two, increases the elevation of floodwaters for a given storm. Estimates of future sea-level rise are therefore a critical area of research. As Sandy approached, experts from NOAA and the Army Corps, other federal agencies, and several universities were completing a report synthesizing the state of the science on historic and future sea-level rise. The report, produced as part of a periodic updating of the National Climate Assessment, identified scenarios (plausible estimates) of global sea-level rise by the end of this century. Coupled with the best available flood elevations, the sea-level rise scenarios could help those responsible for planning and developing in coastal communities factor future risks into their decisions. This scenario-planning approach underscores a very practical element of risk management: If there’s a strong possibility of additional risk in the future, factor that into decisions today.

Few people would argue with taking steps to avoid future risk. But making this happen is not as easy as it sounds. FEMA has to gradually incorporate future flood risk information into the regulatory program even as the agency modernizes existing flood elevations and maps. The program dates back to 1968, and much of the information on flood elevations is well over 10 years old. We now have newer information on past events, more precise measurements on the elevation of land surfaces, and better understanding of how to model and map the behavior of floodwaters. We also have new technologies for providing the information via the Internet in a more visually compelling and user-specific manner. Flood elevations and flood insurance rate maps have to be updated for thousands of communities across the nation. When events like Sandy happen, FEMA issues “advisory” flood elevations to provide updated and improved information to the affected areas even if the regulatory maps are not finalized. However, neither the updated maps nor the advisory elevations have traditionally incorporated sea-level rise.

Only in 2012 did Congress pass legislation—the Biggert-Waters Flood Insurance Reform Act—authorizing FEMA to factor sea-level rise into flood elevations provided by the flood insurance program, so the agency has had little opportunity to accomplish this for most of the nation. Right now, people could be rebuilding structures with substantially more near-term risk of coastal flooding because they are using flood elevations that do not account for sea-level rise.

Of course, reacting to any additional flood risk resulting from higher sea levels might entail the immediate costs of building higher, stronger, or in a different location altogether. But such short-term costs are counterbalanced by the long-term benefits of health and safety and a smaller investment in maintenance, repair, and rebuilding in the wake of a disaster. So how does the federal government provide legitimate science—science that decisionmakers see as credible and reliable—regarding future flood risk to affected communities? And how might it create incentives, financial and otherwise, for adopting additional risk factors that may mean up-front costs in return for major long-term gains?

After Sandy, leaders of government locally and nationally were quick to recognize these challenges. President Barack Obama established a Hurricane Sandy Rebuilding Task Force. Governor Andrew Cuomo of New York established several expert committees to help develop statewide plans for recovery and rebuilding. Governor Chris Christie of New Jersey was quick to encourage higher minimum standards for rebuilding by adding 1 foot to FEMA’s advisory flood elevations. And New York City Mayor Michael Bloomberg created the Special Initiative on Risk and Resilience, connected directly to the city’s long-term planning efforts and to an expert panel on climate change, to build the scientific foundation for local recovery strategies.

The leadership and composition of the groups established by the president and the mayor were particularly notable and distinct from conventional efforts. They brought expertise and an emphasis that focused as strongly on preparedness for a future likely to look different from the present as on responding to the disaster itself. For example, the president’s choice of Shaun Donovan, secretary of the Department of Housing and Urban Development (HUD), to chair the federal task force implicitly signaled a new focus on ensuring that urban systems will be resilient in the face of future risks.

New York City’s efforts have been exemplary in this regard. The organizational details are complex, but there is one especially crucial part of the story that I want to tell. When Mayor Bloomberg created the initiative on risk and resilience, he also reconvened the New York City Panel on Climate Change (known locally as the NPCC), which had first been convened in 2008 to support the formulation of a long-term comprehensive development and sustainability plan, called PlaNYC. All of these efforts, which were connected directly to the Mayor’s Office of Long-term Planning and Sustainability, were meant to be forward-looking and to integrate contributions from experts in planning, science, management, and response.

Tying the response to Sandy to the city’s varied efforts signaled a new approach to post-disaster development that embraced long-term resilience: the capacity to be prepared for an uncertain future. In particular, the NPCC’s role was to ensure that the evolving vulnerabilities presented by climate change would play an integral part in thinking about New York in the post-Sandy era. To this end, in September 2012, the City Council of New York codified the operations of the NPCC into the city’s charter, calling for periodic updates of the climate science information. Of course, science-based groups such as the climate panel should be valuable for communities and decisionmakers thinking about resilience and preparedness, but often they are ignored. Thus, another essential aspect of New York’s approach was that the climate panel was not just a bunch of experts speaking from a pulpit of scientific authority, but it also had members representing local and state government working as full partners.

Within NOAA, there are programs designed to improve decisions on how to build resilience into society, given the complex and uncertain interactions of a changing society and a changing environment. These programs routinely encourage engagement among different scales and sectors of government and resource management. For example, NOAA’s Regional Integrated Sciences and Assessments (RISA) program provides funding for experts to participate in New York’s climate panel to develop risk information that informs both the response to Sandy and the conceptual framework for adaptively managing long-term risk within PlaNYC. Through its Coastal Services Center, NOAA also provides scientific tools and planning support for coastal communities facing real-time challenges. When Sandy occurred, the center offered staff support to FEMA’s field offices, which served as local hubs for emergency management and disaster relief. Such collaboration among the RISA experts, the center staff, and the FEMA field offices fostered social relations that allowed for coordination in developing the Sea Level Rise Tool for Sandy Recovery.

In still other efforts, representatives of the president’s Hurricane Sandy Rebuilding Task Force and the Council on Environmental Quality were working with state and local leaders, including staff from New York City’s risk and resilience initiative. The leaders of the New York initiative were working with representatives of NOAA’s RISA program, as well as with experts on the NPCC who had participated in producing the latest sea-level rise scenarios for the National Climate Assessment. The Army Corps participated in the president’s task force and also contributed to the sea-level rise scenarios report. This complex organizational ecology also helped create a social network among professionals in science, policy, and management charged with building a tool that could deliver the best available science on sea-level rise and coastal flooding to support recovery for the region.

Before moving on to the sea-level rise tool itself, I want to point out important dimensions of this social network and the context that facilitated such complex organizational coordination. Sandy presented a problem that motivated people in various communities of practice to work with each other. We all knew each other, wanted to help recovery efforts, and understood the limitations of the flood insurance program. In the absence of events such as Sandy, it is difficult to find such motivating factors; everyone is busy with his or her day-to-day responsibilities. Disaster drew people out of their daily routines with a common and urgent purpose. Moreover, programs such as RISA have been doing research not just to provide information on current and future risks associated with climate, but also to understand and improve the processes by which scientific research can generate knowledge that is both useful and actually used. Research on integrated problems and management across institutions and sectors is undervalued; how best to organize and manage such research is poorly understood in the federal government. Those working on this problem themselves constitute a growing community of practice.

Communities need to be able to develop long-term planning initiatives, such as New York’s PlaNYC, that are supported by bodies such as the city’s climate change panel. In order to do so, they have to establish networks of experts with whom they can develop, discuss, and jointly produce knowledge that draws on relevant and usable scientific information. But not all communities have the resources of New York City or the political capacity to embrace climate hazards. If the federal government wishes to support other communities in better preparing people for future disasters, it will have to support the appropriate organizational arrangements—especially those that can bridge boundaries between science, planning, and management.

Rising to the challenges

For more than two decades, the scientific evidence has been strong enough to enable estimates of sea-level rise to be factored into planning and management decisions. For example, NOAA maintains water-level stations (often referred to as tide gages) that document sea-level change, and over the past 30 years, 88% of the 128 stations in operation have recorded a rise in sea level. Based on such information, the National Research Council published a report in 1987 estimating that sea level would rise between 0.5 and 1.5 meters by 2100. More recent estimates suggest it could be even higher.

Of course, many coastal communities have long been acutely aware of the gradual encroachment of the sea on beaches and estuaries, and the ways in which hurricanes and tropical storms can remake the coastal landscape. So, why is it so hard to decide on a scientific basis for incorporating future flood risk into coastal management and development?

For one thing, sea-level rise is different from coastal flooding, and the science pertaining to each is evolving somewhat independently. Researchers worldwide are analyzing the different processes that contribute to sea-level rise. They are thinking about, among other things, how the oceans will expand as they absorb heat from the atmosphere; about how quickly ice sheets will melt and disintegrate in response to increasing global temperature, thereby adding volume to the oceans; and about regional and local processes that cause changes in the elevation of the land surface independent of changes in ocean volume. Scientists are experimenting, and they cannot always experiment together. They have to isolate questions about the different components of the Earth system to be able to test different assumptions, and it is not an easy task to put the information back together again. This task of synthesizing knowledge from various disciplines and even within closely related disciplines requires interdisciplinary assessments.

The sea-level rise scenarios that our team used in designing the Sandy tool varied greatly. They derived from the National Climate Assessment, which is prepared for Congress every four years to synthesize and summarize the state of the climate and its impacts on society. The scenarios were based on expert judgments from the scientific literature by a diverse team drawn from the fields of climate science, oceanography, geology, engineering, political science, and coastal management, and representing six federal agencies, four universities, and one local resource management organization. The scenarios report provided a definitive range of 8 inches to 6.6 feet by the end of the century. (One main reason for such different projections is the current inadequate understanding of the rate at which the ice sheets in Greenland and Antarctica are melting and disintegrating in response to increasing air temperature.) The scenarios were aimed at two audiences: regional and local experts who are charged with addressing variations in sea-level change at specific locations, and national policymakers who are reconsidering potential impacts beyond any individual community, city, or even state.

But didn’t the experts’ choice to present such a broad range of sea-level rise estimates simply add to policymakers’ uncertainty about the future? The authors addressed this possible concern by associating risk tolerance—the amount of risk one would be willing to accept for a particular decision—with each scenario. For example, they said that anyone choosing to use the lowest scenario is accepting a lot of risk, because there is a wealth of evidence and agreement among experts that sea-level rise will exceed this estimate by the end of the century unless (and possibly even if) aggressive global emissions reduction measures are taken immediately. On the other hand, they said that anyone choosing to use the highest scenario is using great caution, because there is currently less evidence to support sea-level rise of this magnitude by the end of the century (although it may rise to such levels in the more distant future).

Thus, urban planners may want to consider higher scenarios of sea-level rise, even if they are less likely, because this approach will enable them to analyze and prepare for risks in an uncertain future. High sea-level rise scenarios may even provide additional factors of safety, particularly where the consequences of coastal flood events threaten human health, human safety, or critical infrastructure—or perhaps all three. The most likely answer might not always be the best answer for minimizing, preparing for, or avoiding risk. Framing the scenarios in this fashion helps avoid any misperceptions about exaggerating risk. But more importantly, it supports deliberation in planning and making policy about the basis for setting standards and policies and for designing new projects in the coastal zone. The emphasis shifts to choices about how much or how little risk to accept.

In contrast to the scenarios developed for the National Climate Assessment, the estimates made by the New York City climate panel addressed regional and local variations in sea-level rise and were customized to support design and rebuilding decisions in the city that respond to risks over the next 25 to 45 years. They were developed after Sandy by integrating scientific findings published just the previous year—after the national scenarios report was released. The estimates were created using a combination of 24 state-of-the-art global climate models, observed local data, and expert judgment. Each climate model can be thought of as an experiment that includes different assumptions about global-scale processes in the Earth system (such as changes in the atmosphere). As with the national scenarios report, then, the collection of models provides a range of estimates of sea-level rise that in total convey a sense of the uncertainties. The New York City climate panel held numerous meetings throughout the spring of 2013 to discuss the model projections and to frame its own statements about the implications of the results for future risks to the city arising from sea-level rise (e.g., changes in the frequency of coastal flooding due to sea-level rise). These meetings were attended not only by physical and social scientists but also by decisionmakers facing choices at all stages of the Sandy rebuilding process, from planning to design to engineering and construction.
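
The ensemble logic is simple to illustrate. The sketch below summarizes a set of invented model estimates as a range rather than a single number; the values and the percentile choices are placeholders, not the NPCC’s actual projections or methods.

```python
# Placeholder illustration of the ensemble idea: each model run yields one
# sea-level rise estimate, and the spread across runs is reported as a range
# rather than a single number. These values are invented for illustration,
# not the NPCC's actual projections.
import statistics

model_projections_in = [8, 9, 10, 11, 11, 12, 13, 14, 15, 16, 17, 18,
                        19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 31]  # 24 hypothetical runs

deciles = statistics.quantiles(model_projections_in, n=10)
low, high = deciles[0], deciles[-1]              # roughly the 10th and 90th percentiles
middle = statistics.median(model_projections_in)

print(f"Illustrative projection range: ~{low:.0f} to ~{high:.0f} inches (middle ~{middle:.0f} inches)")
```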

As our team developed the sea-level rise tool, we found minimal difference between the models used by the New York climate panel and the nationally produced scenarios. At most, the extreme national scenarios and the high-end New York projections were separated by 3 inches, and the intermediate scenarios and the mean model values were separated by 2 inches. This discrepancy is well within the limits of accuracy reflected in current knowledge of future sea-level rise. But small discrepancies can make a big difference in planning and policymaking.

New York State regulators evaluating projects proposed by organizations that manage critical infrastructure, such as power plants and wastewater treatment facilities, look to science vetted by the federal government as a basis for approving new or rebuilt infrastructure. Might the discrepancies between the scenarios produced for the National Climate Assessment and the projections made by the NPCC, however small, cause regulators to question the scientific and engineering basis for including future sea-level rise in their project evaluations? Concerned about this prospect, the New York City Mayor’s Office wanted the tool to use only the projections of its own climate panel.

The complications didn’t stop there. In April 2013, HUD Secretary Donovan announced a Federal Flood Risk Reduction Standard, developed by the Hurricane Sandy Rebuilding Task Force, for federal agencies to use in their rebuilding and recovery efforts in the regions affected by Sandy. The standard added 1 foot to the advisory flood elevations provided by the flood insurance program. Up to that point, our development team had been working in fairly confidential settings, but now we had to consider additional questions. Would the tool be used to address regulatory requirements of the flood insurance program? Why use the tool instead of the advisory elevations or the Federal Flood Risk Reduction Standard? How should decisionmakers deal with any differences between the 1-foot advisory elevation and the information conveyed by the tool? We spent the next two months addressing these questions and potential confusion over different sets of information about current and future flood risk.

Our team—drawn from NOAA, the Army Corps, FEMA, and the U.S. Global Change Research Program—released the tool in June 2013. It provides both interactive maps depicting flood-prone areas and calculators for estimating future flood elevations, all under different scenarios of sea-level rise. Between the time of Secretary Donovan’s announcement and the release of the tool, the team worked extensively with representatives from FEMA field offices, the New York City climate panel, the New York City Mayor’s Office, and the New York and New Jersey governors’ offices to ensure that the choices about the underlying scientific information were well understood and clearly communicated. The social connections were again critical in convening the right people from the various levels of government and the scientific and practitioner communities.
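
Conceptually, the calculators perform a simple operation: add a chosen sea-level rise scenario to today’s advisory flood elevation to estimate a future flood elevation. The sketch below illustrates that idea with placeholder scenario values; the actual tool draws on the datasets described above and handles local detail that this simplification omits.

```python
# Simplified sketch of the kind of calculation the tool supports: a chosen
# sea-level rise scenario is added to today's advisory flood elevation to
# estimate a future flood elevation. Scenario values are placeholders, not
# the tool's actual datasets.

SLR_SCENARIOS_FT = {      # hypothetical mid-century sea-level rise increments
    "low": 0.5,
    "intermediate": 1.5,
    "high": 2.5,
}

def future_flood_elevation_ft(advisory_flood_elevation_ft: float, scenario: str) -> float:
    """Advisory flood elevation plus the chosen sea-level rise scenario."""
    return advisory_flood_elevation_ft + SLR_SCENARIOS_FT[scenario]

for name in SLR_SCENARIOS_FT:
    print(f"{name}: {future_flood_elevation_ft(11.0, name):.1f} ft")
```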

During this period, the team made key changes in how the tool presented information. For example, the Hurricane Sandy Rebuilding Task Force approved the integration of sea-level rise estimates from the New York climate panel into the tool, providing a federal seal of approval that could give state regulators confidence in the science. This decision also helped address the minimal discrepancies between the long-term scenarios of sea-level rise made for the National Climate Assessment and the shorter-term estimates made by the New York climate panel. The President’s Office of Science and Technology Policy also approved expanding access to the tool via a page on the Global Change Research Program’s Web site [http://www.globalchange.gov/what-we-do/assessment/coastal-resilience-resources]. This access point helped distinguish the tool as an interagency product separate from the National Flood Insurance Program, thus making clear that its use was advisory, not mandated by regulation. Supporting materials on the Web site (including frequently asked questions, metadata, planning context, and disclaimers, among others) provided background detail for various user communities and also helped to make clear that the New York climate panel sea-level rise estimates were developed through a legitimate and transparent scientific process.

The process of making the tool useful for decisionmakers involved diverse players in the Sandy recovery story discussing different ideas about how people and organizations were considering risk in their rebuilding decisions. For example, our development team briefed a diverse set of decisionmakers in the New York and New Jersey governments to facilitate deliberations about current and future risk. Our decision to use the New York City climate panel estimates in the tool helped to change the recovery and rebuilding process from past- to future-oriented, not only because the science was of good quality but because integration of the panel’s numbers into the tool brought federal, state, and city experts and decisionmakers together, while alleviating the concerns of state regulators about small discrepancies between different sea-level rise estimates.

In 2013, New York City testified in a rate case (the process by which public utilities set rates for consumers) and called for Con Edison (the city’s electric utility) and the Public Service Commission to ensure that near-term investments are made to fortify utility infrastructure assets. Con Edison has planned for $1 billion in resiliency investments that address future risk posed by climate change. As part of this effort, the utility has adopted a design criterion that uses FEMA’s flood insurance rate maps, which are based on 100-year flood elevations, plus 3 feet to account for a high-end estimate of sea-level rise by mid-century. This marked the first time in the country that a rate case explicitly incorporated consideration of climate change.

New York City also passed 16 local laws in 2013 to improve building codes in the floodplain, to protect against future risk of flooding, high winds, and prolonged power outages. For example, Local Law 96/2013 adopted FEMA’s updated flood insurance rate maps with additional safety standards for some single-family homes, based on sea-level rise as projected by the NPCC.

Our development team would never have known about New York City’s need to develop a rate case with federally vetted information on future risk if we had not worked with officials from the city’s planning department. Engaging city and state government officials was useful not just for improving the clarity and purpose of the information in the tool. It was also useful for choosing what information would be included in the tool to enable a comprehensive and implementable strategy.

Different scales of government—local, state, and federal—have to be able to lead processes for bringing appropriate knowledge and standards into planning, design, and engineering. Conversely, all scales of government need to validate the standards revealed by these processes, because they all play a role in implementation.

Building resilience capacity

This complex story has a particularly important yet unfamiliar lesson: Planning departments are key partners in helping break the cycle of recovery and response, and in helping people adopt lessons learned from science into practice. Planners at different levels of government convene different communities of practice and disciplinary expertise around shared challenges. Similarly, scientific organizations that cross the boundaries between these different communities—such as the New York City climate panel and the team that developed the sea-level rise tool—can also encourage those interactions. As I’ve tried to illustrate, planning departments convene scientists and decisionmakers alike to work across organizational boundaries that under normal circumstances help to define their identities. These are important ingredients for preparing for future natural disasters and increasing our resilience to them over the long term, and yet this type of science capacity is barely supported by the federal government. How might the lessons from Hurricane Sandy and the Sea Level Rise Tool for Sandy Recovery be more broadly adopted to help the nation move away from disaster-by-disaster policy and planning? Here are two ideas to consider in the context of coastal resilience.

First, re-envision the development of resilient flood standards as planning processes, not just numbers or codes.

Planning is a comprehensive and iterative function in government and community development. Planners are connected to or leading the development of everything from capital public works projects to regional plans for ecosystem restoration. City waterfronts, wildlife refuges and restored areas, and transportation networks all draw the attention of planning departments.

In their efforts, planners seek to keep development goals rooted in public values, and they are trained, formally and informally, in the process of civic engagement, in which citizens have a voice in shaping the development of their community. Development choices include how much risk to accept and whether or how the federal government regulates those choices. For this reason, planners maintain practical connections to existing regulations and laws and to the management of existing resources. Their position in the process of community development and resource management requires planners to also be trained in applying the results of research (such as sea-level rise scenarios) to design and engineering. Over the past decade, many city and state governments have either explicitly created sustainability planner positions at high levels (such as mayors’ or governors’ offices) or reframed their planning departments to emphasize sustainability, as in the case of New York City. The planners in these positions are incredibly important for building resilience into urban environments, not because they can see the future, but because they provide a nucleus for convening the diverse constituencies from which visions of, and pathways to, the future are imagined and implemented.

If society is to be more resilient, planners must be critical actors in government. We cannot expect policymakers and the public to simply trust or comprehend or even find useful what we learn from science. We have to reconcile what we learn from science with the practical realities we face in an increasingly populated and stressed environment. And yet, despite their critical role in achieving resilience, many local planning departments across the country have been eliminated during the economic downturn.

Second, configure part of our research and service networks to be flexible in response to emergent risk.

The federal government likes to build new programs, sometimes at the expense of working through existing ones, because new initiatives can be political instruments for demonstrating responsiveness to public needs. But recovery from disasters and preparation to better respond to future disasters can be supported through existing networks. Across the country, FEMA has regional offices that work with emergency managers, and NOAA supports more than 30 Sea Grant college programs that engage communities in science-based discussions on issues related to coastal management. Digital Coast, a partnership between NOAA and six national, regional, and state planning and management organizations, provides timely information on coastal hazards and communities. These organizations work together to develop knowledge and solutions for planners and managers in coastal zones, in part by funding university-based science-and-assessment teams. The interdisciplinary expertise and localized focus of such teams help scientists situate climate and weather information in the context of ongoing risks such as sea-level rise and coastal flooding. All of these efforts contributed directly and indirectly to the Sea Level Rise Tool before, during, and after Hurricane Sandy.

The foundational efforts of these programs exemplify how science networks can leverage their relationships and expertise to get timely and credible scientific information into the hands of people who can benefit from it. Rather than creating new networks or programs, the nation could support efforts explicitly designed to connect and leverage existing networks for risk response and preparation. The story I’ve told here illustrates how existing relationships within and between vibrant communities of practice are an important part of the process of productively bringing science and decisionmaking together. New programs are much less effective in capitalizing on those relationships.

One way to support capacities that already exist would be to anticipate the need to distribute relief funds to existing networks. This idea could be loosely based on the Rapid Response Research Grants administered by the National Science Foundation, with a couple of important variations from its usual focus on supporting basic research. Agencies could come together to identify a range of planning processes supported by experts who work across communities of practice to ensure a direct connection to preparedness for future natural disasters of the same kind. These priority-setting exercises might build on the interagency discussions that occur as part of the federal Global Change Research Program. Also, since any such effort would require engagement between decisionmakers and scientists, recipients of this funding would be asked to report on the nature of additional, future engagement. What further engagement is required? Who are the critical actors, and are they adequately supported to play a role in resilience efforts? How are those networks increasing resilience over time? Gathering information about questions such as these is critical for the federal government to make science policy decisions that support a sustainable society.

Working toward a collective vision

The shift from reaction and response to preparedness seems like common sense, but as this story illustrates, it is complicated to achieve. One reaction to this story might be to replicate the technology in the sea-level rise tool or to apply the same or similar information sets elsewhere. The federal government has already begun such efforts, and this approach will supply people with better information.

Yet across the country, there are probably hundreds of similar decision tools developed by universities, nongovernmental organizations, and businesses that depict coastal flooding resulting from sea-level rise. The key difference in the development of the Sandy recovery tool was the intensive and protracted social process of discussing what information went into it and how it could be used. By connecting those discussions to existing planning processes, we reached different scales of government with different responsibilities and authority for reaching the overarching goal of developing more sustainable urban and coastal communities.

This story suggests that the role of science in helping society to better manage persistent environmental problems such as sea-level rise is not going to emerge from research programs isolated from the complex social and institutional settings of decisionmaking. Science policies aimed at achieving a more sustainable future must increasingly emphasize the complex and time-consuming social aspects of bringing scientific advance and decisionmaking into closer alignment.

Adam Parris is program manager of Regional Integrated Sciences and Assessments at the National Oceanic and Atmospheric Administration.

Breaking the Climate Deadlock

DAVID GARMAN

KERRY EMANUEL

BRUCE PHILLIPS

Developing a broad and effective portfolio of technology options could provide the common ground on which conservatives and liberals agree.

The public debate over climate policy has become increasingly polarized, with both sides embracing fairly inflexible public positions. At first glance, there appears little hope of common ground, much less bipartisan accord. But policy toward climate change need not be polarizing. Here we offer a policy framework that could appeal to U.S. conservatives and progressives alike. Of particular importance to conservatives, we believe, is the idea embodied in our framework of preserving and expanding, rather than narrowing, societal and economic options in light of an uncertain future.

This article reviews the state of climate science and carbon-free technologies and outlines a practical response to climate deadlock. Although it may be difficult to envision the climate issue becoming depoliticized to the point where political leaders can find common ground, even the harshest positions at the polar extremes of the current debate need not preclude the possibility.

We believe that a close look at what is known about climate science and the economic competitiveness of low-carbon/carbon-free technologies—which include renewable energy, advanced energy efficiency technologies, nuclear energy, and carbon capture and sequestration systems (CCS) for fossil fuels—may provide a framework that could even be embraced by climate skeptics willing to invest in technology innovation as a hedge against bad climate outcomes and on behalf of future economic vitality.

Most atmospheric scientists agree that humans are contributing to climate change. Yet it is important to also recognize that there is significant uncertainty regarding the pace, severity, and consequences of the climate change attributable to human activities; plausible impacts range from the relatively benign to globally catastrophic. There is also tremendous uncertainty regarding short-term and regional impacts, because the available climate models lack the accuracy and resolution to account for the complexities of the climate system.

Although this uncertainty complicates policymaking, many other important policy decisions are made in conditions of uncertainty, such as those involving national defense, preparation for natural disasters, or threats to public health. We may lack a perfect understanding of the plans and capabilities of a future adversary or the severity and location of the next flood or the causes of a new disease epidemic, but we nevertheless invest public resources to develop constructive, prudent policies and manage the risks surrounding each.

Reducing atmospheric concentrations of greenhouse gases (GHGs) would require widespread deployment of carbon-free energy technologies and changes in land-use practices. Under extreme circumstances, addressing climate risks could also require the deployment of climate remediation technologies such as atmospheric carbon removal and solar radiation management. Unfortunately, leading carbon-free electric technologies are currently about 30 to 290% more expensive on an unsubsidized basis than conventional fossil fuel alternatives, and technologies that could remove carbon from the atmosphere or mitigate climate impacts are mostly unproven and some may have dangerous consequences. At the same time, the pace of technological change in the energy sector is slow; any significant decarbonization will unfold over the course of decades. These are fundamental hurdles.

It is also reasonably clear, particularly after taking into account the political concerns about economic costs, that widespread deployment of carbon-free technologies will not take place until diverse technologies are fully demonstrated at commercial scale and the cost premium has been reduced to a point where the public views the short-term political and economic costs as being reasonably in balance with plausible longer-term benefits.

Given these twin assessments, we propose a practical approach to move beyond climate deadlock. The large cost premium and unproven status of many technologies point to a need to focus on innovation, cost reduction, and successfully demonstrating multiple strategically important technologies at full commercial scale. At the same time, the uncertainty of long-term climate projections, together with the 1000+ year lifetime of CO2 in the atmosphere, argues for a measured and flexible response, but one that can be ramped up quickly.

This can be done by broadening and intensifying efforts to develop, fully demonstrate, and reduce the cost of a variety of carbon-free energy and climate remediation technologies, including carbon capture and sequestration and advanced nuclear, renewable, and energy efficiency technologies. In addition, atmospheric carbon removal and solar radiation management technologies should be carefully researched.

Conservatives have typically been strong supporters of fundamental government research, as well as technology development and demonstration in areas that the private sector does not support, such as national security and health. Also, even the most avowed climate skeptic will often concede that there are risks of inaction, and that it is prudent for national and global leaders to hedge against those risks, just as a prudent corporate board of directors will hedge against future risks to corporate profitability and solvency. Moreover, increasing concern about climate change abroad suggests potentially large foreign markets for innovative energy technologies, thus adding an economic competitiveness rationale for investment that does not depend on one’s assessment of climate risk.

Some renewed attention is being devoted to innovation, but funding is limited and the scope of technologies is overly constrained. Our suggested policy approach, in contrast, would involve a three- to fivefold increase in R&D and demonstration spending in both the public and private sectors, including possible new approaches that involve more than simply providing the funding through traditional channels such as the Department of Energy (DOE) and the national labs.

Investing in the development of technology options is a measured, flexible approach that could also shorten the time needed to decarbonize the economy. It would give future policymakers more opportunities to deploy proven, lower-cost technologies, without the commitment to deploy them if they turn out to be unnecessary, ineffective, or uneconomic. And with greater emphasis on innovation, it would allow technologies to be deployed more quickly, broadly, and cost-effectively, which would be particularly important if impacts are expected to be rapid and severe.

In addition to research, development, and demonstration (RD&D), new policy options to support technology deployment should be explored. Current deployment programs principally using the tax code have not, at least to date, successfully commercialized technologies in a widespread and cost-effective manner or provided strong incentives for continued innovation. New approaches are necessary.

Climate knowledge

Although new research constantly adds to the state of scientific knowledge, the basic science of climate change and the role of human-generated emissions have been reasonably well understood for at least several decades. Today, most climate scientists agree that human-caused warming is underway. Some of the major areas of agreement include the following:

  • GHGs, which include water vapor, carbon dioxide (CO2), and other gases, trap heat in the atmosphere and warm the earth by allowing solar radiation to pass largely unimpeded to the surface of the earth and re-radiating a portion of the thermal radiation received from the earth back toward the surface. This is the “greenhouse effect.”
  • Paleoclimatology, which is the study of past climate conditions based on the geologic record, shows that changing levels of GHGs in the atmosphere have been associated with climatic change as far back as the geological record extends.
  • The concentration of CO2 in the atmosphere has increased from about 280 parts per million (ppm) in preindustrial times to about 400 ppm today, an increase of 43%. Ice core records suggest that the current level is higher than at any time over at least the past 650,000 years, whereas analysis of marine sediments suggests that CO2 levels have not been this high in at least 2.1 million years.
  • Human-made (anthropogenic) CO2 emissions, primarily resulting from the consumption of fossil fuels, are probably responsible for much of the warming observed in recent decades. Climate scientists attempting to replicate climate patterns over the past 30 years have not been able to do so without accounting for anthropogenic GHGs and sulfate aerosols.
  • CO2 emissions are also contributing to increases in surface ocean acidity, which degrades ocean habitats, including important commercial fisheries.
  • Given the current rate of global emissions, atmospheric concentrations of CO2 could reach twice the preindustrial level within the next 50 years, concentration levels our planet has not experienced in literally millions of years.
  • The global climate system has tremendous inertia. Due to the persistence of CO2 in the atmosphere and the oceans, many of the effects of climate change will not diminish naturally for hundreds of years if not longer.

About these basic points there is little debate, even from those who believe that the risks are not likely to be severe. Indeed, it is also true that long-term climate projections are subject to considerable uncertainty and legitimate scientific debate. The fundamental complexity of the climate system, in particular the feedback effects of clouds and water vapor, is the most important contributor to uncertainty. Consequently, long-term projections reflect considerable uncertainty in how rapidly, and to what extent, temperatures will increase over time. It is possible that the climate will be relatively slow to warm and that the effects of warming may be relatively mild for some time. But there is also a worrisome likelihood that the climate will warm too quickly for society to adapt and prosper—with severe or perhaps even catastrophic consequences.

Unfortunately, we should not expect the range of climate projections to narrow in a meaningful way soon; policymakers may hope for the best but must prepare for the worst.

Technology readiness

Under the best of circumstances, the risks associated with climate uncertainties could be managed, at least in part, with a mix of today’s carbon-free energy and climate remediation technologies. Carbon-free energy generation, as used in this paper, includes renewable, nuclear, and carbon capture and sequestration systems for fossil fuels such as coal and natural gas. Climate remediation technologies (often grouped together under the term “geoengineering”) include methods for removing greenhouse gases from the atmosphere (such as air capture), as well as processes that might mitigate some of the worst effects of climate change (such as solar radiation management). We note that energy efficiency or the pursuit of greater energy productivity is prudent even in the absence of climate risk, so it is particularly important in the face of it. Although this discussion focuses on electric generation, any effective decarbonization policy will also need to address emissions from the transportation sector; the residential, commercial, and industrial sectors; and land use. Similar frameworks, focused on expanding sensible options and hedging against a worst-case future, could be developed for each.

To be effective, carbon-free and climate remediation technologies and processes need to be economically viable, fully demonstrated at scale (if they have not yet been), and capable of global deployment in a reasonably timely manner. Moreover, they would also need to be sufficiently diverse and economical to be deployed in varied regional economies across the world, ranging from the relatively low-growth developed world to the rapidly growing developing nations, particularly those with expanding urban centers such as China and India.

The list of strategically essential climate technologies is not long, yet each of these technologies, in its current state of development, is limited in important ways. Although their status and prospects vary in different regions of the world, they are either not yet fully demonstrated, not capable of rapid widespread global deployment, or unacceptably expensive relative to conventional energy technologies. These limitations are well documented, if not widely recognized or acknowledged. The limitations of current technologies can be illustrated by quickly reviewing the status of a number of major electricity-generating technologies.

On-shore wind and some other renewable technologies such as solar photovoltaic (PV) have experienced dramatic cost reductions over the past three decades. These cost reductions, along with deployment subsidies, have clearly had an impact. Between 2009 and 2013, U.S. wind output more than doubled, and U.S. solar output increased by a factor of 10. However, because ground-level winds are typically intermittent, wind turbines cannot be relied on to generate electricity whenever there is electrical demand, and the amount of generating output cannot be directly controlled in response to moment-by-moment changes in electric demand and the availability of other generating resources. As a consequence, wind turbines do not produce electrical output of comparable economic value to the output of conventional generating resources such as natural gas–fired power plants that are, in energy industry parlance, both “firm” and “dispatchable.” Furthermore, the cost of a typical or average onshore wind project in the United States, without federal and state subsidies, although now less than that of new pulverized coal plants, is still substantially more than a new gas-fired combined-cycle plant, which is generally considered the lowest-cost conventional resource in most U.S. power markets. Solar PV also suffers from its intermittency and variability, and significant penetration of solar PV can test grid reliability and complicate distribution system operation, as we are now seeing in Germany. Some of these challenges can be overcome with careful planning and coordinated execution, but the scale-up potential and economics of these resources could be improved substantially by innovations in energy storage, as well as technological improvements to increase renewables’ power yield and capacity factor.

Current light-water nuclear power technology is also more expensive than conventional natural gas generation in the United States, and suffers from safety concerns, waste disposal challenges, and proliferation risks in some overseas markets. Further, given the capital intensity and large scale of today’s commercial nuclear plants (which are commonly planned as two 1,000–megawatt (MW) generating units), the total cost of a new nuclear plant exceeds the market capitalization of many U.S. electric utilities, making sole-ownership investments a “bet-the-company” financial decision for corporate management and shareholders. Yet recent improvements in costs have been demonstrated in overseas markets through standardized manufacturing processes and economies of scale; and many new innovative designs promise further cost reductions, improved safety, a smaller waste footprint, and less proliferation risk.

CCS technology is also limited. Although all major elements of the technology have been demonstrated successfully, and the process is used commercially in some industrial settings and for enhanced oil recovery (EOR), it is only now on track to being fully demonstrated at two commercial-scale electric generation facilities under construction, one in the United States and one in Canada. And deploying CCS on existing electric power plants would reduce generation efficiency and increase production costs to the point where such CCS retrofits would be uneconomic today without large government incentives or a carbon price higher than envisioned in recent policy proposals.

The cost premium of these carbon-free technologies relative to that of conventional natural gas–fired combined cycle technology in the United States is illustrated in the next chart.

As shown, the total levelized cost of new natural gas combined-cycle generation over its expected operating life is roughly $67 per megawatt-hour (MWh). In contrast, typical onshore wind projects (without federal and state subsidies and without considering the cost of backup power and other grid integration requirements) cost about $87/MWh. New gas-fired combined-cycle plants with CCS cost approximately $93/MWh and nuclear projects about $108/MWh. New coal plants with CCS, solar PV, and offshore wind projects are yet more costly. Taken together, these estimates generally point to a cost premium of $20 to $194/MWh, or about 30 to 290%, for low-carbon generation.
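
As a rough check of that range, the sketch below recomputes the premiums from the levelized costs quoted above, using the new gas combined-cycle plant as the benchmark; the costliest technologies are not itemized in the text, so only the stated $194/MWh upper bound is echoed.

```python
# Rough arithmetic check of the premiums quoted above, using the levelized
# costs given in the text (dollars per megawatt-hour) and the new gas
# combined-cycle plant as the benchmark. The costliest options mentioned
# (coal with CCS, solar PV, offshore wind) are not itemized in the text,
# so only the stated $194/MWh upper bound is echoed here.

GAS_COMBINED_CYCLE = 67  # $/MWh benchmark from the text

lcoe = {
    "onshore wind (unsubsidized)": 87,
    "gas combined cycle with CCS": 93,
    "nuclear": 108,
}

for tech, cost in lcoe.items():
    premium = cost - GAS_COMBINED_CYCLE
    print(f"{tech}: +${premium}/MWh ({premium / GAS_COMBINED_CYCLE:.0%} premium)")

print(f"stated upper bound: +$194/MWh ({194 / GAS_COMBINED_CYCLE:.0%} premium)")
```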

Some may argue that this cost premium is overstated because it does not reflect the cost of the carbon externality. This would be accurate from a conceptual economic perspective, but from a commercial or customer perspective, it is understated because it doesn’t account for the substantial costs of providing backup or stored power to overcome intermittency problems. The practical effect of this cost difference remains: However the cost premium might be reduced over time (whether through carbon pricing, other forms of regulation, higher fossil fuel prices, or technological innovation), the gap today is large enough to constitute a fundamental impediment to developing effective deployment policies.

This is evidenced in the United States by the wind industry’s continued dependence on federal tax incentives, the difficulty of securing federal or state funding for proposed utility-scale CCS projects, the slow pace of developing new nuclear plants, and the recent controversies in several states proposing to develop new offshore wind and coal gasification projects. The inability to pass federal climate legislation can also be seen as an indication of widespread concern about the cost of emissions reductions using existing technologies, the effectiveness of the legislation in the global long-term context, or both.

FIGURE 1. Levelized costs of new electric generating technologies. Source: Energy Information Administration, levelized cost of electricity (LCOE) estimates, Annual Energy Outlook 2013.

Cost considerations are even more fundamental in the developing world, where countries’ overriding economic goal is to raise their population’s standard of living. This usually requires inexpensive sources of electricity, and technologies that are only available at a large cost premium are unlikely to be rapidly or widely adopted.

Although there is little doubt that there are opportunities to reduce the cost and improve the performance of today’s technologies, technological transformation in the energy sector has historically been slow, unpredictable, and incremental, because the sector relies on long-lived, capital-intensive production and infrastructure assets tied together through complex global industries—characteristics that contribute to tremendous inertia. Engineering breakthroughs are rare, and new technologies typically take many decades to reach maturity at scale, sometimes requiring the development of new business models. As described by Arnulf Grübler and Nebojsa Nakicenovic, scholars at the International Institute for Applied Systems Analysis (IIASA), the world has made only two “grand” energy transitions: one from biomass to coal between 1850 and 1920, and a second from coal to oil and gas between 1920 and today. The first transition lasted roughly 70 years; the second has now lasted approximately 90 years.

A similar theme is seen in the electric generating industry. In the 130 years or so since central generating stations and the electric lightbulb were first established, only a handful of basic electric generating technologies have become commercially widespread. By far the most common of these is the thermal power station, which uses energy from either the combustion of fossil fuels (coal, oil, and gas) or a nuclear reactor to operate a steam turbine, which in turn powers an electric generator.

The conditions that made energy system transitions slow in the past still exist today. Even without political gridlock, it could well take many decades to decarbonize the global energy sector, a period during which atmospheric concentrations of CO2 would climb much higher and the risks to society would grow ever greater. This points to the importance of beginning the long transition to a decarbonized economy as soon as possible.

Policy implications

Given the uncertainties in climate projection, innovation, and technology deployment, developing a broad range of technology options can be a hedge against climate risk.

Technology “options” (as the term is used here) are carbon-free technologies that are relatively costly or not yet fully demonstrated but that, with innovation through fundamental and applied RD&D, might become sufficiently reliable, affordable, and scalable to be widely deployed if and when policymakers determine they are needed. (They should not be confused with technologies that have already been commercialized, such as controls for non-CO2 greenhouse gases like methane or niche enhanced oil recovery applications of fossil CCS.)

A technology option is analogous to a financial option. The investment to create the technology is akin to the cost of buying the financial option; it gives the owner the right but not the obligation to engage in a later transaction.

Examples of carbon-free generation options include small modular nuclear reactors (SMRs) or advanced Generation IV nuclear reactor technologies such as sodium or gas-cooled fast reactors; advanced CCS technologies for both coal and natural gas plants; underground coal gasification with CCS (UCG/CCS); and advanced renewable technologies. Developing options on such technologies (assuming innovation success) would reduce the cost premium of decarbonization, the time required to decarbonize the global economy, and the risks and costs of quickly scaling up technologies that are not yet fully proven.

In contrast to carbon-free generation, climate remediation options could directly remove carbon from the atmosphere or mitigate some of its worst effects. Examples include atmospheric carbon removal technologies (such as air capture and sequestration, regional or continental afforestation, and ocean iron fertilization) and solar radiation management technologies (such as stratospheric aerosol injection and cloud-whitening systems). Because these technologies have the potential to reduce atmospheric concentrations or global average temperatures, they could (if proven) reduce, reverse, or prevent some of the worst impacts of climate change if atmospheric concentrations rise to unacceptably high levels. The challenge with this category of technologies will be to reduce the cost and increase the scale of application while avoiding unintended environmental and ecosystem harms that would offset the benefits they create.

Again, investing now in the development of such technology options would not create an obligation to deploy them, but it would yield reliable performance and cost data for future policymakers to consider in determining how to most effectively and efficiently address the climate issue. That is the essence of an iterative risk management process. Such a portfolio approach would also position the country to benefit economically from the growing overseas markets for carbon-free generation and other low-carbon technologies. It also addresses the political and economic polarization around various energy options, with some ideologies and interests focused on renewables, others on nuclear energy, and still others on CCS. A portfolio approach not only hedges against future climate uncertainties but also offers expanded opportunities for political inclusiveness and economic benefit. Over time, investments in new and expanded RD&D programs would generate new intellectual property that could help grow investment, design, manufacturing, employment, sales, and exports serving overseas and perhaps domestic markets.

This portfolio approach would be a significant departure from current innovation and deployment policies. Although new attention is being devoted to energy innovation, including DOE’s Advanced Research Projects Agency–Energy (ARPA-E), the scope of technologies is far too constrained. For instance, despite its importance, a fully funded program to demonstrate multiple commercial-scale post-combustion CCS systems for both coal and natural gas generating technologies has yet to be established. Similarly, efforts to develop advanced nuclear reactor designs are limited, and there is almost no government support for climate remediation technologies. Renewable energy can make a large contribution, but numerous studies have demonstrated that it will probably be much more difficult and costly to decarbonize our electricity system within the next half century without CCS and nuclear power.

Our approach, in contrast, would involve a broader mix of technologies and innovation programs, including fossil energy with CCS, advanced nuclear, advanced renewable, and climate remediation technologies, to maximize our chances of creating proven, scalable, and economic technologies for deployment.

The specific deployment policies needed would depend in part on the choice of technologies and the status of their development, but they would probably encompass an expanded suite of programs across the RD&D-to-commercialization continuum, including fundamental and applied R&D programs, incentives, and other means to support pilot and demonstration programs, government procurement programs, and joint international technology development and transfer efforts.

The innovation processes used by the federal government also warrant assessment and possible reform. A number of important recent studies and reports have critiqued past and current policies and put forward recommendations to accelerate innovation. Of particular note are recommendations to provide greater support for demonstration projects, expand ARPA-E, create new institutions (such as a Clean Energy Deployment Administration, a Green Bank, an Energy Technology Corporation, Public-Private Partnerships, or Regional Innovation Investment Boards), and promote competition between government agencies such as DOE and the Department of Defense. All of these deserve further attention.

Of course there will never be enough money to do everything. That’s why a strategic approach is essential. The portfolio should focus on strategically important technologies with the potential to make a material difference, based on analytical criteria such as:

  • The likelihood of becoming “proven.” Many if not most of the technologies that are likely to be considered options have not yet been proven to be reliable technologies at reasonable cost. Consequently, assessing this prospect, along with a time frame for full development and deployment, would obviously be an important decision criterion. This would not preclude “long-shot” technologies; rather, it would ensure that their prospects for success are weighed against other criteria.
  • Ability to reach multi-terawatt scale. Some projections of energy demand suggest that complete decarbonization of the energy system could require 30 terawatts of carbon-free power by mid-century, given current growth patterns.
  • Relevance to Asia and the developing world. Because most of the growth in the developing world will be concentrated in large dense cities, distributed energy sources or those requiring large amounts of land area may have less relevance.
  • Ability to generate firm and dispatchable power. Electrical demands vary widely over time, often fluctuating by a factor of 2 over the course of a single day. Because electricity needs to be generated in a reliable fashion in response to demand, intermittent resources could have less relevance under conditions of deep decarbonization, unless their electrical output can be converted into a firm resource through grid-scale energy storage systems.
  • Potential to reduce costs to within a reasonable range of conventional technologies. The closer a zero-carbon energy source can be brought to cost parity with conventional resources such as gas and coal, the more likely it is to be adopted rapidly and at scale.
  • Private-sector investment. If the private sector is already investing adequately in the development or demonstration of a given technology, there is no need for duplicative government support.
  • Potential to advance U.S. competitiveness. Investments should be sensitive to areas of energy innovation where the United States is well positioned to be a global leader.

To illustrate this further, programs might include the following.

  1. A program to demonstrate multiple CCS technologies, including post-combustion coal, pre-combustion coal, and natural gas combined-cycle technologies at full commercial scale.
  2. A program to develop advanced nuclear reactor designs, including a federal RD&D program capable of addressing each of the fundamental concerns about nuclear power. Particular attention should be given to the potential of SMRs and advanced, non–light-water reactors. A key complement to such a program would be an assessment and, if necessary, reform of the Nuclear Regulatory Commission’s expertise and capacity to review and license advanced reactor designs.
  3. Augmentation of the Department of Defense’s capabilities to sponsor development, demonstration, and scale-up of advanced energy technology projects that contribute to the military’s national security mission, such as energy security for permanent bases and energy independence for forward bases in war zones.
  4. Continued expansion of international technology innovation programs, and transfer to the United States of insights from overseas manufacturing processes that have yielded large reductions in capital costs. In recent years, a number of government-to-government and business-to-nongovernmental-organization partnerships have been established to facilitate such technology innovation and transfer efforts.
  5. Consideration of a competitive procurement model, in which the government provides funding opportunities for private-sector partners to demonstrate and deploy selected technologies that currently lack a market rationale for commercialization.

Note that this is not intended to be an exhaustive list of the efforts that could be considered, but new models of public-private cooperation in technology development deserve particular consideration.

The technology options approach outlined in this paper, with its emphasis on research, development, demonstration, and innovation, serves a different albeit overlapping purpose from deployment programs such as technology portfolio standards, carbon-pricing policies, and feed-in tariffs. The options approach focuses primarily on developing improved and new technologies, whereas deployment programs focus primarily on commercializing proven technologies.

RD&D and deployment policies are generally recognized as complementary; both would be needed to fully decarbonize the economy unless carbon mitigation were in some way highly valued in the marketplace. In practice, at least to date, technology deployment programs have not successfully commercialized carbon-free technologies in a widespread, cost-effective manner, nor have they offered incentives to continue to innovate and improve the technologies. New approaches, including market-based pricing mechanisms such as reverse auctions and other competitive procurement methods, are likely to be more flexible, economically efficient, and programmatically effective.

Yet deploying new carbon-free technologies on a widespread basis over an extended period of time will be a policy challenge until the cost premium has been reduced to a level at which the tradeoffs between short-term certain costs and long-term uncertain benefits are acceptable to the public. Until then, new deployment programs will be difficult to establish, and if they are established, they are likely either to have little material impact (because efforts to constrain program costs would leave them with very limited scope) or to be quickly terminated (because of high program costs), as happened with, for example, the U.S. Synthetic Fuels Corporation. Therefore, substantially reducing the cost premium for carbon-free energy must be a priority for both innovation and deployment programs. It is likely to be the fastest and most practical path to creating a realistic opportunity to rapidly decarbonize the economy.

Although we are not proposing a specific or complete set of programs in this paper, it is fair to say that our policy approach would involve a substantial increase in energy RD&D spending—an effort that could cost between $15 billion and $25 billion per year, a three- to fivefold increase over recent energy RD&D spending levels of roughly $5 billion per year.

This is a significant increase over historic levels but modest compared to current funding for medical research (approximately $30 billion per year) and military research (approximately $80 billion per year), in line with previous R&D initiatives over the years (such as the War on Terror, the NIH buildup in the early 2000s, and the Apollo space program), and similar to other recent energy innovation proposals.

The increase in funding would need to be paid for, whether by redirecting existing subsidies, funding a clean energy trust from federal revenues accruing from expanded oil and gas production, levying a modest “wires charge” on electricity ratepayers, or reallocating funds as part of a larger tax reform effort. We are not suggesting that this would necessarily be easy, only that such investments are necessary and are not out of line with other innovation investment strategies that the nation has adopted, usually with bipartisan support. In this light, we emphasize again the political virtues of a portfolio approach that keeps technological options open and offers the additional possibility of enhanced economic competitiveness.

In light of the uncertain but real risk of severe climate impacts, prudence calls for some form of risk management. The minimum 50-year period that will be required to decarbonize the global economy and the effectively irreversible nature of climate impacts argue for undertaking that effort as soon as reasonably possible. Yet pragmatism requires us to recognize that most of the technologies needed to manage this risk are either substantially more expensive than conventional alternatives or as yet unproven.

These uncertainties and challenges need not be confounding obstacles to action. Instead, they can be addressed in a sensible way by adopting the broad “portfolio of technology options” approach outlined in this paper; that is, by developing a diverse array of technologies (including carbon capture, advanced nuclear, advanced renewables, atmospheric carbon removal, and solar radiation management) to the point where they are proven, and by deploying the most successful ones if and when policymakers determine they are needed.

This approach would provide policymakers with greater flexibility to establish policies deploying proven, scalable, and economical technologies. And by placing greater emphasis on reducing the cost of scalable carbon-free technologies, it would allow these technologies to be deployed more quickly, broadly, and cost-effectively than would otherwise be possible. At the same time, it would not be a commitment to deploy them if they turn out to be unnecessary, ineffective, or uneconomical.

We believe that this pragmatic portfolio approach should appeal to thoughtful people across the political spectrum, but most notably to conservatives who have been skeptical of an “all-in” approach to climate that fails to acknowledge the uncertainties of both policymaking and climate change. It is at least worth testing whether such an approach might be able to break our current counterproductive deadlock.

David Garman, a principal and managing partner at Decker Garman Sullivan LLC, served as undersecretary in the Department of Energy in the George W. Bush administration. Kerry Emanuel is the Cecil and Ida Green Professor of atmospheric science at the Massachusetts Institute of Technology and codirector of MIT’s Lorenz Center, a climate think tank devoted to basic curiosity-driven climate research. Bruce Phillips is a director of The NorthBridge Group, an economic and strategic consulting firm.

Books

What’s My (Cell) Line?

Cloning Wildlife: Zoos, Captivity, and the Future of Endangered Animals

by Carrie Friese. New York: New York University Press, 2013, 258 pp.

Stewart Brand

What a strange and useful book this is!

It looks like much ado about not much—just three experiments conducted at zoos on cross-species cloning (in banteng, gaur, and African wild-cat). Yet the much-ado is warranted, given the rapid arrival of biotech tools and techniques that may revolutionize conservation with the prospect of precisely targeted genetic rescue for endangered and even extinct species. Carrie Friese’s research was completed before “de-extinction” was declared plausible in 2013, but her analysis applies directly.

First, a note: readers of this review should be aware of two perspectives at work. Friese writes as a sociologist, so expect occasional sentences such as, “Cloned animals are not objects here…. They are ‘figures’ in [Donna] Haraway’s sense of the word, in that they embody ‘material-semiotic nodes or knots in which diverse bodies and meanings coshape one another.’” I write as a proponent of high-tech genetic rescue, being a co-founder of Revive & Restore, a small nonprofit pushing ahead with de-extinction for woolly mammoths and passenger pigeons and with genetic assistance for potentially inbred black-footed ferrets. I’m also the author of a book on ecopragmatism, called Whole Earth Discipline, that Friese quotes approvingly.

Friese is a sharp-eyed researcher. She begins by noting with interest that “in direct contradiction to public enthusiasm surrounding endangered animal cloning, many people in zoos have been rather ambivalent about such technological developments.” Dissecting ambivalence is her joy, I think, because she detects in it revealing indicators of deep debate and the hidden processes by which professions change their mind fundamentally, driven by technological innovation.

The innovation in this case concerns the ability, new in this century, of going beyond same-species cloning (such as with Dolly the sheep) to cross-species cloning. An egg from one species, such as a domestic cow, has its nucleus removed and replaced with the nucleus and nuclear DNA of an endangered species, such as the Javan banteng, a type of wild cow found in Southeast Asia. The egg is grown in vitro to an early-stage embryo and then implanted in the uterus of a cow. When all goes well (it sometimes doesn’t), the pregnancy goes to term, and a new Javan banteng is born. In the case of the banteng, its DNA was drawn from tissue cryopreserved 25 years earlier by San Diego’s Frozen Zoo, in the hope that it could help restore genetic variability to the remaining population of bantengs assumed to be suffering from progressive inbreeding. (At Revive & Restore we are doing something similar with black-footed ferret DNA from the Frozen Zoo.)

Now comes the ambivalence. The cloned “banteng” may have the nuclear DNA of a banteng, but its mitochondrial DNA (a lesser but still critical genetic component found outside of the nucleus and passed on only maternally) comes from the egg of a cow. Does that matter? It sure does to zoos, which see their task as maintaining genetically pure species. Zoos treat cloned males, which can pass along only nuclear DNA to future generations, as valuable “bridges” of pure banteng DNA to the banteng gene pool. But cloned female bantengs, with their baggage of cow mitochondrial DNA ready to be passed to their offspring, are deemed valueless hybrids.

Friese describes this view as “genetic essentialism.” It is a byproduct of the “conservation turn” that zoos took in the 1970s. In this shift, zoos replaced their old cages with immersion displays of a variety of animals looking somewhat as if they were in the wild, and they also took on a newly assumed role as repositories of wildlife gene pools to supplant or enrich, if necessary, populations that are threatened in the wild. (The conservation turn not only saved zoos; it pushed them to new levels of popularity. In the United States, 100 million people a year now visit zoos, wildlife parks, and aquariums.)

But in the 1980s some conservation biologists began moving away from focusing just on species to an expanded concern about whole ecosystems and thus about ecological function. They became somewhat relaxed about species purity. When peregrine falcons died out along the East Coast of the United States, conservationists replaced them with hybrid falcons from elsewhere, and the birds thrived. Inbred Florida panthers were saved with an infusion of DNA from Texas cougars. Coyotes, on their travels from west to east, have been picking up wolf genes, and the wolves have been hybridizing with dogs.

As the costs of DNA sequencing keep coming down, field biologists have been discovering that hybridization is rampant in nature and indeed may be one of the principal mechanisms of evolution, which is said to be speeding up in these turbulent decades. Friese notes that “as an institution, the zoo is particularly concerned with patrolling the boundaries between nature and culture.” Defending against cloned hybridization, they think, is defending nature from culture. But if hybridization is common in nature, then what?

Soon enough, zoos will be confronting the temptation of de-extincted woolly mammoths (and passenger pigeons, great auks, and Carolina parakeets, among others). Those thrilling animals could be huge draws, deeply educational, exemplars of new possibilities for conservation. They will also be, to a varying extent, genomic hybrids—mammoths that are partly Asian elephant, passenger pigeons that are partly band-tailed pigeon, great auks that are partly razorbill, Carolina parakeets that are partly sun parakeet. Should we applaud or turn away in dismay? I think that conservation biologists will look for one primary measure of success: Can the revived animals take up their old ecological role and manage on their own in the wild? If not, they are freaks. If they succeed, welcome back.

Friese has written a valuable chronicle of the interaction of wildlife conservation, zoos, and biotech in the first decade of this century. It is a story whose developments are likely to keep surprising us for at least the rest of this century, and she loves that. Her book ends: “Humans should learn to respond well to the surprises that cloned animals create.”

Stewart Brand (sb@longnow.org) is the president of the Long Now Foundation in Sausalito, California.

Climate perceptions

Reason in a Dark Time: Why the Struggle against Climate Change Failed—and What It Means for Our Future

by Dale Jamieson. New York: Oxford University Press, 2014, 260 pp.

Elizabeth L. Malone

Did climate change cause Hurricanes Katrina and Sandy? Does a cold, snowy winter disprove climate change? As Dale Jamieson says in Reason in a Dark Time, “These are bad questions and no answer can be given that is not misleading. It is like asking whether when a baseball player gets a base hit, it is caused by his .350 batting average. One cannot say ‘yes,’ but saying ‘no’ falsely suggests that there is no relationship between his batting average and the base hit.” Analogies such as this are a major strength of this book, which both distills and extends the thoughtful analysis that Jamieson has been providing for well over two decades.

I’ve been following Jamieson’s work since the early 1990s, when a group at Pacific Northwest National Laboratory began to assess the social science literature relevant to climate change. Few scholars outside the physical sciences had addressed climate change explicitly; Jamieson, a philosopher, had. His publications on ethics, moral issues, uncertainty, and public policy laid down important arguments captured in Human Choice and Climate Change, which I co-edited with Steve Rayner in 1998. And the arguments are still current and vitally important as society contemplates the failure of all first-best solutions regarding climate change: an effective global agreement to reduce greenhouse gas emissions, vigorous national policies, adequate transfers of technology and other resources from industrialized to less-industrialized countries, and economic efficiency, among others.

In Reason in a Dark Time, Jamieson works steadfastly through the issues. He lays out the larger picture with energy and clarity. He takes us back to the beginning, with the history of scientific discoveries about the greenhouse effect and its emergence as a policy concern through the 1992 Earth Summit’s spirit of high hopefulness and the gradual unraveling of those high hopes by the time of the 2009 Copenhagen Climate Change Conference. He discusses obstacles to action, from scientific ignorance to organized denial to the limitations of our perceptions and abilities in responding to “the hardest problem.” He details two prominent but inadequate approaches to both characterizing the problem of climate change and prescribing solutions: economics and ethics. And finally, he discusses doable and appropriate responses in this “dark world” that has so far failed to agree on and implement effective actions that adequately reflect the scope of the problem.

Well, you may say, we’ve seen this book before. There are lots of books (and articles, both scholarly and mainstream) that give the history, discuss obstacles, criticize the ways the world has been trying to deal with climate change, and give recommendations. And indeed, Jamieson himself draws on his own lengthy publication record.

But you should read this book for its insights. If you are already knowledgeable about the history of climate science and international negotiations, you might skim this discussion. (It’s a good history, though.) All readers will gain from examining the useful and clear distinctions that Jamieson draws regarding climate skepticism, contrarianism, and denialism. Put simply, he sees that “healthy skepticism” questions evidence and views while not denying them; contrarianism may assert outlandish views but is skeptical of all views, including its own outlandish assertions; and denialism quite simply rejects a widely believed and well-supported claim and tries to explain away the evidence for the claim on the basis of conspiracy, deceit, or some rhetorical appeal to “junk science.” And take a look at the table and related text that depict a useful typology of eight frames of science-related issues that relate to climate change: social progress, economic development and competitiveness, morality and ethics, scientific and technical uncertainty, Pandora’s box/Frankenstein’s monster/runaway science, public accountability and governance, middle way/alternative path, and conflict and strategy.

Jamieson’s discussions of the “limits of economics” and the “frontiers of ethics” are also useful. Though they tread much-traveled ground, they take a slightly different slant, starting not with the forecast but with the reality of climate change. For instance, the discount rate (how economics values costs in the future) has been the subject of endless critiques, but typically with the goal of coming up with the “right” rate. But Jamieson points out that this is a fruitless endeavor, as social values underlie arguments for almost any discount rate. Thus, the discount rate (and other economic tools) is simply inadequate and, moreover, a mere stand-in for the real discussion about how society should plan for the future.
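
To see why the choice matters so much, consider a simple, purely hypothetical illustration (ours, not Jamieson’s or Nordhaus’s): the present value today of a fixed climate damage a century from now swings by orders of magnitude depending on the discount rate one chooses to defend.

    # Hypothetical example: present value of $1 trillion in climate damages
    # occurring 100 years from now, under two commonly argued discount rates.
    damages, years = 1.0e12, 100
    for rate in (0.01, 0.07):
        present_value = damages / (1 + rate) ** years
        print(f"discount rate {rate:.0%}: present value ≈ ${present_value / 1e9:,.1f} billion")

At a 1% rate the answer is roughly $370 billion; at 7% it is barely $1 billion. Whether a dollar of harm to people a century hence is worth about 37 cents or about a tenth of a cent today is plainly a question of values rather than measurement, which is Jamieson’s point.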

Similarly, his discussion of ethics points out that “commonsense morality” cannot “provide ethical guidance with some important aspects of climate-changing behavior”—so it’s not surprising that society has failed to act on climate change. The basis for action is not a matter of choosing appropriate values from some eternal ethical and moral menu, but of evolving values that will be relevant to a climate-changed world in which we make choices about how to adapt to climate change and whether to prevent further climate change—oh, and about whether or not to dabble in planet-altering geoengineering. Ethical and moral revolutions have occurred (e.g., capitalism’s elevation of selfishness), and climate ethicists are breaking new ground in drawing moral connections between emissions-producing activities and climate change.

Although Jamieson’s explorations do not provide an antidote to the gloom of our dark time, readers will find much to think about here.

He clearly rebuts the argument, for example, that individual actions do not matter, asserting that “What we do matters because of its effects on the world, but what we do also matters because of its effects on ourselves.” Expanding on this thought, he says: “In my view we find meaning in our lives in the context of our relationships to humans, other animals, the rest of nature, and the world generally. This involves balancing such goods as self-expression, responsibility to others, joyfulness, commitment, attunement to reality and openness to new (often revelatory) experiences. What this comes to in the conduct of daily life is the priority of process over product, the journey over the destination, and the doing over what is done.” To my mind, this sounds like the good life that includes respect for nature, temperance, mindfulness, and cooperativeness.

Ultimately, Jamieson turns to politics and policy. As the terms prevention, mitigation, adaptation, and geoengineering have become fuzzy at best, he proposes a new classification of responses to climate change: adaptation (to reduce the negative effects of climate change), abatement (to reduce greenhouse gas emissions), mitigation (to reduce concentrations of greenhouse gases in the atmosphere), and solar radiation management (to alter the Earth’s energy balance). I agree with Jamieson that we need all of the first three and also that we need to be very cautious about “the category formerly known as geoengineering.”

Most of all, we need to live in the world as is, with all its diversity of motives and potential actions, not the dream world imagined at the Earth Summit held in 1992 in Rio de Janeiro. Jamieson gives us seven practical priorities for action (yes, they’ve been said before, but not often in the real-world context that he sketches). And he offers three guiding principles (my favorite is “stop arguing about what is optimal and instead focus on doing what is good,” with “good” encompassing both practical and ethical elements).

I do have some quarrels with the book, starting with the title. In its fullest form, it is unnecessarily wordy and gloomy. And as Jamieson does not talk much of “reason” in the book (nor is there even a definition of the contested term that I could find), why is it displayed so prominently?

More substantively, the gloom that Jamieson portrays is sometimes reinforced by statements that seem almost apocalyptic, such as, “While once particular human societies had the power to upset the natural processes that made their lives and cultures possible, now people have the power to alter the fundamental global conditions that permitted human life to evolve and that continue to sustain it. There is little reason to suppose that our systems of governance are up to the tasks of managing such threats.” But people have historically faced threats (war, disease, overpopulation, the Little Ice Age, among others) that likely seemed to them just as serious, so statements such as Jamieson’s invite the backlash that asserts, well, here we still are and better off, too.

Then there is the question of the intended audience, which Jamieson specifies as “my fellow citizens and…those with whom I have discussed these topics over the years.” But the literature reviews and the heavy use of citations seem to target a narrower academic audience. I would hope that people involved in policymaking and other decisionmaking would not be put off by the academic trappings, but I have my doubts.

If the book finds a wide audience, our global conversation about climate change could become more fruitful. Those who do read it will be rewarded with much to think about in the insights, analogies, and accessible discussions of productive pathways into the climate-changed future.

Elizabeth L. Malone is a staff scientist at the Joint Global Change Research Institute, a project sponsored by Pacific Northwest National Laboratory and the University of Maryland.

Forum

Evidence-driven policy

In “Advancing Evidence-Based Policymaking to Solve Social Problems” (Issues, Fall 2013), Jeffrey B. Liebman has written an informative and thoughtful article on the potential contribution of empirical analysis to the formation of social policy. I particularly commend his recognition that society faces uncertainty when making policy choices and his acknowledgment that learning what works requires a willingness to try out policies that may not succeed.

He writes “If the government or a philanthropy funds 10 promising early childhood interventions and only one succeeds, and that one can be scaled nationwide, then the social benefits of the overall initiative will be immense.” He returns to reinforce this theme at the end of the article, writing “What is needed is a decade in which we make enough serious attempts at developing scalable solutions that, even if the majority of them fail, we still emerge with a set of proven solutions that work.”

Unfortunately, much policy analysis does not exhibit the caution that Liebman displays. My recent book Public Policy in an Uncertain World observes that analysts often suffer from incredible certitude. Exact predictions of policy outcomes are common, and expressions of uncertainty are rare. Yet predictions are often fragile, with conclusions resting on critical unsupported assumptions or on leaps of logic. Thus, the certitude that is frequently expressed in policy analysis often is not credible.

A disturbing feature of recent policy analysis is that many researchers overstate the informativeness of randomized experiments. It has become common to use two of the terms in the Liebman article—”evidence-based policymaking” and “rigorous evaluation methods”—as code words for such experiments. Randomized experiments sometimes enable one to draw credible policy-relevant conclusions. However, there has been a lamentable tendency of researchers to stress the strong internal validity of experiments and downplay the fact that they often have weak external validity. (An analysis is said to have internal validity if its findings about the study population are credible. It has external validity if one can credibly extrapolate the findings to the real policy problem of interest.)

Another manifestation of incredible certitude is that governments produce precise official forecasts of unknown accuracy. A leading case is Congressional Budget Office scoring of the federal debt implications of pending legislation. Scores are not accompanied by measures of uncertainty, even though legislation often proposes complex changes to federal law, whose budgetary implications must be difficult to foresee.

Why do policy analysts express certitude about policy impacts that, in fact, are rather difficult to assess? A proximate answer is that analysts respond to incentives. The scientific community rewards strong, novel findings. The public takes a similar stance, expecting unequivocal policy recommendations. These incentives make it tempting for researchers to maintain assumptions far stronger than they can persuasively defend, in order to draw strong conclusions.

We would be better off if we were to face up to the uncertainties that attend policy formation. Some contentious policy debates stem from our failure to admit what we do not know. Credible analysis would make explicit the range of outcomes that a policy might realistically produce. We would do better to acknowledge that we have much to learn than to act as if we already know the truth.

CHARLES F. MANSKI

Board of Trustees Professor in Economics and Fellow of the Institute for Policy Research

Northwestern University

Evanston, Illinois

cfmanski@northwestern.edu

Manski is author of Public Policy in an Uncertain World: Analysis and Decisions (Harvard University Press, 2013).

Model behavior

With “When All Models Are Wrong” (Issues, Winter 2014), Andrea Saltelli and Silvio Funtowicz add to a growing literature of guidance on handling scientific evidence for scientists and policymakers; recent examples include Sutherland, Spiegelhalter, and Burgman’s “Policy: Twenty tips for interpreting scientific claims,” and Chris Tyler’s “Top 20 things scientists need to know about policymaking.” Their particular focus on models is timely as complex issues are of necessity being handled through modeling, prone though models and model users are to misuse and misinterpretation.

Saltelli and Funtowicz provide mercifully few (7, more memorable than 20) “rules,” sensibly presented more as guidance and, in their words, as an adjunct to essential critical vigilance. There is one significant omission; a rule 8 should be “Test models against data”! Rule 1 (clarity) is important in enabling others to understand and gain confidence in a model, although it risks leading to oversimplification; models are used because the world is complex. Rule 3 might more kindly be rephrased as “Detect overprecision”; labeling important economic studies such as the Stern review as “pseudoscience” seems harsh. Although studies of this type can be overoptimistic in terms of what can be said about the future, they can also represent an honest best attempt, within the current state of knowledge (hopefully better than guesswork), rather than a truly pseudoscientific attempt to cloak prejudice in scientific language. Perhaps also, the distinction between prediction and forecasting has not been recognized here; more could also have been made of the policy-valuable role of modeling in exploring scenarios. But these comments should not detract from a useful addition to current guidance.

Alice Aycock

Those visiting New York City’s Park Avenue through July 20th will experience a sort of “creative disruption.” Where one would expect to see only the usual mix of cars, tall buildings, and crowded sidewalks, there will also be larger-than-life white paper-like forms that seem to be blowing down the middle of the street, dancing and lurching in the wind. The sight has even slowed the pace of the city’s infamously harried residents, who cannot resist the invitation to stop and enjoy.

Alice Aycock’s series of seven enormous sculptures in painted aluminum and fiberglass is called “Park Avenue Paper Chase” and stretches from 52nd Street to 66th Street. The forms, inspired by spirals, whirlwinds, and spinning tops, are hardly the normal view on a busy city street. According to Aycock, “I tried to visualize the movement of wind energy as it flowed up and down the avenue creating random whirlpools, touching down here and there and sometimes forming dynamic three-dimensional massing of forms. The sculptural assemblages suggest waves, wind turbulence, turbines, and vortexes of energy…. Much of the energy of the city is invisible. It is the energy of thought and ideas colliding and being transmitted outward. The works are the metaphorical visual residue of the energy of New York City.”

Aycock’s work tends to draw from diverse subjects and ideas ranging from art history to scientific concepts (both current and outdated). The pieces in “Park Avenue Paper Chase” visually reference Russian constructivism while being informed by mathematical phenomena as found in wind and wave currents. Far from being literal theoretical models, Aycock’s sculptures intuitively combine seemingly disjointed ideas into forms that make visual sense. Their forms, combined with their placement on Park Avenue, work to disorient the viewer, at least temporarily, to capture the imagination, and to challenge perceptions.

Aycock’s art career began in the early 1970s and has included installations at the Museum of Modern Art, San Francisco Art Institute, and the Museum of Contemporary Art, Chicago, as well as installations in many public spaces such as Dulles International Airport, the San Francisco Public Library, and John F. Kennedy International Airport.

JD Talasek

Images courtesy of the artist and Galerie Thomas Schulte and Fine Art Partners, Berlin, Germany. Photos by Dave Rittinger.

ALICE AYCOCK, Cyclone Twist (Park Avenue Paper Chase), Painted aluminum, 27′ high × 15′ diameter, Edition of 2, 2013. The sculpture is currently installed at 57th Street on Park Avenue.

ALICE AYCOCK, Hoop-La (Park Avenue Paper Chase), Painted aluminum and steel, 19′ high × 17′ wide × 24′ long, Edition of 2, 2014. The sculpture is currently installed at 53rd Street on Park Avenue.

It is interesting to consider why such guidance should be necessary at this time. The need emerges from the inadequacies of undergraduate science education, especially in Britain where school and undergraduate courses are so narrowly focused (unlike the continental baccalaureate which at least includes some philosophy). British undergraduates get little training in the philosophy and epistemology of science. We still produce scientists whose conceptions of “fact” and “truth” remain sturdily Logical Positivist, lacking understanding of the provisional, incomplete nature of scientific evidence. Likewise, teaching about the history and sociology of science is unusual. Few learn the skills of accurate scientific communication to nonscientists. These days, science students may learn about industrial applications of science, but few hear about its role in public policy. Many scientists (not just government advisers) appear to misunderstand the relationship between the conclusions they are entitled to draw about real-world problems and the wider issues involved in formulating and testing ideas about how to respond to them. Even respected scientists often put forward purely technocratic “solutions,” betraying ignorance of the social, economic, and ethical dimensions of problems, and thereby devaluing science advice in the eyes of the public and policymakers.

Saltelli and Funtowicz’s helpful checklist contributes to improving this situation, but we need to make radical improvements to the ways we train our young scientists if we are to bridge the science/policy divide more effectively.

MILES PARKER

Centre for Science and Policy

MIKE BITHELL

Department of Geography

University of Cambridge

Cambridge, UK

Wet drones

In “Sea Power in the Robotic Age” (Issues, Winter 2014), Bruce Berkowitz describes an impressive range of features and potential missions for unmanned maritime systems (UMSs). Although he’s rightly concerned with autonomy in UMSs as an ethical and legal issue, most of the global attention has been on autonomy in unmanned aerial vehicles (UAVs). Here’s why we may be focusing on the wrong robots.

The need for autonomy is much more critical for UMSs. UAVs can communicate easily with satellites and ground stations to receive their orders, but it is notoriously difficult to broadcast most communication signals through liquid water. If unmanned underwater vehicles (UUVs), such as robot submarines, need to surface in order to make a communication link, they will give away their position and lose their stealth advantage. Even unmanned surface vehicles (USVs), or robot boats, that already operate above water face greater challenges than UAVs, such as limited line-of-sight control because of a two-dimensional operating plane, heavy marine weather that can interfere with sensing and communications, more obstacles on the water than in the air, and so on.

All this means that there is a compelling need for autonomy in UMSs, more so than in UAVs. And that’s why truly autonomous capabilities will probably emerge first in UMSs. Oceans and seas also are much less active environments than land or air: There are far fewer noncombatants to avoid underwater. Any unknown submarine, for instance, can reasonably be presumed not to be a recreational vehicle operated by an innocent individual. So UMSs don’t need to worry as much about the very difficult issue of distinguishing lawful targets from unlawful ones, unlike the highly dynamic environments in which UAVs and unmanned ground vehicles (UGVs) operate.

Therefore, there are also lower barriers to deploying autonomous systems in the water than in any other battlespace on Earth. Because the marine environment makes up about 70% of Earth’s surface, it makes sense for militaries to develop UMSs. Conflicts are predicted to increase there, for instance, as Arctic ice melts and opens up strategic shipping lanes that nations will compete for.

Of course, UAVs have been getting the lion’s share of global attention. The aftermath images of UAV strikes are violent and visceral. UAVs tend to have sexy/scary names such as Ion Tiger, Banshee, Panther, and Switchblade, while UMSs have more staid and nondescript names such as Seahorse, Scout, Sapphire, and HAUV-3. UUVs also mostly look like standard torpedoes, in contrast to the more foreboding and futuristic (and therefore interesting) profiles of Predator and Reaper UAVs.

For those and other reasons, UMSs have mostly been under the radar in ethics and law. Yet, as Berkowitz suggests, it would benefit both the defense and global communities to address ethics and law issues in this area in advance of an international incident or public outrage—a key lesson from the current backlash against UAVs. Some organizations, such as the Naval Postgraduate School’s CRUSER consortium, are looking at both applications and risk, and we would all do well to support that research.

PATRICK LIN

Visiting Associate Professor

School of Engineering

Stanford University

Stanford, California

Director and Associate Philosophy Professor

Ethics and Emerging Sciences Group

California Polytechnic State University

San Luis Obispo, California

palin@calpoly.edu

Robots aren’t taking your job

Perhaps a better title for “Anticipating a Luddite Revival” (Issues, Spring 2014) might be “Encouraging a Luddite Revival,” for Stuart Elliot significantly overstates the ability of information technology (IT) innovations to automate work. By arguing that as many as 80% of jobs will be eliminated by technology in as little as two decades, Elliot is inflaming Luddite opposition.

Elliot does attempt to be scholarly in his methodology to predict the scope of technologically based automation. His review of past issues of IT scholarly journals attempts to understand tech trends, while his analysis of occupation skills data (O-NET) attempts to assess what occupations are amenable to automation.

But his analysis is faulty on several levels. First, to say that a software program might be able to mimic some human work functions (e.g., finding words in a text) is completely different than saying that the software can completely replace a job. Many information-based jobs involve a mix of both routine and nonroutine tasks, and although software-enabled tools might be able to help with routine tasks, they have a much harder time with the nonroutine ones.

Second, many jobs are not information-based but involve personal services, and notwithstanding progress in robotics, we are a long, long way away from robots substituting for humans in this area. Robots are not going to drive the fire truck to your house and put out a fire anytime soon.

ALICE AYCOCK, Spin-the-Spin (Park Avenue Paper Chase), Painted aluminum, 18′ high × 15′ wide × 20′ long, Edition of 2, 2014. The sculpture is currently installed at 55th Street on Park Avenue.

Moreover, although it’s easy to say that the middle level O-NET tasks “appear to be roughly comparable to the types of tasks now being described in the research literature,” it’s quite another to give actual examples, other than some frequently cited ones such as software-enabled insurance underwriting. In fact, the problem with virtually all of the “robots are taking our jobs” claims is that they suffer from the fallacy of composition. Proponents look at the jobs that are relatively easy to automate (e.g., travel agents) and assume that: (1) these jobs will all be automated quickly, and (2) all or most jobs fit into this category. Neither is true. We still have over half a million bank tellers (with the Bureau of Labor Statistics predicting an increase in the next 10 years), long after the introduction of ATMs. Moreover, most jobs are actually quite hard to automate, such as maintenance and repair workers, massage therapists, cooks, executives, social workers, nursing home aides, and sales reps, to list just a few.

I am somewhat optimistic that this vision of massive automation may in fact come true, perhaps by the end of the century, for it would bring increases in living standards (with no change in unemployment rates). But there is little evidence for Elliot’s claim of “a massive transformation in the labor market over the next few decades.” In fact, the odds are much higher that U.S. labor productivity growth will clock in well below 3% per year (the highest rate of productivity growth the United States has ever achieved).

ROBERT ATKINSON

President

Information Technology and Innovation Foundation

Washington, DC

ratkinson@itif.org

Climate change on the right

In Washington, every cause becomes a conduit for special-interest solicitation. Causes that demand greater transfers of wealth and power attract more special interests. When these believers of convenience successfully append themselves to the original cause, it compounds and extends the political support. When it comes to loading up a bill this way, existential causes are the best of all and rightfully should be viewed with greatest skepticism. As Steven E. Hayward notes in “Conservatism and Climate Science” (Issues, Spring 2014), the Waxman-Markey bill was a classic example of special-interest politics run amok.

So conservatives are less skeptical about science than they are about scientific justifications for wealth transfers and losses of liberty. Indeed, Yale professor Dan Kahan found, to his surprise, that self-identified Tea Party members scored better than the population average on a standard test of scientific literacy. Climate policy rightfully elicits skepticism from conservatives, although the skepticism is often presented as anti-science.

Climate activists have successfully and thoroughly confused the climate policy debate. They present the argument this way: (1) Carbon dioxide is a greenhouse gas emitted by human activity; (2) human emissions of carbon dioxide will, without question, lead to environmental disasters of unbearable magnitude; and (3) our carbon policy will effectively mitigate these disasters. The implication swallowed by nearly the entire popular press is that point one (which is true) proves points two and three.

In reality, the connections between points one and two and between points two and three are chains made up of very weak links. The science is so unsettled that even the Intergovernmental Panel on Climate Change (IPCC) cannot choose from among the scores of models it uses to project warming. It hardly matters; the accelerating warming trends that all of them predict are not present in the data (in fact the trend has gone flat for 15 years), nor do the data show any increase in extreme weather from the modest warming of the past century. This provokes the IPCC to argue that the models have not been proven wrong (because their projections are so foggy as to include possible decades of cooling) and that with certain assumptions, some of them predict really bad outcomes.

Not wanting to incur trillions of dollars of economic damage based on these models is not anti-science, which brings us to point three.

Virtually everyone agrees that none of the carbon policies offered to date will have more than a trivial impact on world temperature, even if the worst-case scenarios prove true. So the argument for the policies degenerates to a world of tipping points and climate roulette wheels—there is a chance that this small change will occur at a critical tipping point. That is, the trillions we spend might remove the straw that would break the back of the camel carrying the most valuable cargo. With any other straw or any other camel there would be no impact.

So however unscientific it may seem in the contrived all-or-none climate debate, conservatives are on solid ground to be skeptical.

DAVID W. KREUTZER

Research Fellow in Energy Economics and Climate Change

The Heritage Foundation

Washington, DC

David.kreutzer@heritage.org

Steven E. Hayward claims that the best framework for addressing large-scale disruptions, including climate change, is building adaptive resiliency. If so, why does he not present some examples of what he has in mind, after dismissing building seawalls, moving elsewhere, or installing more air conditioners as defeatist? What is truly defeatist is prioritizing adaptation over prevention, i.e., the reduction of greenhouse gas emissions.

Others concerned with climate change have a different view. As economist William Nordhaus has pointed out (The Climate Casino, Yale University Press, 2013), in areas heavily managed by humans, such as health care and agriculture, adaptation can be effective and is necessary, but some of the most serious dangers, such as ocean acidification and losses of biodiversity, are unmanageable and require mitigation of emissions if humanity is to avoid catastrophe. This two-pronged response combines cutting back emissions with reactively adapting to those we fail to cut back.

Hayward does admit that our capacity to respond to likely “tipping points” is doubtful. Why then does he not see that mitigation is vital and must be pursued far more vigorously than in the past? Nordhaus has estimated that the cost of not exceeding a temperature increase of 2°C might be 1 to 2% of world income if worldwide cooperation could be assured. Surely that is not too high a price for ensuring the continuance of human society as we know it!

ALICE AYCOCK, Twin Vortexes (Park Avenue Paper Chase), Painted aluminum, 12′ high × 12′ wide × 18′ long, Edition of 2, 2014. The sculpture is currently installed at 54th Street on Park Avenue.

ALICE AYCOCK, Maelstrom (Park Avenue Paper Chase), Painted aluminum, 12′ high × 16′ wide × 67′ long, Edition of 2, 2014. The sculpture is currently installed between 52nd and 53rd Streets on Park Avenue.

Hayward states that “Conservative skepticism is less about science per se than its claims to usefulness in the policy realm.” But climate change is a policy issue that science has placed before the nations of the world, and science clearly has a useful role in the policy response, both through the technologies of emissions control and by adaptive agriculture and public health measures. To rely chiefly on “adaptive resiliency” and not have a major role for emissions control is to tie one hand behind one’s back.

EVILLE GORHAM

Regents’ Professor of Ecology Emeritus

University of Minnesota

Minneapolis, Minnesota

Steven E. Hayward should be commended for his thoughtful article, in which he explains why political conservatives do not want to confront the challenge of climate change. Nevertheless, the article did not increase my sympathy for the conservative position, and I would like to explain why.

Hayward begins by explaining why appeals to scientific authority alienate conservatives. Science is not an endeavor that anyone must accept on the word of authority. People should feel free to examine and question scientific work and results. But it doesn’t make sense to criticize science without making an effort to thoroughly understand the science first: the hypotheses together with the experiments that attempt to prove them. What too many conservatives do is deny the science out of hand without understanding it well, dismissing it because of a few superficial objections. I read of one skeptic who dismissed global warming because water vapor is a more powerful greenhouse gas than carbon dioxide. That’s true, but someone who thinks through the argument will understand why that doesn’t make carbon dioxide emissions less of a problem. Climate change is a challenge that we may not agree on how to confront, but that doesn’t excuse any of us from thinking it through carefully.

Hayward points out that “the climate enterprise is the largest crossroads of physical and social science ever contemplated.” That may be true, but conservatives don’t separate the two, and they should. If the science is wrong, they need to explain how the data is flawed, how the theory has not taken all the variables into account, how the statistical analysis is incorrect, or how the data admits of more than one interpretation. If the policy prescriptions are wrong, then they need to explain why these prescriptions will not obtain the results we seek or how they will cost more than the benefits they will provide. Then they need to come up with better alternatives. But too many conservatives don’t separate the science from the policy; they conflate the two. They accuse the scientists of being liberals, and then they won’t consider either the science or the policy. That’s just wrong.

Hayward further explains that conservatives “doubt you can ever understand all the relevant linkages correctly or fully, and especially in the policy responses put forth that emphasize the combination of centralized knowledge with centralized power.” I agree with that, but it shouldn’t stop us from trying to prevent serious problems. Hayward’s statement is a powerful argument for caution, and policy often does have unintended consequences, but when we’re faced with a threat, we act. We didn’t understand all the consequences of entering World War II, building the atomic bomb, passing the Civil Rights Act, inventing Social Security, or going to war in Afghanistan, but we did them because we thought we had to. Then we dealt with the consequences as best we could. Climate change should be no different.

The weakest part of Hayward’s article is his charge that “the American scientific community—or at least certain vocal parts of it—is susceptible to the charge that it has become an ideological faction.” Now I’m not sure that scientists are the monolithic bloc Hayward makes them out to be (can he point to a poll?). But even if they are, it is entirely irrelevant. Scientific work always deserves to be evaluated on its own merits, regardless of whatever personal leanings the investigators might have. Good scientific work is objective and verifiable, and if the investigators are allowing their work to be influenced by their personal biases, that should come out in review, especially if many scientific studies of the same phenomenon are being evaluated. The political leanings of the investigators are a very bad reason for ignoring their work.


Just a couple of other points. Hayward states that “Future historians are likely to regard as a great myopic mistake the collective decision to treat climate change as more or less a large version of traditional air pollution to be attacked with the typical emissions control policies,” but it is hard to see how the problem of greenhouse gas concentrations in the atmosphere can be resolved any other way. We can’t get a handle on global warming unless we find a way to limit emissions of greenhouse gases (or counterbalance the emissions with sequestration, which will take just as much effort). Emissions control is not just a tactic; it is a central goal, just as fighting terrorism and curing cancer are central goals. We might fail to achieve them, but that shouldn’t happen for lack of trying. We need to be patient and persevere. If we environmentalists are correct, the evidence will mount, and public opinion will eventually side with us. By beginning to work on emissions control now, we will all be in a better position to move quickly when the political winds shift in our favor.

Hayward’s alternative to an aggressive climate policy is what he calls “building adaptive resiliency,” but he is very vague about what that means. Does he mean that individuals and companies should adapt to climate change on their own, or that governments need to promote resiliency? If the latter, how? The point of environmentalists is that even if we are able to adapt to climate change without large loss of life and property, doing so later will be far more expensive than taking direct measures to confront the source of the problem—carbon emissions—now. And we really don’t have much time. If the climate scientists are correct, we have only 50 to 100 years before some of the worst effects of climate change start hitting us. Considering the size and complexity of the problem and the degree of cooperation that any serious effort to address climate change will require from all levels of government, companies, and private individuals, that’s not a lot of time. We had better get moving.


ALICE AYCOCK, Waltzing Matilda (Park Avenue Paper Chase), Reinforced fiberglass, 15′ high × 15′ wide × 18′ long, Edition of 2, 2014. The sculpture is currently installed at 56th Street on Park Avenue.

Hayward earns our gratitude for helping us better understand how conservatives feel on this important issue. Nevertheless, the conservative movement is full of bright and intelligent people who could be contributing many valuable ideas to the climate debate, and they’re not. That’s a real shame.

MICHAEL H. KLEIN

Brooklyn, New York

mhkblogs@gmail.com

Does U.S. science still rule?

In “Is U.S. Science in Decline?” (Issues, Spring 2014), Yu Xie offers a glimpse into the plight of early-career scientists. The gravity of the situation cannot be overstated. Many young researchers have become increasingly disillusioned and frustrated about their career trajectory because of declining federal support for basic scientific research.

Apprehension among early-career scientists is rooted in the current fiscal environment. In fiscal year 2013, the National Institutes of Health (NIH) funded roughly 700 fewer grants because of sequestration: the across-the-board spending cuts that remain an albatross for the entire research community. To put this into context, the success rate for grant applications is now one out of six and may worsen if sequestration is not eliminated. This has left many young researchers rethinking their career prospects. In 1980, close to 18% of all principal investigators (PIs) were age 36 and under; the percentage has fallen to about 3% in recent years. NIH Director Francis Collins has said that the federal funding climate for research “keeps me awake at night,” and echoed this sentiment at a recent congressional hearing: “I am worried that the current financial squeeze is putting them [early-career scientists] in particular jeopardy in trying to get their labs started, in trying to take on things that are innovative and risky.” Samantha White, a public policy fellow for Research!America, a nonprofit advocacy alliance, sums up her former career path as a researcher in two words: “anxiety-provoking.” She left bench work temporarily to support research in the policy arena, describing to lawmakers the importance of a strong investment in basic research.

The funding squeeze has left scientists with limited resources and many of them, like White, pursuing other avenues. More than half of academic faculty members and PIs say they have turned away promising new researchers since 2010 because of the minimal growth of the federal science agencies’ budgets, and nearly 80% of scientists say they spend more time writing grant applications, according to a survey by the American Society for Biochemistry and Molecular Biology. Collins lamented this fact in a USA Today article: “We are throwing away probably half of the innovative, talented research proposals that the nation’s finest biomedical community has produced,” he said. “Particularly for young scientists, they are now beginning to wonder if they are in the wrong field. We have a serious risk of losing the most important resource that we have, which is this brain trust, the talent and the creative energies of this generation of scientists.”

U.S. Nobel laureates relied on government funding early in their careers to advance research that helped us gain a better understanding of how to treat and prevent deadly diseases. We could be squandering opportunities for the next generation of U.S. laureates if policymakers fail to make a stronger investment in medical research and innovation.

MIKE COBURN

Chief Operating Officer

Research!America

Alexandria, Virginia

www.researchamerica.org

@ResearchAmerica

Chinese aspirations

Junbo Yu’s article “The Politics Behind China’s Quest for Nobel Prizes” (Issues, Spring 2014) tells an interesting story about how China is applying its strategy for winning Olympic gold to science policy. The story might fit well with the Western stereotype of Communist bureaucrats, but the real politics are more complex and nuanced.

First of all, let’s get the story straight. The article refers to a recent “10,000 Talent” program run by the organizational department of the Chinese Communist Party. It is a major talent development program aimed at selecting and supporting domestic talent in various areas, including scientists, young scholars, entrepreneurs, top teachers, and skilled engineers. The six scientists referred to in Yu’s article were among the 277 people identified as the program’s first cohort. Although there were indeed media reports describing these scientists as candidates charged with winning Nobel Prizes, those reports were quickly dismissed by the relevant officials as media hype and misunderstanding. For example, three of the first six scientists work in research areas that have no relevance to Nobel Prizes at all.

The real political issue is how to balance talent trained overseas with talent trained domestically. In 2008, China initiated a “1,000 Talent” program aimed at attracting highly skilled Chinese living overseas to return to China. It was estimated that between 1978 and 2011, more than 2 million Chinese students went abroad to study and that only 36.5% of them returned. Although the 1,000 Talent program has been successful in attracting outstanding scholars back to China, it has also generated some unintended consequences.

As part of the recruitment package, the program gives each returnee a one million RMB (equivalent to $160,000) settlement payment. Many returnees can also get special grants for research and a salary comparable to what they were paid overseas. This preferential treatment has generated some concern and resentment among those who were trained domestically. They have to compete hard for research grants, and their salaries are ridiculously low. In an Internet survey conducted in China by Yale University, many people expressed support for the government’s effort to attract people back from overseas, but felt it was unfair to give people benefits based on where they were trained rather than on how they perform.

In response to these criticisms and concerns, the 10,000 Talent program was developed as a way to focus on domestically trained talent. Instead of going through a lengthy selection process, the program tried to integrate existing talent programs run by various government agencies.

Although these programs might be useful in the short run, the best way to attract and keep talented people is to create an open, fair, and nurturing environment for people who love research, and to pay them adequately so that they can have a decent life. It is simple and doable in China now, and in the long run it will be much more effective than the 1,000 Talent and 10,000 Talent programs.

LAN XUE

Professor and Dean

School of Public Policy and Management

Tsinghua University

Beijing, China

xue.lan@tsinghua.edu.cn

The idea of China winning a Nobel Prize in science may seem like a stretch to many who understand the critical success factors that drive world-class research at the scientific frontier. Although new reforms in the science and technology (S&T) sector have been introduced since September 2012, the Chinese R&D system continues to be beset by many deep-seated organizational and management issues that need to be overcome if real progress is to be possible. Nonetheless, Junbo Yu’s article reminds us that sometimes there is more to the scientific endeavor than just the work of a select number of scientists toiling away in some well-equipped laboratory.

If we take into account the full array of drivers underlying China’s desire to have a native son win one of these prestigious prizes, we must place national will and determination at the top of the key factors that will determine Chinese success. Yu’s analysis helps remind us just how important national prestige and pride are as factors motivating the behavior of the People’s Republic of China’s leaders in terms of investment and the commitment of financial resources. At times, I wonder whether we here in the United States should pay a bit more deference to these normative imperatives. In a world where competition has become more intense and the asymmetries of the past are giving way to greater parity in many S&T fields, becoming excited about the idea of “winning” or forging a sense of “national purpose” may not be as distorted as perhaps suggested in the article. Too many Americans take for granted the nation’s continued dominance in scientific and technological affairs, when all of the signals are pointing in the opposite direction. In sports, we applaud the team that is able to muster the team spirit and determination to carve out a key victory. Why not in S&T?

That said, where the Chinese leadership may have gone astray in its somewhat overheated enthusiasm for securing a Chinese Nobel Prize is its failure to recognize that globalization of the innovation process has made the so-called “scientific Lone Ranger” an obsolete idea. Most innovation efforts today are both transnational and collaborative in nature. China’s future success in terms of S&T advancement will be just as dependent on China’s ability to become a more collaborative nation as it will on its own home-grown efforts.

Certainly, strengthening indigenous innovation in China is an appropriate national objective, but as the landscape of global innovation continues to shift away from individual nation-states and more in the direction of cross-border, cross-functional networks of R&D cooperation, the path to the Nobel Prize for China may be in a different direction than China seems to have chosen. Remaining highly globally engaged and firmly embedded in the norms and values that drive successful collaborative outcomes will prove to be a faster path to the Nobel Prize for Chinese scientists than will working largely from a narrow national perspective. And it also may be the best path for raising the stature and enhancing the credibility of the current regime on the international stage.

DENIS SIMON

Senior Adviser for China & Global Affairs

Foundation Professor of Contemporary

Chinese Affairs

Arizona State University

Tempe, Arizona

denis.simon@asu.edu

Junbo Yu raises a number of interesting, but complex, questions about the current state of science, and science policy, in China. As a reflection of a broad cultural nationalism, many Chinese see the quest for Nobel Prizes in science and medicine as a worthy major national project. For a regime seeking to enhance legitimacy through appeals to nationalism, the use of policy tools by the Party/state to promote this quest is understandable. Although understandable, it may also be misguided. China has many bright, productive scientists who, in spite of the problems of China’s research culture noted by Yu, are capable of Nobel-quality work. They will be recognized with prizes sooner or later, but this will result from the qualities of mind and habit of individual researchers, not national strategy.

The focus on Nobel Prizes detracts from broader questions about scientific development in 21st century China involving tensions between principles of scientific universalism and the social and cultural “shaping” of science and technology in the Chinese setting. The rapid enhancement of China’s scientific and technological capabilities in recent years has occurred in a context where many of the internationally accepted norms of scientific practice have not always been observed. Nevertheless, through international benchmarking, serious science planning, centralized resource mobilization, the abundance of scientific labor available for research services, and other factors, much progress has been made by following a distinctive “Chinese way” of scientific and technological development. The sustainability of this Chinese way, however, is now at issue, as is its normative power for others.

Over the past three decades, China has faced a challenge of ensuring that policy and institutional design are kept in phase with a rapidly changing innovation system. Overall, policy adjustments and institutional innovations have been quite successful in allowing China to pass through a series of catch-up stages. However, the challenge of moving beyond catch-up now looms large, especially with regard to the development of policies and institutions to support world class basic research, as Yu suggests. Misapprehension in the minds of political leaders and bureaucrats about the nature of research and innovation in the 21st century may also add to the challenge. The common conflation of “science” and “technology” in policy discourse, as seen in the Chinese term keji (best translated as “scitech”), is indicative. So too is the belief that scientific and technological development remains in essence a national project, mainly serving national political needs, including ultimately national pride and Party legitimacy, as Yu points out.


ALICE AYCOCK, Twister 12 feet (Park Avenue Paper Chase), Aluminum, 12′ high × 12′ diameter, Unique edition, 2014. The sculpture is currently installed at 66th Street on Park Avenue.

In 2006, China launched its 15-year “Medium to Long-Term Plan for Scientific and Technological Development” (MLP). Over the past year, the Ministry of Science and Technology has been conducting an extensive midterm evaluation of the Plan. At the same time, as recognized in the ambitious reform agenda of the new Xi Jinping government, the need for significant reforms in the nation’s innovation system, largely overlooked in 2006, has become more evident. There is thus a certain disconnect between the significant resource commitments entailed in the launching of the ambitious MLP and the reality that many of the institutions required for the successful implementation of the plan may not be suitable to the task. The fact that many of the policy assumptions about the role of government in the innovation system that prevailed in 2006 seemingly are not shared by the current government suggests that the politics of Chinese science involve much more than the Nobel Prize quest.

RICHARD P. (PETE) SUTTMEIER

Professor of Political Science, Emeritus

University of Oregon

Eugene, Oregon

petesutt@uoregon.edu

Although it is intriguing in linking the production of a homegrown Nobel science laureate to the legitimacy of the Chinese Communist Party, Junbo Yu’s piece just recasts what I indicated 10 years ago. In a paper entitled “Chinese Science and the ‘Nobel Prize Complex’,” published in Minerva in 2004, I argued that China’s enthusiasm for a Nobel Prize in science since the turn of the century reflects the motivations of China’s political as well as scientific leadership. But “various measures have failed to bring home those who are of the calibre needed to win the Nobel Prize. Yet, unless this happens, it will be a serious blow to China’s political leadership. …So to win a ‘home-grown’ Nobel Prize becomes a face-saving gesture.” “This Nobel-driven enthusiasm has also become part of China’s resurgent nationalism, as with winning the right to host the Olympics,” an analogy also alluded to by Yu.

In a follow-up, “The Universal Values of Science and China’s Nobel Prize Pursuit,” forthcoming again in Minerva, I point out that in China, “science, including the pursuit of the Nobel Prize, is more a pragmatic means to achieve the ends of the political leadership—the national pride in this case—than an institution laden with values that govern its practices.”

As we know, in rewarding those who confer the “greatest benefit on mankind,” the Nobel Prize in science embodies an appreciation and celebration of not merely breakthroughs, discoveries, and creativity, but a universal set of values that are shared and practiced by scientists regardless of nationality or culture.

These core values of truth-seeking, integrity, intellectual curiosity, the challenging of authority, and above all, freedom of inquiry are shared by scientists all over the world. It is recognition of these values that could lead to the findings that may one day land their finders a Nobel Prize.

China’s embrace of science dates back only to the May Fourth Demonstrations in 1919, when scholars, disillusioned with the direction of the new Chinese republic after the fall of the Qing Dynasty, called for a move away from traditional Chinese culture to Western ideals; or as they termed it, a rejection of Mr. Confucius and the acceptance of Mr. Science and Mr. Democracy.

However, these concepts of science and democracy differed markedly from those advocated in the West and were used primarily as vehicles to attack Confucianism. The science championed during the May Fourth movement was celebrated not for its Enlightenment values but for its pragmatism, its usefulness.

Francis Bacon’s maxim that “knowledge is power” ran right through Mao Zedong’s view of science after the founding of the People’s Republic in 1949. Science and technology were considered integral components of nation-building; leading academics contributed their knowledge for the sole purpose of modernizing industry, agriculture, and national defense.

The notion of saving the nation through science during the Nationalist regime has translated into current Communist government policies of “revitalizing the nation with science, technology, and education” and “strengthening the nation through talent.” A recent report by the innovation-promotion organization Nesta characterized China as “an absorptive state,” adding practical value to existing foreign technologies rather than creating new technologies of its own.

This materialistic emphasis reflects the use of science as a means to a political end to make China powerful and prosperous. Rather than arbitrarily picking possible Nobel Prize winners, the Chinese leadership would do well to apply the core values of science to the nurturing of its next generation of scientists. Only when it abandons cold-blooded pragmatism for a value-driven approach to science can it hope to win a coveted Nobel Prize and ascend to real superpower status.

Also, winning a Nobel Prize is completely different from winning a gold medal at the Olympics. Until the creation of an environment conducive to first-rate research and nurturing talent, which cannot be achieved through top-down planning, mobilization, and concentration of resources (the hallmarks of China’s state-sponsored sports program), this Nobel pursuit will continue to vex the Chinese for many years to come.

CONG CAO

Associate Professor and Reader

School of Contemporary Chinese Studies

University of Nottingham

Nottingham, UK

cong.cao@nottingham.ac.uk

The New Visible Hand: Understanding Today’s R&D Management

CRAIG BOARDMAN

Recent decades have seen dramatic if not revolutionary changes in the organization and management of knowledge creation and technology development in U.S. universities. Market demands and public values conjointly influence and in many cases supersede the disciplinary interests of academic researchers in guiding scientific and technological inquiry toward social and economic ends. The nation is developing new institutions to convene diverse sets of actors, including scientists and engineers from different disciplines, institutions, and economic sectors, to focus attention and resources on scientific and technological innovation (STI). These new institutions have materialized in a number of organizational forms, including but not limited to national technology initiatives, science parks, technology incubators, cooperative research centers, proof-of-concept centers, innovation networks, and any number of what the innovation ecosystems literature refers to generically (and in most cases secondarily) as “bridging institutions.”

The proliferation of bridging institutions on U.S. campuses has been met with a somewhat bifurcated response. Critics worry that this new purpose will detract from the educational mission of universities; advocates see an opportunity for universities to make an additional contribution to the nation’s well-being. The evidence so far indicates that bridging institutions on U.S. campuses have not diminished either the educational or knowledge-creation activities. Bridging institutions on U.S. campuses complement rather than substitute for traditional university missions and over time may prove critical pivot points in the U.S. innovation ecosystem.

The growth of bridging institutions is a manifestation of two larger societal trends. The first is that the source of U.S. global competitive advantage in STI is moving away from a simple superiority in certain types of R&D to a need to effectively and strategically manage the output of R&D and integrate it more rapidly into the economy through bridging institutions. The second is the need to move beyond the perennial research policy question of whether or not the STI process is linear, to tackle the more complex problem of how to manage the interweaving of all aspects of STI.

The visible hand

This article’s title harkens back to Alfred Chandler’s landmark book The Visible Hand: The Managerial Revolution in U.S. Business. In that book, Chandler makes the case that the proliferation of the modern multiunit business enterprise was an institutional response to the rapid pace of technological innovation that came with industrialization and increased consumer demand. For Chandler, what was revolutionary was the emergence of management as a key factor of production for U.S. businesses.

Similarly, the proliferation of bridging institutions on U.S. campuses has been an institutional response to the increasing complexity of STI and also to public demand for problem-focused R&D with tangible returns on public research investments. As a result, U.S. departments and agencies supporting intramural and extramural R&D are now very much focused on establishing bridging institutions—and in the case of proof-of-concept centers, bridging institutions for bridging institutions—involving experts from numerous scientific and engineering disciplines from academia, business, and government.


To name just a few, the National Science Foundation (NSF) has created multiple cooperative research center programs and recently added the I-Corps program for establishing regional networks for STI. The Department of Energy (DOE) has its Energy Frontier Research Centers and Energy Innovation Hubs. The National Institutes of Health (NIH) have Translational Research Centers and also what they refer to as “team science.” The Obama administration has its Institutes for Manufacturing Innovation. But this is only a tiny sample. The Research Centers Directory counts more than 8,000 organized research units for STI in the United States and Canada, and over 16,000 worldwide. This total includes many traditional departmental labs, where management is not as critical a factor, but a very large number are bridging institutions created to address management concerns.

The analogy between Chandler’s observations about U.S. business practices and the proliferation of bridging institutions on U.S. campuses is not perfect. Whereas Chandler’s emphasis on management in business had more to do with the efficient production and distribution of routine and standard consumer goods and services, the proliferation of bridging institutions on U.S. campuses has had more to do with effective and commercially viable (versus efficient) knowledge creation and technology development, which cannot be routinized by way of management in the same way as can, say, automobile manufacturing.

Nevertheless, management—albeit a less formal kind of management than that Chandler examines—is now undeniably a key factor of production for STI on U.S. campuses. Many nations are catching up with the United States in the percentage of their gross domestic product devoted to R&D, so that R&D alone will not be sufficient to sustain U.S. leadership. The promotion of organizational cultures enabling bridging institutions to strategically manage social network ties among diverse sets of scientists and engineers toward coordinated problem-solving is what will help the United States maintain global competitive advantage in STI.

Historically, U.S. research policy has focused on two things with regard to universities to help ensure the U.S. status as the global STI hegemon. First, it has made sure that U.S. universities have had all the usual “factors of production” for STI, e.g., funding, technology, critical materials, infrastructure, and the best and the brightest in terms of human capital. Second, U.S. research policy has encouraged university R&D in applied fields by, for example, allowing universities to obtain intellectual property rights emerging from publicly funded R&D. In the past, then, an underlying assumption of U.S. research policy was that universities are capable of and willing to conduct problem-focused R&D and to bring the fruits of that research to market if given the funds and capital to do the R&D, as well as ownership of any commercial outputs.

But U.S. research policy regarding universities has been imitated abroad, and for this reason, among others, many countries have closed the STI gap with the United States, at least in particular technology areas. One need only read the National Academies’ Gathering Storm volumes to learn that the United States is now on a more level playing field with China, Japan, South Korea, and the European Union in terms of R&D spending in universities, academic publications and publication quality, academic patents and patent quality, doctorate production, and market share in particular technology areas. Quibbles with the evidentiary bases of the Gathering Storm volumes notwithstanding, there is little arguing that the United States faces increased competition in STI from abroad.

Although the usual factors of production for STI and property rights should remain components of U.S. research policy, these are no longer adequate to sustain U.S. competitive advantage. Current and future U.S. research policy for universities must emphasize factors of production for STI that are less easily imitated, namely organizational cultures in bridging institutions that are conducive to coordinated problem-solving. An underlying assumption of U.S. research policy should be that universities for the most part cannot or will not go it alone commercially even if given the funds, capital, and property rights to do so (there are exceptions, of course), but rather that they are more likely to navigate the “valley of death” in conjunction with businesses, government, and other universities.

Encouraging cross-sector, inter-institutional R&D in the national interest must become a major component of U.S. research policy for universities, and bridging institutions must play a central role. Anecdotal reports suggest that bridging institutions differ widely in their effectiveness, but one of the challenges facing the nation is to better understand the role that management plays in the success of bridging institutions. Calling something a bridging institution does not guarantee that it will make a significant contribution to meeting STI goals.

The edge of the future

The difference between the historic factors of production for STI discussed above and organizational cultures in bridging institutions is that the former are static, simple, and easy to imitate, whereas the latter are dynamic, complex, and difficult to observe, much less copy. This is no original insight. The business literature made this case originally in the 1980s and 1990s. A firm’s intangible assets, its organizational culture and the tacit norms and expectations for organizational behavior that this entails, can be and oftentimes are a source of competitive advantage because they are difficult to measure and thus hard for competing firms to emulate.

University leaders and scholars have recognized that bridging institutions on U.S. campuses can be challenging to organize and manage and that the ingredients for an effective organizational culture are still a mystery. There is probably as much literature on the management challenges of bridging institutions as there is on their performance. Whereas the management of university faculty in traditional academic departments is commonly referred to as “herding cats,” coordinating faculty from different disciplines and universities, over whom bridging institutions have no line authority, to work together and also to cooperate with industry and government is akin to herding feral cats.

But beyond this we know next to nothing about the organizational cultures of bridging institutions. The cooperative research centers and other types of bridging institutions established by the NSF, DOE, NIH, and other agencies are most often evaluated for their knowledge and technology outcomes and, increasingly, for their social and economic impact, but seldom have research and evaluation focused on what’s inside the black box. All we know for certain is that some bridging institutions on U.S. campuses are wildly successful and others are not, with little systematic explanation as to why.

Developing an understanding of organizational cultures in bridging institutions is important not just because these can be relatively tacit and difficult to imitate, but additionally because other, more formal aspects of the management of bridging institutions are less manipulable. Unlike Chandler’s emphasis on formal structures and authorities in U.S. businesses, bridging institutions do not have many layers of hierarchy, nor do they have centralized decisionmaking. As organizations focused on new knowledge creation and technology development, bridging institutions typically are flat and decentralized, and therefore vary much more culturally and informally than structurally.

There are frameworks for deducing the organizational cultures of bridging institutions. One is the competing values framework developed by Kim Cameron and Robert Quinn. Another is organizational economics’ emphasis on informal mechanisms such as resource interdependencies and goal congruence. A third framework is the organizational capital approach from strategic human resources management. These frameworks have been applied in the business literature to explore the differences between Silicon Valley and Route 128 microcomputer companies, and they can be adapted for use in comparing the less formal structures of bridging institutions.

What’s more, U.S. research policy must take into account how organizational cultures in bridging institutions interact with “best practices.” We know that in some instances, specific formalized practices are associated with successful STI in bridging institutions, but in many other cases, these same practices are followed in unsuccessful institutions. For bridging institutions, best practices may be best only in combination with particular types of organizational culture.

Inside the black box

The overarching question that research policy scholars and practitioners should address is what organizational cultures lead to different types of STI in different types of bridging institutions. Most research on bridging institutions emphasizes management challenges and best practices, and the literature on organizational culture is limited. We need to address in a systematic fashion how organizational cultures operate to bring diverse sets of scientists and engineers together for coordinated problem-solving.

Specifically, research policy scholars and practitioners should address variation across the “clan” type of organizational culture in bridging institutions. To the general organizational scholar, all bridging institutions have the same culture: decentralized and nonhierarchical. But to research policy scholars and practitioners, there are important differences in the organization and management of what essentially amounts to collectives of highly educated volunteers. How is it that some bridging institutions elicit tremendous contributions from academic faculty and industry researchers, whereas others do not? What aspects of bridging institutions enable academic researchers to work with private companies, spin off their own companies, or patent?

These questions point to related questions about different types of bridging institutions. There are research centers emphasizing university/industry interactions for new and existing industries, university technology incubators and proof-of-concept centers focused on business model development and venture capital, regional network nodes for STI, and university science parks co-locating startups and university faculty. Which of these bridging institutions are most appropriate for which sorts of STI? When should bridging institutions be interdisciplinary, cross-sectoral, or both? Are the different types of bridging institutions complements or substitutes for navigating the “valley of death”?

Research policy scholars and practitioners have their work cut out for them. There are no general data tracking cultural heterogeneity across bridging institutions. What data do exist, such as the Research Centers Directory compiled by Gale Research/Cengage Learning, track only the most basic organizational features. Other approaches such as the science of team science hold more promise, though much of this work emphasizes best practices and does not address organizational culture systematically. Research policy scholars and practitioners must develop new data sets that track the intangible cultural aspects of bridging institutions and connect these data to publicly available outcomes data for new knowledge creation, technology development, and workforce development.

Developing systematic understanding of bridging institutions is fundamental to U.S. competitiveness in STI. It is fundamental because bridging institutions are where the rubber hits the road in the U.S. innovation ecosystem. Bridging institutions provide forums for our nation’s top research universities, firms, and government agencies to exchange ideas, engage in coordinated problem solving, and in turn create new knowledge and develop new technologies addressing social and economic problems.

Developing systematic understanding of bridging institutions will be challenging because they are similar on the surface but different in important ways that are difficult to detect. During the 1980s, scholars identified striking differences in the organizational cultures of Silicon Valley and Route 128 microcomputing companies. Today, most bridging institutions follow a similar decentralized model for decisionmaking, with few formalized structures and authorities, yet they can differ widely in performance.

The most important variation across bridging institutions is to be found in the intangible, difficult-to-imitate qualities that allow for (or preclude) the coordination of diverse sets of scientists and engineers from across disciplines, institutions, and sectors. But this does not mean that scholars and practitioners should ignore the structural aspects of bridging institutions. In some cases, bridging institutions may exercise line authority over academic faculty (such as faculty with joint appointments), and these organizations may (or may not) outperform similar bridging institutions that do not exercise line authority.


Craig Boardman is associate director of the Battelle Center for Science and Technology Policy in the John Glenn School of Public Affairs at Ohio State University.


Eagle

GREGORY BENFORD

The long, fat freighter glided into the harbor at late morning—not the best time for a woman who had to keep out of sight.

The sun slowly slid up the sky as tugboats drew them into Anchorage. The tank ship, a big sectioned VLCC, was like an elephant ballerina on the stage of a slate-blue sea, attended by tiny dancing tugs.

Now off duty, Elinor watched the pilot bring them in past the Nikiski Narrows and slip into a long pier with gantries like skeletal arms snaking down, the big pump pipes attached. They were ready for the hydrogen sulfide to flow. The ground crew looked anxious, scurrying around, hooting and shouting. They were behind schedule.

Inside, she felt steady, ready to destroy all this evil stupidity.

She picked up her duffel bag, banged a hatch shut, and walked down to the shore desk. Pier teams in gasworkers’ masks were hooking up pumps to offload and even the faint rotten egg stink of the hydrogen sulfide made her hold her breath. The Bursar checked her out, reminding her to be back within 28 hours. She nodded respectfully, and her maritime ID worked at the gangplank checkpoint without a second glance. The burly guy there said something about hitting the bars and she wrinkled her nose. “For breakfast?”

“I seen it, ma’am,” he said, and winked.

She ignored the other crew, solid merchant marine types. She had only used her old engineer’s rating to get on this freighter, not to strike up the chords of the Seamen’s Association song.

She hit the pier and boarded the shuttle to town, jostling onto the bus, anonymous among boat crews eager to use every second of shore time. Just as she’d thought, this was proving the best way to get in under the security perimeter. No airline manifest, no Homeland Security ID checks. In the unloading, nobody noticed her, with her watch cap pulled down and baggy jeans. No easy way to even tell she was a woman.

Now to find a suitably dingy hotel. She avoided central Anchorage and kept to the shoreline, where small hotels from the TwenCen still did business. At a likely one on Sixth Avenue, the desk clerk told her there were no rooms left.

“With all the commotion at Elmendorf, ever’ damn billet in town’s packed,” the grizzled guy behind the counter said.

She looked out the dirty window, pointed. “What’s that?”

“Aw, that bus? Well, we’re gettin’ that ready to rent, but—”

“How about half price?”

“You don’t want to be sleeping in that—”

“Let me have it,” she said, slapping down a $50 bill.

“Uh, well.” He peered at her. “The owner said—”

“Show it to me.”

She got him down to $25 when she saw that it really was a “retired bus.” Something about it she liked, and no cops would think of looking in the faded yellow wreck. It had obviously fallen on hard times after it had served the school system.

It held a jumble of furniture, apparently to give it a vaguely homelike air. The driver’s seat and all else were gone, leaving holes in the floor. The rest was an odd mix of haste and taste. A walnut Victorian love seat with a medallion backrest held the center, along with a lumpy bed. Sagging upholstery and frayed cloth, cracked leather, worn wood, chipped veneer, a radio with the knobs askew, a patched-in shower closet, and an enamel basin toilet illuminated with a warped lamp completed the sad tableau. A generator chugged outside as a clunky gas heater wheezed. Authentic, in its way.

Restful, too. She pulled on latex gloves the moment the clerk left, and took a nap, knowing she would not soon sleep again. No tension, no doubts. She was asleep in minutes.

Time for the recon. At the rental place she’d booked, she picked up the wastefully big Ford SUV. A hybrid, though. No problem with the credit card, which looked fine at first use, then erased its traces with a virus that would propagate in the rental system, snipping away all records.

The drive north took her past the air base but she didn’t slow down, just blended in with late afternoon traffic. Signs along the highway now had to warn about polar bears, recent migrants to the land and even more dangerous than the massive local browns. The terrain was just as she had memorized it on Google Earth, the likely shooting spots isolated, thickly wooded. The Internet maps got the seacoast wrong, though. Two Inuit villages had recently sprung up along the shore within Elmendorf, as one of their people, posing as a fisherman, had observed and photographed. Studying the pictures, she’d thought they looked slightly ramshackle, temporary, hastily thrown up in the exodus from the tundra regions. No need to last, as the Inuit planned to return north as soon as the Arctic cooled. The makeshift living arrangements had been part of the deal with the Arctic Council for the experiments to make that possible. But access to post schools, hospitals, and the PX couldn’t make this home to the Inuit, couldn’t replace their “beautiful land,” as the word used by the Labrador peoples named it.

So, too many potential witnesses there. The easy shoot from the coast was out. She drove on. The enterprising Inuit had a brand new diner set up along Glenn Highway, offering breakfast anytime to draw odd-houred Elmendorf workers, and she stopped for coffee. Dark men in jackets and jeans ate solemnly in the booths, not saying much. A young family sat across from her, the father trying to eat while bouncing his small wiggly daughter on one knee, the mother spooning eggs into a gleefully uncooperative toddler while fielding endless questions from her bespectacled school-age son. The little girl said something to make her father laugh, and he dropped a quick kiss on her shining hair. She cuddled in, pleased with herself, clinging tight as a limpet.

They looked harried but happy, close-knit and complete. Elinor flashed her smile, tried striking up conversations with the tired, taciturn workers, but learned nothing useful from any of them.

Going back into town, she studied the crews working on planes lined up at Elmendorf. Security was heavy on roads leading into the base so she stayed on Glenn. She parked the Ford as near the railroad as she could and left it. Nobody seemed to notice.

At seven, the sun still high overhead, she came down the school bus steps, a new creature. She swayed away in a long-skirted yellow dress with orange Mondrian lines, her shoes casual flats, carrying a small orange handbag. Brushed auburn hair, artful makeup, even long artificial eyelashes. Bait.

She walked through the scruffy district off K Street, observing as carefully as on her morning reconnaissance. The second bar was the right one. She looked over her competition, reflecting that for some women, there should be a weight limit for the purchase of spandex. Three guys with gray hair were trading lies in a booth and checking her out. The noisiest of them, Ted, got up to ask her if she wanted a drink. Of course she did, though she was thrown off by his genial warning, “Lady, you don’t look like you’re carryin’.”

Rattled—had her mask of harmless approachability slipped?—she made herself smile, and ask, “Should I be?”

“Last week a brown bear got shot not two blocks from here, goin’ through trash. The polars are bigger, meat-eaters, chase the young males out of their usual areas, so they’re gettin’ hungry, and mean. Came at a cop, so the guy had to shoot it. It sent him to the ICU, even after he put four rounds in it.” Not the usual pickup line, but she had them talking about themselves. Soon, she had most of what she needed to know about SkyShield.

“We were all retired refuel jockeys,” Ted said. “Spent most of 30 years flyin’ up big tankers full of jet fuel, so fighters and B-52s could keep flyin’, not have to touch down.”

Elinor probed, “So now you fly—”

“Same aircraft, most of ’em 40 years old—KC Stratotankers, or Extenders—they extend flight times, y’see.”

His buddy added, “The latest replacements were delivered just last year, so the crates we’ll take up are obsolete. Still plenty good enough to spray this new stuff, though.”

“I heard it was poison,” she said.

“So’s jet fuel,” the quietest one said. “But it’s cheap, and they needed something ready to go now, not that dust-scatter idea that’s still on the drawing board.”

Ted snorted. “I wish they’d gone with dustin’—even the traces you smell when they tank up stink like rottin’ eggs. More than a whiff, though, and you’re already dead. God, I’m sure glad I’m not a tank tech.”

“It all starts tomorrow?” Elinor asked brightly.

“Right, 10 KCs takin’ off per day, returnin’ the next from Russia. Lots of big-ticket work for retired duffers like us.”

“Who’re they?” she asked, gesturing to the next table. She had overheard people discussing nozzles and spray rates. “Expert crew,” Ted said. “They’ll ride along to do the measurements of cloud formation behind us, check local conditions like humidity and such.”

She eyed them. All very earnest, some a tad professorial. They were about to go out on an exciting experiment, ready to save the planet, and the talk was fast, eyes shining, drinks all around.

“Got to freshen up, boys.” She got up and walked by the tables, taking three quick shots in passing of the whole lot of them, under cover of rummaging through her purse. Then she walked around a corner toward the rest rooms, and her dress snagged on a nail in the wooden wall. She tried to tug it loose, but if she turned to reach the snag, it would rip the dress further. As she fished back for it with her right hand, a voice said, “Let me get that for you.”

Not a guy, but one of the women from the tech table. She wore a flattering blouse with comfortable, well-fitted jeans, and knelt to unhook the dress from the nail head.

“Thanks,” Elinor said, and the woman just shrugged, with a lopsided grin.

“Girls should stick together here,” the woman said. “The guys can be a little rough.”

“Seem so.”

“Been here long? You could join our group—always room for another woman, up here! I can give you some tips, introduce you to some sweet, if geeky, guys.”

“No, I… I don’t need your help.” Elinor ducked into the women’s room.

She thought about this unexpected, unwanted friendliness while sitting in the stall, and put it behind her. Then she went back into the game, fishing for information in a way she hoped wasn’t too obvious. Everybody likes to talk about their work, and when she got back to the pilots’ table, the booze worked in her favor. She found out some incidental information, probably not vital, but it was always good to know as much as you could. They already called the redesigned planes “Scatter Ships” and their affection for the lumbering, ungainly aircraft was reflected in banter about unimportant engineering details and tales of long-ago combat support missions.

One of the big guys with a wide grin sliding toward a leer was buying her a second martini when her cell rang.

“Albatross okay. Our party starts in 30 minutes,” said a rough voice. “You bring the beer.”

She didn’t answer, just muttered, “Damned salesbots…,” and disconnected.

She told the guy she had to “tinkle,” which made him laugh. He was a pilot just out of the Air Force, and she would have gone for him in some other world than this one. She found the back exit—bars like this always had one—and was blocks away before he would even begin to wonder.


Anchorage slid past unnoticed as she hurried through the broad deserted streets, planning. Back to the bus, out of costume, into all-weather gear, boots, grab some trail mix and an already-filled backpack. Her thermos of coffee she wore on her hip.

She cut across Elderberry Park, hurrying to the spot where her briefing said the trains paused before running into the depot. The port and rail lines snugged up against Elmendorf Air Force Base, convenient for them, and for her.

The freight train was a long clanking string and she stood in the chill gathering darkness, wondering how she would know where they were. The passing autorack cars had heavy shutters, like big steel Venetian blinds, and she could not see how anybody got into them.

But as the line clanked and squealed and slowed, a quick laser flash caught her, winked three times. She ran toward it, hauling up onto a slim platform at the foot of a steel sheet.

It tilted outward as she scrambled aboard, thudding into her thigh, nearly knocking her off. She ducked in and saw by the distant streetlights the vague outlines of luxury cars. A Lincoln sedan door swung open. Its interior light came on and she saw two men in the front seats. She got in the back and closed the door. Utter dark.

“It clear out there?” the cell phone voice asked from the driver’s seat.

“Yeah. What—”

“Let’s unload. You got the SUV?”

“Waiting on the nearest street.”

“How far?”

“Hundred meters.”

The man jigged his door open, glanced back at her. “We can make it in one trip if you can carry 20 kilos.”

“Sure,” though she had to pause to quickly do the arithmetic, 44 pounds. She had backpacked about that much for weeks in the Sierras. “Yeah, sure.”

The missile gear was in the trunks of three other sedans, at the far end of the autorack. As she climbed out of the car the men had inhabited, she saw the debris of their trip—food containers in the back seats, assorted junk, the waste from days spent coming up from Seattle. With a few gallons of gas in each car, so they could be driven on and off, these two had kept warm running the heater. If that ran dry, they could switch to another.

As she understood it, this degree of mess was acceptable to the railroads and car dealers. If the railroad tried to wrap up the autoracked cars to keep them out, the bums who rode the rails would smash windshields to get in, then shit in the cars, knife the upholstery. So they had struck an equilibrium. That compromise inadvertently produced a good way to ship weapons right by Homeland Security. She wondered what Homeland types would make of a Dart, anyway. Could they even tell what it was?

The rough-voiced man turned and clicked on a helmet lamp. “I’m Bruckner. This is Gene.”

Nods. “I’m Elinor.” Nods, smiles. Cut to the chase. “I know their flight schedule.”

Bruckner smiled thinly. “Let’s get this done.”

Transporting the parts in via autoracked cars was her idea. Bringing them in by small plane was the original plan, but Homeland might nab them at the airport. She was proud of this slick workaround.

“Did railroad inspectors get any of you?” Elinor asked.

Gene said, “Nope. Our two extras dropped off south of here. They’ll fly back out.”

With the auto freights, the railroad police looked for tramps sleeping in the seats. No one searched the trunks. So they had put a man on each autorack, and if some got caught, they could distract from the gear. The men would get a fine, be hauled off for a night in jail, and the shipment would go on.

“Luck is with us,” Elinor said. Bruckner looked at her, looked more closely, opened his mouth, but said nothing.

They both seemed jumpy by the helmet light. “How’d you guys live this way?” she asked, to get them relaxed.

“Pretty poorly,” Gene said. “We had to shit in bags.”

She could faintly smell the stench. “More than I need to know.”

Using Bruckner’s helmet light they hauled the assemblies out, neatly secured in backpacks. Bruckner moved with strong, graceless efficiency. Gene too. She hoisted hers on, grunting.

The freight started up, lurching forward. “Damn!” Gene said.

They hurried. When they opened the steel flap, she hesitated, jumped, stumbled on the gravel, but caught herself. Nobody within view in the velvet cloaking dusk.


They walked quietly, keeping steady through the shadows. It got cold fast, even in late May. At the Ford they put the gear in the back and got in. She drove them to the old school bus. Nobody talked.

She stopped them at the steps to the bus. “Here, put these gloves on.”

They grumbled but they did it. Inside, heater turned to high, Bruckner asked if she had anything to drink. She offered bottles of vitamin water but he waved it away. “Any booze?”

Gene said, “Cut that out.”

The two men eyed each other and Elinor thought about how they’d been days in those cars and decided to let it go. Not that she had any liquor, anyway.

Bruckner was lean, rawboned, and self-contained, with minimal movements and a constant, steady gaze in his expressionless face. “I called the pickup boat. They’ll be waiting offshore near Eagle Bay by eight.”

Elinor nodded. “First flight is 9:00 a.m. It’ll head due north, so we’ll see it from the hills above Eagle Bay.”

Gene said, “So we get into position… when?”

“Tonight, just after dawn.”

Bruckner said, “I do the shoot.”

“And we handle perimeter and setup, yes.”

“How much trouble will we have with the Indians?”

Elinor blinked. “The Inuit settlement is down by the seashore. They shouldn’t know what’s up.”

Bruckner frowned. “You sure?”

“That’s what it looks like. Can’t exactly go there and ask, can we?”

Bruckner sniffed, scowled, looked around the bus. “That’s the trouble with this nickel-and-dime operation. No real security.”

Elinor said, “You want security, buy a bond.”

Bruckner’s head jerked around. “Whassat mean?”

She sat back, took her time. “We can’t be sure the DARPA people haven’t done some serious public relations work with the Natives. Besides, they’re probably all in favor of SkyShield anyway—their entire way of life is melting away with the sea ice. And by the way, they’re not ‘Indians,’ they’re ‘Inuit.’”

“You seem pretty damn sure of yourself.”

“People say it’s one of my best features.”

Bruckner squinted and said, “You’re—”

“A maritime engineering officer. That’s how I got here and that’s how I’m going out.”

“You’re not going with us?”

“Nope, I go back out on my ship. I have first engineering watch tomorrow, 0100 hours.” She gave him a hard, flat look. “We go up the inlet, past Birchwood Airport. I get dropped off, steal a car, head south to Anchorage, while you get on the fishing boat, they work you out to the headlands. The bigger ship comes in, picks you up. You’re clear and away.”

Bruckner shook his head. “I thought we’d—”

“Look, there’s a budget and—”

“We’ve been holed up in those damn cars for—”

“A week, I know. Plans change.”

“I don’t like changes.”

“Things change,” Elinor said, trying to make it mild.

But Bruckner bristled. “I don’t like you cutting out, leaving us—”

“I’m in charge, remember.” She thought, He travels the fastest who travels alone.

“I thought we were all in this together.”

She nodded. “We are. But Command made me responsible, since this was my idea.”

His mouth twisted. “I’m the shooter, I—”

“Because I got you into the Ecuador training. Me and Gene, we depend on you.” Calm, level voice. No need to provoke guys like this; they did it enough on their own.

Silence. She could see him take out his pride, look at it, and decide to wait a while to even the score.

Bruckner said, “I gotta stretch my legs,” and clumped down the steps and out of the bus.

Elinor didn’t like the team splitting and thought of going after him. But she knew why Bruckner was antsy—too much energy with no outlet. She decided just to let him go.

To Gene she said, “You’ve known him longer. He’s been in charge of operations like this before?”

Gene thought. “There’ve been no operations like this.”

“Smaller jobs than this?”

“Plenty.”

She raised her eyebrows. “Surprising.”

“Why?”

“He walks around using that mouth, while he’s working?”

Gene chuckled. “ ’Fraid so. He gets the job done though.”

“Still surprising.”

“That he’s the shooter, or—”

“That he still has all his teeth.”

While Gene showered, she considered. Elinor figured Bruckner for an injustice collector, the passive-aggressive loser type. But he had risen quickly in The LifeWorkers, as they called themselves, brought into the inner cadre that had formulated this plan. Probably because he was willing to cross the line, use violence in the cause of justice. Logically, she should sympathize with him, because he was a lot like her.

But sympathy and liking didn’t work that way.

There were people who soon would surely yearn to read her obituary, and Bruckner’s too, no doubt. He and she were the cutting edge of environmental activism, and these were desperate times indeed. Sometimes you had to cross the line, and be sure about it.

Elinor had made a lot of hard choices. She knew she wouldn’t last long on the scalpel’s edge of active environmental justice, and that was fine by her. Her role would soon be to speak for the true cause. Her looks, her brains, her charm—she knew she’d been chosen for this mission, and the public one afterward, for these attributes, as much as for the plan she had devised. People listen, even to ugly messages, when the face of the messenger is pretty. And once they finished here, she would have to be heard.

She and Gene carefully unpacked the gear and started to assemble the Dart. The parts connected with a minimum of wiring and socket clasps, as foolproof as possible. They worked steadily, assembling the tube, the small recoil-less charge, snapping and clicking the connections.

Gene said, “The targeting antenna has a rechargeable battery, they tend to drain. I’ll top it up.”

She nodded, distracted by the intricacies of a process she had trained for a month ago. She set the guidance system. Tracking would first be infrared only, zeroing in on the target’s exhaust, but once in the air and nearing its goal, it would use multiple targeting modes—laser, IR, advanced visual recognition—to get maximal impact on the main body of the aircraft.

They got it assembled and stood back to regard the linear elegance of the Dart. It had a deadly, snakelike beauty, its shiny white skin tapered to a snub point.

“Pretty, yeah,” Gene said. “And way better than any Stinger. Next generation, smarter, near four times the range.”

She knew guys liked anything that could shoot, but to her it was just a tool. She nodded.

Gene caressed the lean body of the Dart, and smiled.

Bruckner came clumping up the bus stairs with a fixed smile on his face that looked like it had been delivered to the wrong address. He waved a lit cigarette. Elinor got up, forced herself to smile. “Glad you’re back, we—”

“Got some ’freshments,” he said, dangling some beers in their six-pack plastic cradle, and she realized he was drunk.

The smile fell from her face like a picture off a wall.

She had to get along with these two, but this was too much. She stepped forward, snatched the beer bottles and tossed them onto the Victorian love seat. “No more.”

Bruckner tensed and Gene sucked in a breath. Bruckner made a move to grab the beers and Elinor snatched his hand, twisted the thumb back, turned hard to ward off a blow from his other hand—and they froze, looking into each other’s eyes from a few centimeters away.

Silence.

Gene said, “She’s right, y’know.”

More silence.

Bruckner sniffed, backed away. “You don’t have to be rough.”

“I wasn’t.”

They looked at each other, let it go.

She figured each of them harbored a dim fantasy of coming to her in the brief hours of darkness. She slept in the lumpy bed and they made do with the furniture. Bruckner got the love seat—ironic victory—and Gene sprawled on a threadbare comforter.

Bruckner talked some but dozed off fast under booze, so she didn’t have to endure his testosterone-fueled patter. But he snored, which was worse.

The men napped and tossed and worried. No one bothered her, just as she wanted it. But she kept a small knife in her hand, in case. For her, sleep came easily.

After eating a cold breakfast, they set out before dawn, 2:30 a.m., Elinor driving. She had decided to wait till then because they could mingle with early morning Air Force workers driving toward the base. This far north, it started brightening by 3:30, and they’d be in full light before 5:00. Best not to stand out as they did their last reconnaissance. It was so cold she had to run the heater for five minutes to clear the windshield of ice. Scraping with her gloved hands did nothing.

The men had grumbled about leaving absolutely nothing behind. “No traces,” she said. She wiped down every surface, even though they’d worn medical gloves the whole time in the bus.

Gene didn’t ask why she stopped and got a gas can filled with gasoline, and she didn’t say. She noticed the wind was fairly strong and from the north, and smiled. “Good weather. Prediction’s holding up.”

Bruckner said sullenly, “Goddamn cold.”

“The KC Extenders will take off into the wind, head north.” Elinor judged the nearly cloud-free sky. “Just where we want them to be.”

They drove up a side street in Mountain View and parked overlooking the fish hatchery and golf course, so she could observe the big tank refuelers lined up at the loading site. She counted five KC-10 Extenders, freshly surplussed by the Air Force. Their big bellies reminded her of pregnant whales.

From their vantage point, they could see down to the temporarily expanded checkpoint, set up just outside the base. As foreseen, security was stringently tight this near the airfield—all drivers and passengers had to get out, be scanned, IDs checked against global records, briefcases and purses searched. K-9 units inspected car interiors and trunks. Explosives-detecting robots rolled under the vehicles.

She fished out binoculars and focused on the people waiting to be cleared. Some carried laptops and backpacks and she guessed they were the scientists flying with the dispersal teams. Their body language was clear. Even this early, they were jazzed, eager to go, excited as kids on a field trip. One of the pilots had mentioned there would be some sort of preflight ceremony, honoring the teams that had put all this together. The flight crews were studiedly nonchalant—this was an important, high-profile job, sure, but they couldn’t let their cool down in front of so many science nerds. She couldn’t see well enough to pick out Ted, or the friendly woman from the bar.

In a special treaty deal with the Arctic Council, they would fly from Elmendorf and arc over the North Pole, spreading hydrogen sulfide in their wakes. The tiny molecules of it would mate with water vapor in the stratospheric air, making sulfurics. Those larger, wobbly molecules reflected sunlight well—a fact learned from studying volcano eruptions back in the TwenCen. Spray megatons of hydrogen sulfide into the stratosphere, let water turn it into a sunlight-bouncing sheet—SkyShield—and they could cool the entire Arctic.

Or so the theory went. The Arctic Council had agreed to this series of large-scale experiments, run by the USA since they had the in-flight refuelers that could spread the tiny molecules to form the SkyShield. Small-scale experiments—opposed, of course, by many enviros—had seemed to work. Now came the big push, trying to reverse the retreat of sea ice and warming of the tundra.

Anchorage lay slightly farther north than Oslo, Helsinki, and Stockholm, but not as far north as Reykjavik or Murmansk. Flights from Anchorage to Murmansk would let them refuel and reload hydrogen sulfide at each end, then follow their paths back over the pole. Deploying hydrogen sulfide along their flight paths at 45,000 feet, they would spread a protective layer to reflect summer sunlight. In a few months, the sulfuric droplets would ease down into the lower atmosphere, mix with moist clouds, and come down as rain or snow, a minute, undetectable addition to the acidity already added by industrial pollutants. Experiment over.

The total mass delivered was far less than that from volcanoes like Pinatubo, which had cooled the whole planet in 1991–92. But volcanoes do messy work, belching most of their vomit into the lower atmosphere. This was to be a designer volcano, a thin skin of aerosols skating high across the stratosphere.

It might stop the loss of the remaining sea ice, the habitat of the polar bear. Only 10% of the vast original cooling sheets remained. Equally disruptive changes were beginning to occur in other parts of the world.

But geoengineered tinkerings would also be a further excuse to delay cutbacks in carbon dioxide emissions. People loved convenience, their air conditioning and winter heating and big lumbering SUVs. Humanity had already driven the air’s CO2 content to twice what it was before 1800, and with every developing country burning oil and coal as fast as they could extract them, only dire emergency could drive them to abstain. To do what was right.

The greatest threat to humanity arose not from terror, but error. Time to take the gloves off.

She put the binocs away and headed north. The city’s seacoast was mostly rimmed by treacherous mudflats, even after the sea kept rising. Still, there were coves and sandbars of great beauty.

Elinor drove off Glenn Highway to the west, onto progressively smaller, rougher roads, working their way backcountry by Bureau of Land Management roads to a sagging, long-unused access gate for loggers. Bolt cutters made quick work of the lock securing its rusty chain closure. After she pulled through, Gene carefully replaced the chain and linked it with an equally rusty padlock, brought for this purpose. Not even a thorough check would show it had been opened, till the next time BLM tried to unlock it.

They were now on Elmendorf, miles north of the airfield, far from the main base’s bustle and security precautions. Thousands of acres of mudflats, woods, lakes, and inlet shoreline lay almost untouched, used for military exercises and not much else. Nobody came here except for infrequent hardy bands of off-duty soldiers or pilots, hiking with maps red-marked UXO for “Unexploded Ordnance.” Lost live explosives, remnants of past field maneuvers, tended to discourage casual sightseers and trespassers, and the Inuit villagers wouldn’t be berry-picking till July and August. She consulted her satellite map, then took them on a side road, running up the coast. They passed above a cove of dark blue waters.

Beauty. Pure and serene.

The sea-level rise had inundated many of the mudflats and islands, but a small rocky platform lay near shore, thick with trees. Driving by, she spotted a bald eagle perched at the top of a towering spruce tree. She had started birdwatching as a Girl Scout and they had time; she stopped.

She left the men in the Ford and took out her long-range binocs. The eagle was grooming its feathers and eyeing the fish rippling the waters offshore. Gulls wheeled and squawked, and she could see sea lions knifing through fleeing shoals of herring, transient dark islands breaking the sheen of waves. Crows joined in onshore, hopping on the rocks and pecking at the predators’ leftovers.

She inhaled the vibrant scent of ripe wet salty air, alive with what she had always loved more than any mere human. This might be the last time she would see such abundant, glowing life, and she sucked it in, trying to lodge it in her heart for times to come.

She was something of an eagle herself, she saw now, as she stood looking at the elegant predator. She kept to herself, loved the vibrant natural world around her, and lived by making others pay the price of their own foolishness. An eagle caught hapless fish. She struck down those who would do evil to the real world, the natural one.

Beyond politics and ideals, this was her reality.

Then she remembered what else she had stopped for. She took out her cell phone and pinged the alert number.

A buzz, then a blurred woman’s voice. “Able Baker.”

“Confirmed. Get a GPS fix on us now. We’ll be here, same spot, for pickup in two to three hours. Assume two hours.”

Buzz buzz. “Got you fixed. Timing’s okay. Need a Zodiac?”

“Yes, definite, and we’ll be moving fast.”

“You bet. Out.”

Back in the cab, Bruckner said, “What was that for?”

“Making the pickup contact. It’s solid.”

“Good. But I meant, what took so long.”

She eyed him levelly. “A moment spent with what we’re fighting for.”

Bruckner snorted. “Let’s get on with it.”

Elinor looked at Bruckner and wondered if he wanted to turn this into a spitting contest just before the shoot.

“Great place,” Gene said diplomatically.

That broke the tension and she started the Ford.

They rose further up the hills northeast of Anchorage, and at a small clearing, she pulled off to look over the landscape. To the east, mountains towered in lofty gray majesty, flanks thick with snow. They all got out and surveyed the terrain and sight angles toward Anchorage. The lowlands were already thick with summer grasses, and the winds sighed southward through the tall evergreens.

Gene said, “Boy, the warming’s brought a lot of growth.”

Elinor glanced at her watch and pointed. “The KCs will come from that direction, into the wind. Let’s set up on that hillside.”

They worked around to a heavily wooded hillside with a commanding view toward Elmendorf Air Force Base. “This looks good,” Bruckner said, and Elinor agreed.

“Damn—a bear!” Gene cried.

They looked down into a narrow canyon with tall spruce. A large brown bear was wandering along a stream about a hundred meters away.

Elinor saw Bruckner haul out a .45 automatic. He cocked it.

When she glanced back the bear was looking toward them. It turned and started up the hill with lumbering energy.

“Back to the car,” she said.

The bear broke into a lope.

Bruckner said, “Hell, I could just shoot it. This is a good place to see the takeoff and—”

“No. We move to the next hill.”

Bruckner said, “I want—”

“Go!”

They ran.

One hill farther south, Elinor braced herself against a tree for stability and scanned the Elmendorf landing strips. The image wobbled as the air warmed across hills and marshes.

Lots of activity. Three KC-10 Extenders ready to go. One tanker was lined up on the center lane and the other two were moving into position.

“Hurry!” she called to Gene, who was checking the final setup menu and settings on the Dart launcher.

He carefully inserted the missile itself in the launcher. He checked, nodded and lifted it to Bruckner. They fitted the shoulder straps to Bruckner, secured it, and Gene turned on the full arming function. “Set!” he called.

Elinor saw a slight stirring of the center Extender and it began to accelerate. She checked: right on time, 0900 hours. Hard-core military like Bruckner, who had been a Marine in the Middle East, called Air Force the “saluting Civil Service,” but they did hit their markers. The Extenders were not military now, just surplus, but flying giant tanks of sloshing liquid around the stratosphere demands tight standards.

“I make the range maybe 20 kilometers,” she said. “Let it pass over us, hit it close as it goes away.”

Bruckner grunted, hefted the launcher. Gene helped him hold it steady, taking some of the weight. Loaded, it weighed nearly 50 pounds. The Extender lifted off, with a hollow, distant roar that reached them a few seconds later, and Elinor could see that media coverage was high. Two choppers paralleled the takeoff for footage, then got left behind.

The Extender was a full-extension DC-10 airframe and it came nearly straight toward them, growling through the chilly air. She wondered if the chatty guy from the bar, Ted, was one of the pilots. Certainly, on a maiden flight the scientists who ran this experiment would be on board, monitoring performance. Very well.

“Let it get past us,” she called to Bruckner.

He took his head from the eyepiece to look at her. “Huh? Why—”

“Do it. I’ll call the shot.”

“But I’m—”

“Do it.”

The airplane was rising slowly and flew by them a few kilometers away.

“Hold, hold…” she called. “Fire.”

Bruckner squeezed the trigger and the missile popped out—whuff!—seemed to pause, then lit. It roared away, startling in its speed—straight for the exhausts of the engines, then correcting its vectors, turning, and rushing for the main body. Darting.

It hit with a flash and the blast came rolling over them. A plume erupted from the airplane, dirty black.

“Bruckner! Resight—the second plane is taking off.”

She pointed. Gene chunked the second missile into the Dart tube. Bruckner swiveled with Gene’s help. The second Extender was moving much too fast, and far too heavy, to abort takeoff.

The first airplane was coming apart, rupturing. A dark cloud belched across the sky.

Elinor said clearly, calmly, “The Dart’s got a max range about right so… shoot.”

Bruckner let fly and the Dart rushed off into the sky, turned slightly as it sighted, accelerated like an angry hornet. They could hardly follow it. The sky was full of noise.

“Drop the launcher!” she cried.

“What?” Bruckner said, eyes on the sky.

She yanked it off him. He backed away and she opened the gas can as the men watched the Dart zooming toward the airplane. She did not watch the sky as she doused the launcher and splashed gas on the surrounding brush.

“Got that lighter?” she asked Bruckner.

He could not take his eyes off the sky. She reached into his right pocket and took out the lighter. Shooters had to watch, she knew.

She lit the gasoline and it went up with a whump.

“Hey! Let’s go!” She dragged the men toward the car.

They saw the second hit as they ran for the Ford. The sound got buried in the thunder that rolled over them as the first Extender hit the ground kilometers away, across the inlet. The hard clap shook the air, made Gene trip, then stagger forward.

She started the Ford and turned away from the thick column of smoke rising from the launcher. It might erase any fingerprints or DNA they’d left, but it had another purpose too.

She took the run back toward the coast at top speed. The men were excited, already reliving the experience, full of words. She said nothing, focused on the road that led them down to the shore. To the north, a spreading dark pall showed where the first plane went down.

One glance back at the hill told her the gasoline had served as a lure. A chopper was hammering toward the column of oily smoke, buying them some time.

The men were hooting with joy, telling each other how great it had been. She said nothing.

She was happy in a jangling way. Glad she’d gotten through without the friction with Bruckner coming to a point, too. Once she’d been dropped off, well up the inlet, she would hike around a bit, spend some time birdwatching, exchange horrified words with anyone she met about that awful plane crash—No, I didn’t actually see it, did you?—and work her way back to the freighter, slipping by Elmendorf in the chaos that would be at crescendo by then. Get some sleep, if she could.

They stopped above the inlet, leaving the Ford parked under the thickest cover they could find. She looked for the eagle, but didn’t see it. Frightened skyward by the bewildering explosions and noises, no doubt. They ran down the incline. She thumbed on her comm, got a crackle of talk, handed it to Bruckner. He barked their code phrase, got confirmation.

A Zodiac was cutting a V of white, homing in on the shore. The air rumbled with the distant beat of choppers and jets, the search still concentrated around the airfield. She sniffed the rotten egg smell, already here from the first Extender. It would kill everything near the crash, but this far off should be safe, she thought, unless the wind shifted. The second Extender had gone down closer to Anchorage, so it would be worse there. She put that out of her mind.

Elinor and the men hurried down toward the shore to meet the Zodiac. Bruckner and Gene emerged ahead of her as they pushed through a stand of evergreens, running hard. If they got out to the pickup craft, then suitably disguised among the fishing boats, they might well get away.

But on the path down, a stocky Inuit man stood. Elinor stopped, dodged behind a tree.

Ahead of her, Bruckner shouted, “Out of the way!”

The man stepped forward, raised a shotgun. She saw something compressed and dark in his face.

“You shot down the planes?” he demanded.

A tall Inuit racing in from the side shouted, “I saw their car comin’ from up there!”

Bruckner slammed to a stop, reached down for his .45 automatic—and froze. The double-barreled shotgun could not miss at that range.

It had happened so fast. She shook her head, stepped quietly away. Her pulse hammered as she started working her way back to the Ford, slipping among the trees. The soft loam kept her footsteps silent.

A third man came out of the trees ahead of her. She recognized him as the young Inuit father from the diner, and he cradled a black hunting rifle. “Stop!”

She stood still, lifted her binocs. “I’m bird watching, what—”

“I saw you drive up with them.”

A deep, brooding voice behind her said, “Those planes were going to stop the warming, save our land, save our people.”

She turned to see another man pointing a large caliber rifle. “I, I, the only true way to do that is by stopping the oil companies, the corporations, the burning of fossil—”

The shotgun man, eyes burning beneath heavy brows, barked, “What’ll we do with ‘em?”

She talked fast, hands up, open palms toward him. “All that SkyShield nonsense won’t stop the oceans from turning acid. Only fossil—”

“Do what you can, when you can. We learn that up here.” This came from the tall man. The Inuit all had their guns trained on them now. The tall man gestured with his and they started herding the three of them into a bunch. The men’s faces twitched, fingers trembled.

The man with the shotgun and the man with the rifle exchanged nods, quick words in a complex, guttural language she could not understand. The rifleman seemed to dissolve into the brush, steps fast and flowing, as he headed at a crouching dead run down to the shoreline and the waiting Zodiac.

She sucked in the clean sea air and could not think at all. These men wanted to shoot all three of them and so she looked up into the sky to not see it coming. High up in a pine tree with a snapped top an eagle flapped down to perch. She wondered if this was the one she had seen before.

The oldest of the men said, “We can’t kill them. Let ‘em rot in prison.”

The eagle settled in. Its sharp eyes gazed down at her and she knew this was the last time she would ever see one. No eagle would ever live in a gray box. But she would. And never see the sky.


Is U.S. Science in Decline?

YU XIE

The nation’s position relative to other countries is changing, but this need not be reason for alarm.

“Who are the most important U.S. scientists today?” Our host posed the question to his guests at a dinner that I attended in 2003. Americans like to talk about politicians, entertainers, athletes, writers, and entrepreneurs, but rarely, if ever, scientists. Among a group of six academics from elite U.S. universities at the dinner, no one could name a single outstanding contemporary U.S. scientist.

This was not always so. For much of the 20th century, Albert Einstein was a household-name celebrity in the United States, and every academic was familiar with names such as James Watson, Enrico Fermi, and Edwin Hubble. Today, however, Americans’ interest in pure science, unlike their interest in new “apps,” seems to have waned. Have the nation’s scientific achievements and strengths also lessened? Indeed, scholars and politicians alike have begun to worry that U.S. science may be in decline.

If the United States loses its dominance in science, historians of science would be the last group to be surprised. Historically, the world center of science has shifted several times, from Renaissance Italy to England in the 17th century, to France in the 18th century, and to Germany in the 19th century, before crossing the Atlantic in the early 20th century to the United States. After examining the cyclical patterns of science centers in the world with historical data, Japanese historian of science Mitsutomo Yuasa boldly predicted in 1962 that “the scientific prosperity of [the] U.S.A., begun in 1920, will end in 2000.”

Needless to say, Yuasa’s prediction was wrong. By all measures, including funding, total scientific output, highly influential scientific papers, and Nobel Prize winners, U.S. leadership in science remains unparalleled today. Containing only 5% of the world’s total population, the United States can consistently claim responsibility for one- to two-thirds of the world’s scientific activities and accomplishments. Present-day U.S. science is not a simple continuation of science as it was practiced earlier in Europe. Rather, it has several distinctive new characteristics: It employs a very large labor force; it requires a great deal of funding from both government and industry; and it resembles other professions such as medicine and law in requiring systematic training for entry and compensating for services with financial, as well as nonfinancial, rewards. All of these characteristics of modern science are the result of dramatic and integral developments in science, technology, industry, and education in the United States over the course of the 20th century. In the 21st century, however, a debate has emerged concerning U.S. ability to maintain its world leadership in the future.

The debate involves two opposing views. The first view is that U.S. science, having fallen victim to a new, highly competitive, globalized world order, particularly to the rise of China, India, and other Asian countries, is now declining. Proponents of this alarmist view call for significantly more government investment in science, as stated in two reports issued by the National Academy of Sciences (NAS), the National Academy of Engineering, and the Institute of Medicine: Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future in 2007, and Rising Above the Gathering Storm: Rapidly Approaching Category 5 in 2010.

The second view is that if U.S. science is in trouble, this is because there are too many scientists, not too few. Newly trained scientists have glutted the scientific labor market and contribute low-cost labor to organized science but are unable to become independent and, thus, highly innovative. Proponents of the second view, mostly economists, are quick to point out that claims concerning a shortage of scientific personnel are often made by interest groups—universities, senior scientists, funding agencies, and industries that employ scientifically trained workers—that would benefit from an increased supply of scientists. This view is well articulated in two reports issued by the RAND Corporation in 2007 and 2008 in response to the first NAS report and economist Paula Stephan’s recent book, How Economics Shapes Science.

What do data reveal?

Which view is correct? In a 2012 book I coauthored with Alexandra Killewald, Is American Science in Decline?, we addressed this question empirically, drawing on as much available data as we could find covering the past six decades. After analyzing 18 large, nationally representative data sets, in addition to a wealth of published and Web-based materials, we concluded that neither view is wholly correct, though both have some merit.

Between the 1960s and the present, U.S. science has fared reasonably well on most indicators that we can construct. The following is a summary of the main findings reported in our book.

First, the U.S. scientific labor force, even excluding many occupations such as medicine that require scientific training, has grown faster than the general labor force. Census data show that the scientific labor force has increased steadily since the 1960s. In 1960, science and engineering constituted 1.3% of the total labor force of about 66 million. By 2007, it was 3.3% of a much larger labor force of about 146 million. Of course, between 1960 and 2007, the share of immigrants among scientists increased, at a time when all Americans were becoming better educated. As a result, the percentage of scientists among native-born Americans with at least a college degree has declined over time. However, diversity has increased as women and non-Asian minorities have increased their representation among U.S. scientists.

Second, despite perennial concerns about the performance of today’s students in mathematics and science, today’s U.S. schoolchildren are performing in these areas as well as or better than students in the 1970s. At the postsecondary level, there is no evidence of a decline in the share of graduates receiving degrees in scientific fields. U.S. universities continue to graduate large numbers of young adults well trained in science, and the majority of science graduates do find science-related employment. At the graduate level, the share of foreign students among recipients of science degrees has increased over time. More native-born women now receive science degrees than before, although native-born men have made no gains. Taken together, education data suggest that Americans are doing well, or at least no worse than in the past, at obtaining quality science education and completing science degrees.

Finally, we used a large number of indicators to track changes in society’s general attitudes toward science, including confidence in science, support for funding basic science, scientists’ prestige, and freshman interest in science research careers. Those indicators all show that the U.S. public has remained overwhelmingly positive toward scientists and science in general. About 80% of Americans endorse federal funding for scientific research, even if it has no immediate benefits, and about 70% believe that the benefits of science outweigh the costs. These numbers have stayed largely unchanged over recent decades. Americans routinely express greater confidence in the leadership of the scientific community than in that of Congress, organized religion, or the press.

Is it possible that Americans support science even though they themselves have no interest in it? To measure Americans’ interest in science, we also analyzed all cover articles published in Newsweek magazine and all books on the New York Times Best Sellers List from 1950 to 2007. From these data, we again observe an overall upward trend in Americans’ interest in science.

Sources of anxiety

What, then, are the sources of anxiety about U.S. science? In Is American Science in Decline?, we identify three of them, two historical and one comparative. First, our analysis of earnings using data from the U.S. decennial censuses revealed that scientists’ earnings have grown very slowly, falling further behind those of other high-status professionals such as doctors and lawyers. This unfavorable trend is particularly pronounced for scientists at the doctoral level.

Second, scientists who seek academic appointments now face greater challenges. Tenure-track positions are in short supply relative to the number of new scientists with doctoral training seeking such positions. As a result, more and more young scientists are now forced to take temporary postdoctoral appointments before finding permanent jobs. Job prospects are particularly poor in biomedical science, which has been well supported by federal funding through the National Institutes of Health. The problem is that the increased spending is mainly in the form of research grants that enhance research labs’ ability to hire temporary research staff, whereas universities are reluctant to expand permanent faculty positions. Some new Ph.D.s in biomedical fields need to take on two or more postdoctoral or temporary positions before having a chance to find a permanent position. It is the poor job outlook for these new Ph.D.s and their relatively low earnings that has led some economists to argue that there is a glut of scientists in the United States.

Third, of course, the greatest source of anxiety concerning U.S. science has been the globalization of science, resulting in greater competition from other countries. Annual news releases reveal the mediocre performance of U.S. schoolchildren on international tests of math and science. The growth of U.S. production of scientific articles has slowed down considerably over the past several decades as compared with that in other areas, particularly East Asia. As a result, the share of world science contributed by the United States is dwindling.

But in some ways, the globalization of science is a result of U.S. science’s success. Science is a public good, and a global one at that. Once discovered, science knowledge is codified and then can be taught and consumed anywhere in the world. The huge success of U.S. science in the 20th century meant that scientists in many less developed countries, such as China and India, could easily build on the existing science foundation largely built by U.S. scientists and make new scientific discoveries. Internet communication and cheap air transportation have also minimized the importance of location, enabling scientists in less developed countries to have access to knowledge, equipment, materials, and collaborators in more developed countries such as the United States.

The globalization of science has also made its presence felt within U.S. borders. More than 25% of practicing U.S. scientists are immigrants, up from 7% in 1960. Almost half of students receiving doctoral degrees in science from U.S. universities are temporary residents. The rising share of immigrants among practicing scientists and engineers indicates that U.S. dependence on foreign-born and foreign-trained scientists has dramatically increased. Although most foreign recipients of science degrees from U.S. universities today prefer to stay in the United States, for both economic and scientific reasons, there is no guarantee that this will last. If the flow of foreign students to U.S. science programs should stop or dramatically decline, or if most foreign students who graduate with U.S. degrees in science should return to their home countries, this could create a shortage of U.S. scientists, potentially affecting the U.S. economy or even national security.

What’s happening in China?

Although international competition doesn’t usually refer to any specific country in discussions of science policy, today’s discourse does tend to refer, albeit implicitly, to a single country: China. In 2009, national headlines revealed that students in Shanghai outscored their peers around the world in math, science, and reading on the Program for International Student Assessment (PISA), a test administered to 15-year-olds in 65 countries. In contrast, the scores of U.S. students were mediocre. Although U.S. students had performed similarly on these comparative tests for a long time, the 2009 PISA results had an unusual effect in sparking a national discussion of the proposition that the United States may soon fall behind China and other countries in science and technology. Secretary of Education Arne Duncan referred to the results as “a wake-up call.”

China is the world’s most populous country, with 1.3 billion people, and its economy grew at an annualized rate of 7.7% between 1978 and 2010. Other indicators also suggest that China has been developing its science and technology with the intention of narrowing the gap between itself and the United States. Activities in China indicate its inevitable rise as a powerhouse in science and technology, and it is important to understand what this means for U.S. science.

The Chinese government has spent large sums of money trying to upgrade Chinese science education and improve China’s scientific capability. It more than doubled the number of higher education institutions from 1,022 in 1998 to 2,263 in 2008 and upgraded about 100 elite universities with generous government funding. China’s R&D expenditure has been growing at 20% per year, benefitting both from the increase in gross domestic product (GDP) and the increase in the share of GDP spent on R&D. In addition, the government has devised various attractive programs, such as the Changjiang Scholars Program and the Thousand Talent Program, to lure expatriate Chinese-born scientists, particularly those working in the United States, back to work in China on a permanent or temporary basis.

The government’s efforts to improve science education seem to have paid off. China is now by far the world’s leader in bachelor’s degrees in science and engineering, with 1.1 million in 2010, more than four times the U.S. number. This large disparity reflects not only China’s dramatic expansion in higher education since 1999 but also the fact that a much higher percentage of Chinese students major in science and engineering, around 44% in 2010, compared to 16% in the United States. Of course, China’s population is much larger. Adjusting for population size differences, the two countries have similar proportions of young people with science and engineering bachelor’s degrees. China’s growth in the production of science and engineering doctoral degrees has been comparably dramatic, from only 10% of the U.S. total in 1993 to a level exceeding that in the United States by 18% in 2010. Of course, questions have been raised both in China and abroad about whether the quality of a Chinese doctoral degree is equivalent to that of a U.S. degree.

The impact of China’s heavy investment in scientific research is also unmistakable. Data from Thomson Reuters’ InCites and Essential Science Indicators databases indicate that China’s production of scientific articles grew at an annual rate of 15.4% between 1990 and 2011. In terms of total output, China overtook the United Kingdom in 2004, and Japan and Germany in 2005, and has since remained second only to the United States. The data also reveal that the quality of papers produced by Chinese scientists, measured by citations, has increased rapidly. China’s production of highly cited articles achieved parity with Germany and the United Kingdom around 2009 and reached a level of 31% of the U.S. rate in 2011.

Four factors favor China’s rise in science: a large population and human capital base, a large diaspora of Chinese-origin scientists, a culture of academic meritocracy, and a centralized government willing to invest in science. However, China’s rise in science also faces two major challenges: a rigid, top-down administration system known for misallocating resources, and rising allegations of scientific misconduct in a system where major decisions about funding and rewards are made by bureaucrats rather than peer scientists. Given these features, Chinese science is likely to do well in research areas where output depends on material and human resources; that is, in extensions of proven research lines rather than truly innovative advances into uncharted territory. Given China’s heavy emphasis on its economic development, priority is also placed on applied rather than basic research. These characteristics of Chinese science mean that U.S. scientists could benefit from collaborating with Chinese scientists in complementary and mutually beneficial ways. For example, U.S. scientists could design studies to be tested in well-equipped and well-staffed laboratories in China.

Science in a new world order

Science is now entering a new world order and may have changed forever. In this new world order, U.S. science will remain a leader but not in the unchallenged position of dominance it has held in the past. In the future, there will no longer be one major world center of science but multiple centers. As more scientists in countries such as China and India actively participate in research, the world of science is becoming globalized as a single world community.

A more competitive environment on the international scene today does not necessarily mean that U.S. science is in decline. Just because science is getting better in other countries, this does not mean that it’s getting worse in the United States. One can imagine U.S. science as a racecar driver, leading the pack and for the most part maintaining speed, but anxiously checking the rearview mirror as other cars gain in the background, terrified of being overtaken. Science, however, is not an auto race with a clear finish line, nor does it have only one winner. On the contrary, science has a long history as the collective enterprise of the entire human race. In most areas, scientists around the world have learned from U.S. scientists and vice versa. In some ways, U.S. science may have been too successful for its own good, as its advancements have improved the lives of people in other nations, some of which have become competitors for scientific dominance.

Hence, globalization is not necessarily a threat to the wellbeing of the United States or its scientists. As more individuals and countries participate in science, the scale of scientific work increases, leading to possibilities for accelerated advancements. World science may also benefit from fruitful collaborations of scientists in different environments and with different perspectives and areas of expertise. In today’s ever more competitive globalized science, the United States enjoys the particular advantage of having a social environment that encourages innovation, values contributions to the public good, and lives up to the ideal of equal opportunity for all. This is where the true U.S. advantage lies in the long run. This is also the reason why we should remain optimistic about U.S. science in the future.

Recommended reading

J. M. Diamond, Guns, Germs and Steel: The Fates of Human Societies (New York: W.W. Norton & Company, 1999).

Thomas Friedman, The World Is Flat: A Brief History of the Twenty-First Century (New York: Farrar, Straus, and Giroux, 2005).

Titus Galama and James R. Hosek, eds., Perspectives on U.S. Competitiveness in Science and Technology (Santa Monica, CA: RAND, 2007).

Titus Galama and James R. Hosek, eds., U.S. Competitiveness in Science and Technology (Santa Monica, CA: RAND, 2008).

Claudia Dale Goldin and Lawrence F. Katz, The Race between Education and Technology (Cambridge, MA: Belknap Press of Harvard University Press, 2008).

Alexandra Killewald and Yu Xie, “American Science Education in its Global and Historical Contexts,” Bridge (Spring 2013): 15-23.

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future (Washington, DC: National Academies Press, 2007).

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Rapidly Approaching Category 5 (Washington, DC: National Academies Press, 2010).

Organisation for Economic Co-operation and Development, PISA 2009 Results: Executive Summary (2010); available online at www.oecd.org/pisa/pisaproducts/46619703.pdf.

Paula Stephan, How Economics Shapes Science (Cambridge, MA: Harvard University Press, 2012).

Yu Xie and Alexandra A. Killewald, Is American Science in Decline? (Cambridge, MA: Harvard University Press, 2012).


Yu Xie is Otis Dudley Duncan Distinguished University Professor of Sociology, Statistics, and Public Policy at the University of Michigan. This article is adapted from the 2013 Henry and Bryna David Lecture, which he presented at the National Academy of Sciences.


Conservatism and Climate Science

STEVEN F. HAYWARD

Objections to liberal environmental orthodoxy have less to do with the specifics of the research or the economic interests of the fossil fuel industry than with fundamental questions about hubris and democratic values.

It is not news to say that climate change has become the most protracted science and policy controversy of all time. If one dates the beginning of climate change as a top tier public issue from the Congressional hearings and media attention during the summer of 1988, shortly after which the UN Framework Convention on Climate Change was set in motion with virtually unanimous international participation, it is hard to think of another policy issue that has gone on for a generation with the arguments—and the policy strategy—essentially unchanged as if stuck in a Groundhog Day loop, and with so little progress being made relative to the goals and scale of the problem as set out. Even other areas of persistent scientific and policy controversy—such as chemical risk and genetically modified organisms—generally show some movement toward consensus or policy equilibrium out of which progress is made.

There has always been ideological and interest group division about environmental issues, but the issue of climate change has become a matter of straight partisan division, with Republicans now almost unanimously hostile to the climate science community and opposed to all proposed greenhouse gas emissions regulation. Beyond climate, Republicans have become almost wholly disengaged from the entire domain of environmental issues.

This represents a new situation. Even amidst contentious arguments in the past, major environmental legislation such as the Clean Air Act of 1990 passed with ample bipartisan majorities. Not only did the first Bush Administration engage the issue of climate change in a serious way, as recently as a decade ago leading Republicans, including two who became presidential nominees, were proposing active climate policies of various kinds (John McCain in the Senate, and Gov. Mitt Romney in Massachusetts).

It is tempting to view this divide as another casualty of the deepening partisanship occurring almost across the board in recent years, which has seen formerly routine compromises over passing budgets become fights to the death. This kind of partisan polarization is fatal to policy change in almost every area, as the protracted fight over the Affordable Care Act shows.

Yet the increasing partisan divide about nearly everything should prompt more skepticism about a popular narrative said to explain conservative resistance to engaging climate change: that conservatives—or at least the Republican political class—have become “anti-science.” As a popular book title has it, there is a Republican “war” on science, but science has little to do with the partisan divisions over issues such as health care reform, education policy, labor rules, or tax rates. And if one wants to make the politicization of science primarily a matter of partisan calculation, a full balance sheet shows numerous instances of liberals—and Democratic administrations—disregarding solid scientific findings that contradict their policy preferences, or cutting funding for certain kinds of scientific research. Examples include the way many prominent liberals exhibit blanket opposition to genetically modified organisms, some childhood vaccines, or, to pick a narrow case, how the U.S. Fish and Wildlife Service has ignored recommendations of its own science advisory board on endangered species controversies. A closer look at what drives liberal attitudes about some of these controversies will find that their reasons are similar or identical to the reasons conservatives are critical of policy-relevant science in climate and other domains—neither side is very compelled by science that contradicts strongly held views about how politics and policies ought to be carried out. In other words, the ideological argument over science today merely replicates many of the other arguments between left and right today based on long-standing philosophical premises or principles.

Drawing back to a longer time horizon, one discovers the counter-narrative reality that government funding for science research often grew faster under Republican than Democratic administrations. Ronald Reagan, for example, supported the large appropriation for the Superconducting Super Collider; Bill Clinton cancelled the project for fiscal reasons. George W. Bush committed the U.S. to joining the international ITER consortium to pursue fusion energy, but the new Democratic Congress of 2007-2008 refused to appropriate funds for the U.S. pledge.

President Obama lent some credence to the popular narrative with the brief line in his first inaugural address that “We will restore science to its rightful place.” Rather than write off this comment as a partisan shot at the outgoing Bush administration, we should take up the implicit challenge of thinking anew about what is the “rightful” place of science in a democracy. So let me step back from climate for a moment to consider some of the serious reservations or criticisms conservatives have about science generally, and especially science combined with political power. My aim here is both to help provide a fresh understanding of the sources of the current impasse, and to suggest how the outline of a conservative climate policy might come into view—albeit a policy framework that would be unacceptably weak to the environmental establishment.

Modern science and its discontents

The conservative ambivalence or hostility toward the intersection of science and policy can be broken down into three interconnected parts: theoretical, practical, and political. I begin by taking a brief tour through these three dimensions, for they help explain why appeals to scientific authority or “consensus” are guaranteed to be effective means of alienating conservatives and spurring their opposition to most climate initiatives. At the root of many controversies today, going far beyond climate change, are starkly different perspectives between left and right about the nature and meaning of reason and the place of science.

From the earliest days of the scientific revolution dating back to the Enlightenment, conservatives (and many liberals, too) were skeptical of the claims of science to superior authority based on cracking the code of complete objectivity. Keep in mind that prior to the modern scientific revolution, “science” comprised both material and immaterial aspects of reality, which is why “natural philosophy” and “moral science” were regarded as equivalent branches of human knowledge. The special, or as we might nowadays say the “privileged” dignity of the physical or natural sciences, the view that only scientific knowledge is real knowledge, was unknown. Today science is the most powerful idea in modern life, and it does not easily accommodate or respect “nonscientific” perspectives. This collective confidence can be observed most starkly in the benign condescension with which the “hard” sciences regard social science and the humanities in most universities (and the almost pathetic fervor with which some social science fields seek to show that they really are as quantitative and thus inaccessible to non-expert understanding as physics).

Even if the once grand ambition of working out a theory of complete causation for everything is no longer seriously maintained by most scientists, the original claim of scientific pre-eminence, best expressed in Francis Bacon’s famous phrase about the use of science “for the relief of man’s estate”—that is, for the exercise of control over nature—remains firmly planted. And even if we doubt that scientific completeness can ever be achieved in the real world, the residual confidence in the scientific command and control of the behavior of matter nonetheless implies that the command and control of human behavior is the legitimate domain of science.

The scientific problem deepened with the rise of social science in the 19th century, and especially the idea that what is real in the world can be cleanly separated from our beliefs about how the world should be—the infamous fact-value distinction. The conservative objection to the fact-value distinction was based not merely on the depreciation of moral argument, but more on the implied insistence that the freedom of the human mind was a primitive idea to be overcome by science. B.F. Skinner’s crude behaviorism of 50 years ago has seen the beginnings of a revival in the current interest in neuroscience (and behavioral economics), which may also portend a revival of a much more sophisticated updating of the Skinnerian vision of therapeutic government. If we really do succeed in unlocking as never before the secrets of how brain activity influences behavior, moral sentiments, and even cognition itself, will the call for active modification against “anti-social” behavior be far behind?

But even well short of that old prospect, one of the most basic problems of social science, from a conservative point of view (though many liberals will acknowledge this point) is that despite its claims to scientific objectivity, it cannot escape a priori “value judgments” about what questions and desired outcomes are the most salient. This turns out to be the Achilles heel of all social science, which tries to conduct itself with the same confidence and sophistication as the physical sciences, but which in the end cannot escape the fact that its enterprise is indeed “social.” We can really see this social dimension at work in the “climate enterprise”—my shorthand term for the two sides, science and policy, of the climate change problem. The climate enterprise is the largest crossroads of physical and social science ever contemplated.

The social science side of climate policy vividly displays the problem of fundamental disagreement over “normative” questions. Although we can apply rigorous economic analysis to energy forecasts and emission control pathways, the arguments over proper discount rates and the relative weight of the tradeoff between economic growth and emissions constraint cannot be resolved objectively, that is to say, scientifically. Climate action advocates are right to press the issue of intergenerational equity, but like “sustainability,” a working definition or meaningful framework for guiding policy is nearly impossible to settle. The ferocious conflicts over assessment of proposed climate policy should serve as a healthy reminder that while the traditional physical sciences can tell us what is, they cannot tell us what to do.

This is only one of the reasons why the descent from the theoretical to the practical level leads conservatives to have doubts about the reach and ambition of supposedly science-grounded policies in just about every area, let alone climate change. In environmental science and policy, environmentalists like to emphasize the interconnectedness of everything, the crude popular version of which is the “butterfly effect,” where a butterfly beating its wings in Asia results in a hurricane in the Gulf of Mexico. Conservatives don’t disagree with the interconnectedness of things. Quite the opposite; the interconnectedness of phenomena is in many ways a core conservative insight, as any reader of Edmund Burke will perceive. But drawing from Burke, conservatives doubt you can ever understand all the relevant linkages correctly or fully, and they are especially doubtful of policy responses that combine centralized knowledge with centralized power. In its highest and most serious form, this skepticism flows not from the style of monkey-trial ignorance or superstition associated with Inherit the Wind, but from the cognitive or epistemological limitations of human knowledge and action associated with philosophers like Friedrich Hayek and Karl Popper (among others), who tell us that knowledge is always partial, contingent, and subject to correction, all the more so as we move from the particular and local to the general and global.

Thus, the basic practical defect of scientific administration is the “synoptic fallacy” that we can command enough information and make decisions about resources and social phenomena effectively enough to achieve our initial goals. Conservative skepticism is less about science per se than about its claims to usefulness in the policy realm. This skepticism combines with the older liberal view—that is, the view that values individual freedoms above all else—that the concentration of discretionary political power required for nearly all schemes of comprehensive social or economic management is a priori suspect. Today that older liberal view is the core of political conservatism. Put more simply or directly, the conservative distrust of authority based on claims of superior scientific knowledge reflects a distrust of the motives of those who make such claims, and thus a mistrust of the validity of the claims themselves.

This practical policy difficulty might be overcome or compromised, as has happened occasionally in the past, if it weren’t for how the politics of science currently fall out today. In a sentence, the American scientific community—or at least certain vocal parts of it—is susceptible to the charge that it has become an ideological faction. Put more directly, it seems many scientists have chosen partisan sides. Some scientists are quite open about their leftward orientation. In 2004, Harvard geneticist Richard Lewontin wrote a shocking admission in the New York Review of Books: “Most scientists are, at a minimum, liberals, although it is by no means obvious why this should be so. Despite the fact that all of the molecular biologists of my acquaintance are shareholders in or advisers to biotechnology firms, the chief political controversy in the scientific community seems to be whether it is wise to vote for Ralph Nader this time.” (With political judgment this bad, is it any wonder there might be doubts about the policy prescriptions of scientists?) MIT’s Kerry Emanuel, a Republican, but as mainstream as they come in climate science (Al Gore referenced his work, and in one of his books Emanuel refers to Sen. James Inhofe as a “scientific illiterate” and climate skeptics as les refusards), offers this warning to his field: “Scientists are most effective when they provide sound, impartial advice, but their reputation for impartiality is severely compromised by the shocking lack of political diversity among American academics, who suffer from the kind of group-think that develops in cloistered cultures. Until this profound and well-documented intellectual homogeneity changes, scientists will be suspected of constituting a leftist think tank.”

This partisan tilt—real or exaggerated—among the scientific establishment aggravates a general problem that afflicts nearly all domains of policy these days, namely, the way in which policy is distorted by special interests and advocacy groups in the political process. Hence we end up with energy policies favoring politically connected insiders (such as federal loan guarantees for the now-bankrupt Solyndra solar technology company) or subsidizing technologies (currently wind, solar, and ethanol) that are radically defective or incommensurate with the scale of the climate problem they are intended to remedy. The loopholes, exceptions, and massive sector subsidies (especially to coal) of the Waxman-Markey cap-and-trade bill of 2009 rendered the bill a farce even on its own modest terms and should have appalled liberals and environmentalists as much as conservatives.

Here the political naiveté of scientists does their cause a disservice with everyone; the energy policy of both political parties since the first energy shocks of the 1970s has been essentially a frivolous farce of special interest favoritism and wishful thinking, with little coherence and even less long-term care for the kind of genuine energy innovation necessary to address prospective climate change at the extreme range of the long-run projections.

Is “conservative climate policy” an oxymoron?

To be sure, few if any Republican officeholders are able to articulate this outlook with deep intellectual coherence, but then neither are most liberals capable of expressing their zealous egalitarian sentiments with the rigor of, say, John Rawls’ Theory of Justice. Nor should this excuse the near-complete Republican negligence on the whole range of environmental issues. But even if social psychologist Jonathan Haidt is correct (and I think he is) that liberals and conservatives emotionally perceive and respond to issues from deep-seated instincts rather than carefully reasoned dialectics, the divisions among us are susceptible to some rational understanding. Can the fundamental differences be harmonized or compromised?

The first point to grasp is that conservatives—or at least the currently dominant libertarian strain of the right—ironically have a more open-ended outlook toward the future than contemporary liberals. The point here is not to sneak in climate skepticism, but policy skepticism, as the future is certain to unfold in unforeseen ways, with seemingly spontaneous and disruptive changes occurring outside the view or prior command of our political class. One current example is the fracking revolution in natural gas, which is significantly responsible for U.S. per capita carbon dioxide emissions falling to their lowest level in nearly 20 years. No one, including the gas industry itself, foresaw this coming even as recently as a decade ago. (And if the political class in Washington had seen it coming, it likely would have tried to stop it; many environmentalists are deeply ambivalent about fracking at the moment.) And one key point is that the fracking revolution occurred overwhelmingly in the absence of any national policy prescription. The bad news, from a conventional environmental point of view, is that the fracking revolution, now extending to oil, is just beginning. It has decades to run, in more and more places around the world. This means the age of oil and gas is a long way from being over, and this is going to be true even in a prospective regime of rising carbon taxes. (The story is likely to be much the same for coal.)

More broadly, however, it is not necessary to be any kind of climate skeptic to be highly critical of the narrow, dreamlike quality the entire issue took on from its earliest moments. Future historians are likely to regard as a great myopic mistake the collective decision to treat climate change as more or less a large version of traditional air pollution, to be attacked with the typical emissions control policies—sort of a global version of the Clean Air Act. Likewise the diplomatic framework, a cross between arms control, trade liberalization, and the successful Montreal Protocol, was poorly suited to climate change and destined the Kyoto Protocol model to certain failure from the outset. If one were of a paranoid or conspiratorial state of mind, one might almost wonder if the first Bush Administration committed the U.S. to this framework precisely as a way of assuring it would be self-defeating. (I doubt they were that clever or devious.) There have been a few lonely voices that have recognized these defects while still arguing in favor of action, such as Gwyn Prins of the London School of Economics and Steve Rayner of Oxford University. Two years before the failure of the 2009 Copenhagen talks, Prins and Rayner argued in Nature magazine that we should ditch the “top-down universalism” of the Kyoto approach in favor of a decentralized approach that resembles American federalism.

If ever there was an issue that required patient and fresh thinking, it was climate change 25 years ago. The modern world, especially the billions of people still striving to escape energy poverty, demands abundant amounts of cheap energy, and no amount of wishful thinking (or government subsidies or mandates) will change this. The right conceptual understanding of the problem is that we need large-scale low- and non-carbon energy sources that are cheaper than hydrocarbon energy. Unfortunately, no one knows how to do this. No one seems to know how to solve immigration, poor results from public education, or the problem of generating faster economic growth either, but we haven’t locked ourselves into a single policy framework that one must either be for or against in the same way that we have done for climate policy. Environmentalists and policy makers alike crave certainty about the policy results ahead of us, and an emphasis on innovation, even when stripped of the technological fetishes and wishful thinking that have plagued much of our energy R&D investments, cannot provide any degree of certainty about paths and rates of progress. But it was a fatally poor choice to emphasize, almost to the exclusion of any other frameworks, a policy framework based on making conventional hydrocarbon energy, upon which the world depends utterly for its well-being, more expensive and artificially scarce. This might make some emissions headway in rich industrial nations, although it hasn’t in most of them, but won’t get far in the poorer nations of the world. Subsidizing expensive renewable energy is a self-defeating mug’s game, as many European nations are currently recognizing.

While we stumble along trying to find breakthrough energy technologies with a low likelihood of success in the near and intermediate term, a more fundamental conservative orientation comes into view. The best framework for addressing large-scale disruptions from any cause or combination of causes is building adaptive resiliency. Too often this concept gets reduced to the defeatist notion of building seawalls, moving north, and installing more air conditioners. But humankind faces disasters and chronic calamities of many kinds and causes; think of droughts, which through history have been a scourge of civilizations. Perhaps it is grandiose or simplistic to say that the whole human story is one of gradually increasing adaptive resiliency. On the other hand, what was the European exploration and settlement of North America but an exercise in adaptive resiliency? This opens into one of the chief conservative concerns over climate change and many other problems: the pessimism that becomes a self-fulfilling prophecy. As the British historian Thomas Macaulay wrote in 1830, “On what principle is it that, when we see nothing but improvement behind us, we are to expect nothing but deterioration before us?” The 20th century saw global civilization overcome two near-apocalyptic wars and numerous murderous regimes (not all entirely overcome today), and endure 40 years of nuclear brinksmanship that threatened a nuclear holocaust in 30 minutes. To suggest human beings can’t cope with slow-moving climate change is astonishingly pessimistic, and the relentless soundings of the apocalypse have done more to undermine public interest in the issue than the efforts of the skeptical community.

One caveat here is the specter of a sudden “tipping point” leading to a rapid shift in climate conditions, perhaps over a period of mere decades. To be sure, our capacity to respond to sudden tipping points is doubtful; consider the problematic reaction to the tipping point of September 11, 2001, or the geopolitical paroxysms induced by the tipping point reached in July 1945 in Alamogordo, New Mexico. The climate community would be correct to object that the open-ended and uncertain orientation I have sketched here would likely not be adequate for preparing for such a sudden change—but then again, neither was the Kyoto Protocol approach that they so avidly supported.

Right now the fallback position for a tipping point scenario is geoengineering, or solar radiation management. There might ironically be surprising agreement between environmentalists and conservatives over geoengineering, albeit for opposite reasons that illustrate the central division outlined above. Liberal environmentalists tend to dislike geoengineering proposals partly for plausible philosophical reasons—humans shouldn’t be experimenting with the globe’s atmospheric system any more than we already are—and partly because of their abiding dislike of hydrocarbon energy that geoengineering would further enable. Environmentalists have compared geoengineering to providing methadone to a heroin addict, though the “oil addiction” metaphor, popular with both political parties (former oil man George W. Bush used it), is truly risible. We are also addicted to food, and to having a roof over our heads. But conservatives tend to be skeptical of or opposed to geoengineering for the epistemological reasons alluded to above: the uncertainties involved in a global-scale intervention are unlikely to be known well enough to assure a positive outcome. Geoengineering may yet emerge as a climate adaptation tool out of emergency necessity, but it will be over the strong misgivings of left and right alike. This shared hesitation might ironically make it possible for research on geoengineering to proceed with a lower level of distrust.

President Obama’s recent call for a new billion-dollar climate change fund aimed at research on adaptation and resiliency appears in general terms close to what I hint at here. Whether a billion dollars is a suitable amount (it seems, rather, to be the standard opening bid for any spending initiative in Washington these days) or whether the fund would be spent sensibly rather than politically is an important but second-order question.

The final difference between liberals and conservatives over climate change that is essential to grasp is wholly political in the high and low sense of the term. Some prominent environmentalists, and fellow travelers like New York Times columnist Thomas Friedman, periodically express open admiration for authoritarian power to resolve climate change and other problems for which democratic governments are proving resistant precisely because of their responsiveness to public opinion—what used to be understood and celebrated as “consent of the governed.” A few environmental advocates have gone as far as to say that democracy itself should be sacrificed to the urgency of solving the climate crisis, apparently oblivious to the fact that appeals to necessity in the face of external threats have been the tyrant’s primary self-justification since the beginning of conscious human politics, and such appeals seldom end well for tyrant and people alike. For example, Mayer Hillman, a senior fellow at Britain’s Policy Studies Institute and author of How We Can Save the Planet, told a reporter some time back that “When the chips are down I think democracy is a less important goal than is the protection of the planet from the death of life, the end of life on it. This [resource rationing] has got to be imposed on people whether they like it or not.” Similar sentiments are found in the book The Climate Change Challenge and the Failure of Democracy by Australians David Shearman and Joseph Wayne Smith. One of the authors (Shearman) argued that “Liberal democracy is sweet and addictive and indeed in the most extreme case, the USA, unbridled individual liberty overwhelms many of the collective needs of the citizens… There must be open minds to look critically at liberal democracy. Reform must involve the adoption of structures to act quickly regardless of some perceived liberties.”

I can think of no other species of argument more certain to provoke enthusiasm for Second Amendment rights than this. The unfortunate drift toward anti-democratic authoritarianism flows partly from frustration but also from the success the environmental community has enjoyed through litigation and a regulatory process that often skirts democratic accountability—sometimes with decent reason, sometimes not. But this kind of aggrandized hallucination of the virtues of power will prove debilitating as the scope and scale of an environmental problem like climate change enlarges. I can appreciate that many climate action advocates will find much of what I’ve said here to be inadequate, but above all liberals and environmentalists would do well to take on board the categorical imperative of climate policy from a conservative point of view, namely, that whatever policies are developed, they must be compatible with individual liberty and democratic institutions, and cannot rely on coercive or unaccountable bureaucratic administration.


Steven F. Hayward is the inaugural visiting scholar in conservative thought and policy at the University of Colorado at Boulder, and the Ronald Reagan Distinguished Visiting Professor at Pepperdine University’s Graduate School of Public Policy.


A Survival Plan for the Wild Cyborg

RINIE VAN EST

In order to stay human in the current intimate technological revolution, we must become high-tech people with quirky characters. Here are seven theses to nail to the door of our technological church.

Today, the most exciting discoveries and technological developments have to do with us, we humans. Technology settles itself rapidly around and within us; collects more and more data about us; and increasingly is able to simulate human appearances and behaviour. As our relationship with technology is becoming more and more intimate, we are becoming techno-people or cyborgs. On the one hand, intimate technology offers opportunities for personal development and more control over our lives. On the other hand, governments, businesses, and other citizens may also deploy intimate technologies in order to influence or even coerce us. To put this development on the public and political agenda, the Rathenau Instituut in the Netherlands has coined the term “intimate-technological revolution,” which is partly driven by smartphones, social media, sensor networks, robotics, virtual worlds, and big data analysis. We describe this revolution in our report Intimate Technology: The Battle for Our Body and Behavior.

The fact that our selves are becoming increasingly intertwined with technology is illustrated by the ever-shrinking computer: from desktop to laptop, then tablet to mobile phone, and soon to e-glasses, and possibly in the long term to contact lenses. This shift from the table to lap, from hand to nose and even eye, shows us how technology creeps into us. For the time being, the demarcation line is typically just on the outside, but a variety of implantable devices—for example, cochlear implants for the deaf and deep brain stimulation electrodes for treating Parkinson’s disease and severely depressed patients—are already positioned inside the body.

Through our smartphone, smart shoes, sports watches, and life-logging cameras, we constantly inform the outside world about ourselves: obviously where we are through global positioning systems, but also what we are thinking and doing through social media. Once considered entirely private, that information is now accessible for literally the whole world to know. To some extent we now maintain our most intimate relationships by digital means. Social media are enabling new forms of relationships, from long-term and stable to short and volatile. And then there are phone apps that help us try to achieve our good intentions, such as exercising more or eating fewer sweets. They behave like compassionate but strict coaches, monitoring our metabolism and massaging our psyche.

The convergence of nanotechnology, biotechnology, information technology, and cognitive science increasingly turns biology into technology, and technology into biology. The convergence takes on three concrete forms. First, we are more and more like machines, and can thus be taken apart for maintenance and repair work and can perhaps even be upgraded or otherwise improved. Second, our interactions with one another are changing, precisely because machines are increasingly nestling into our private and social lives. And third, machines are becoming more and more humanlike, or at least engineers do their best to build in human traits, so that these machines seem to be social and emotional, and perhaps even moral and loving.

This development raises several fundamental questions: How close and intimate can technology become? At what point is technology still nicely intimate, and when does it become intimidating? Where do we have to set boundaries?

Mechanistic views of nature and mind have existed for centuries, but only recently has technology actually gained much control over our bodies and minds. Our hips and knees are now replaceable parts. Deafness, balance disorders, depression, anxiety, trauma, heart irregularities, and innumerable other maladies have become, through the use of implants and pills, machine maintenance and performance problems. And now we begin to move to never-before-seen performance levels, such as the eyeborg, the implant used by the colorblind artist Neil Harbisson to transform every hue into audible sound, with the result that he now hears colors, even the infrared and ultraviolet normally invisible to us.

The idea of intimacy used to pertain to matters of our body and mind that we would share only with people who were close to us: our immediate family members and true friends. We shared our personal intimacies by talking face to face, later with remote communication by writing letters, and then by telephone. The increasing role of technology for broadcasting information destabilizes this traditional and simple definition of what is “intimate.” On the social network Lulu, female students share their experiences about their ex-boyfriends; the geo-social network Foursquare allows users to announce their exact location in real time to all of their friends.

Consumer apps are coming that will recognize faces, analyze emotions, and link this data to our LinkedIn and Facebook profiles. When wearing Google Glasses, we ourselves will be as transparent as glass, for other computer eyeglass wearers can see who we are, what we do, who our friends are, and how we feel. Information about other people will be omnipresent, and so will information about other items in our environment. In a certain radius around Starbucks outlets, computer eyeglass wearers will receive alerts about the specialty coffee of the week, or tea if that’s their preference.

In other intimate interactions, technology has a growing intensity. Equipment has been produced that can enable parents at home to “tele-hug” a premature newborn in a hospital incubator. One in 10 love relationships now starts through online dating, and for casual sexual encounters, there are Web sites such as Second Love. On the aggressive side of the intimate interaction spectrum are the military drones used by U.S. forces that, having identified and tracked you, can now kill you.

Applications and devices fulfill an increasing number of roles traditionally reserved for human beings. E-coaches encourage us to do more exercise, to conserve energy, or not to be too aggressive while e-mailing. Marketing psychologists no longer directly observe how people respond to advertising, but rather use emotion-recognition software, which is less expensive and more accurate. My son plays soccer not only against his friends but also against digital heroes like Messi and Van Persie. Digital characters can appear very humanlike. When you kill someone in the latest-generation first-person shooter games (where you view the onscreen action as if through the eyes of a person with a gun), the suffering of the avatar that you’ve just offed is so palpable that you can feel genuine remorse. Meanwhile, the international children’s aid organization Terre des Hommes put Webcam child sex tourism on the public and political agenda in many Western countries by using an online avatar named Sweetie, a virtual 10-year-old girl, to ensnare more than 1,000 online pedophiles in 65 countries. And then there’s Roxxxy, the female-shaped sex robot with a throbbing heart and five adjustable behavioral styles. And for those who find that too impersonal, we can have remote coitus with our own beloved using synthetic genitals that are connected online.

Our devices are also gaining more autonomy. Perhaps they will soon demand it? If we neatly time and plan our meetings in our calendar, Google Now automatically searches available travel routes and gives us a call when the departure time is approaching. We are still waiting for the digital assistant who dutifully worries and asks us whether we’re not too tired for such a late appointment, but an Outlook calendar on your iPhone can warn you that you have a very busy day ahead. Real driverless cars have already travelled thousands of kilometers on public roads in California and in Berlin’s city center. Eventually the U.S. military wants to build drones that can independently make the decision to kill.

We are the new resource

These changes are a new step in the information revolution, when information technology is emerging as intimate technology. Whereas the raw materials of the Industrial Revolution were cotton, coal, and iron ore, the raw material of the intimate technological revolution is us. Our bodies, thoughts, feelings, preferences, conversations, and whereabouts are inputs for intimate technology.

When the Industrial Revolution steamrolled over England and then throughout Europe and the United States, it created enormous havoc with two factors of production: labor and land. The result was social and political paroxysms accompanied by enormous cost and suffering. Those of us who now enjoy the prosperity and material comforts made possible through industrial transformation might judge the pain a reasonable sacrifice for the benefits, but we would be wise to remember how much pain there was and how we might be affected by a similarly convulsive transformation. Now that the Information Revolution thunders even faster over us, keep in mind that we and our children are the most important production factor, that our intimate body and mind are the raw material for new enterprises and capital. And as with the old Industrial Revolution, this one can destabilize the institutions and social arrangements that hold our world together. At stake here are the core attributes of our intimate world, on which our social, political, and economic worlds are built: our individual freedom, our trust in one another, our capacity for good judgment, our ability to choose what we want to focus our attention on. Unless we want to discover what a world without those intimate attributes is going to be like, it is vital that we develop the moral principles to steer the new intimate technological revolution, to lead it in humane ways and divert it from dehumanizing abuses. That is our moral responsibility.

The insight that a technological revolution is turning our intimate lives inside out in ways that demand a moral response is not yet common. But unless such awareness grows quickly, there will be no debate and no policy. And without debate and policy, we the people are at the mercy of the whims and visions not only of the technology creators, the profit makers, and government and security services, but also of the emergent logic of the technological systems themselves, which may have little to do with what their creators intend.

To start this very necessary and overdue debate, I suggest this proposition: Let us accept that we are becoming cyborgs and welcome cyborgian developments that can give us more control over our own lives. But acceptance of a cyborg future does not equal blind embrace. Thoughtlessly embracing all current developments will turn us into good-natured high-tech puppets, apparently happy as we pursue our perfect selves but gradually losing our autonomy. This is the path to a world made not for the difficult strivings of democracy and civil society, but for the perfectly efficient functioning of the marketplace and the security state.

Seven ways to become wild cyborgs

Children and adults need to retain a healthy degree of wildness, cockiness, playfulness, and sometimes annoying idiosyncrasy. We should aspire to be wild cyborgs. The challenge will be to apply intimate technology in such a way that we become human cyborgs. I propose that we adhere to the following seven theses as a guide to our interactions with technology.

1 Without privacy we are nothing. Our data should therefore belong to us. Without privacy we cannot be free, because we cannot choose to act without our choices and our actions being known and thus subject to unseen influence and reaction. Data about our actions and decisions are continuously captured and funnelled by commercial companies, state authorities, and fellow citizens. The large data owners say that’s not bad, and many users just parrot those words because they “have nothing to hide.” But if that is true, why do these same people lock their front door and not talk publicly about their credit card security code? Too much privacy has been lost in recent years. Many people have unwittingly donated their social data to big companies in return for social media services. It is high time that we swap our childlike acceptance of our loss of privacy and autonomy for a strong adult resistance. That implies consciously dealing with the ownership of our personal data, because they are of great economic, personal, and public value.

Over the coming years, the way we will deal with our biological data provides the litmus test for whether we will be able to keep alive the concept of privacy and ensure that our physical and mental integrity are safeguarded. The first signs are discouraging. Millions of people have already started to hand out their biological data for free to all kinds of companies. Via sensors built into consumer products such as smartphones and exercise monitors, massive amounts of biological data such as fingerprints (for example, to unlock your iPhone 5s), heart rate, emotions, sleep patterns, and sexual activity can be collected. The Advanced Telecom Research Institute (ATR) showed that wristbands with accelerometers can be used to track more than a hundred specific actions performed by nurses, for example, such as washing hands or giving an injection. Data on how we walk, collected by smart shoes, can be used to identify us, track our health, and even reveal early signs of dementia. We should rapidly become aware of the richness and potential sensitivity of our biological data, in particular in combination with social data. State security services are very interested in that type of information, and so are companies that want to market their products to you or to make decisions about your eligibility for credit, employment, or insurance. The way we handle the privacy of ourselves and others over the next five years will be decisive for how much privacy future generations will have. We should realize that by abdicating privacy we will lose our freedom.

2 We must be aware of who is presenting information to us and why. Freedom of choice has always been a central value of both the market economy and democracy. Personalization of the supply of information is putting our online freedom of choice—and we are always online—under pressure. With every click or search, we donate to the Internet service providers information about who we are and what we do. That type of information is used to build up individual user profiles, which in turn allow the providers to continually improve their ability to persuade us to do what is in their commercial or political interest and to tailor such persuasive power for each individual. What makes such propaganda and advertisement different from what we have faced up to now is that it is ubiquitous and often invisible. It can also be covertly prescriptive, pushing us to make certain choices, for example with devices that are getting better and better at mimicking human speech, faces, and behaviors to seduce and fool us. For example, psychology experiments suggest that we are particularly open to persuasion by people who look like us. Digital images of one’s own face can now be mixed, or “morphed,” with a second face from an online advertisement in ways that are not consciously discernible but still increase one’s susceptibility to persuasion. So to protect our freedom of choice we have to be aware of the interests at stake and who benefits when we make the choices we are encouraged to make. We should therefore demand that the organizations behind the devices be transparent about the way our information supply is programmed and how the software and interfaces are being used to influence us. Precedent for how to organize this might be found in health care regulations, which require that medicines be accompanied by information on side effects and that doctors base their actions on the informed consent of their patients. Maybe every app should also have an online information sheet that addresses questions such as: How is this piece of software trying to influence its users? Which algorithms are used, and how are they supposed to work?

3 We must be alert to the right of every person to freely make choices about their lives and ambitions. Individualism forms the foundation of our liberal democratic societies. So to a large extent, it should be up to individuals to choose how to employ intimate technologies for pursuing their aspirations. This position is strongly advocated by groups such as transhumanists, bio-hackers, and “quantified selfers” that promote self-emancipation through technology. But there is no such thing as self-realization untouched by mass media, the market, public opinion, and science and technology. What image of our self are we trying to become, and where does that image come from? Many markets thrive on a popular culture that challenges normal people to become perfect, whatever that means.

We are losing the ability to just be ourselves. As more technical means become available to enhance our outward appearance and physical and mental performance—our wrinkleless skin, our rippling abs, our flamboyant sex life, our laserlike concentration—firms will pursue more effective ways to seduce us to strive for a perfection that they define. Having agreed to let marketers tell us how to dress and wear our hair, are we content to have them define how we ought to shape our bodies and minds? Are we really realizing ourselves if we strive to become “perfect” in the image created by marketers? We need to protect the right to be simultaneously very special and very common, without which we may lose the capacity to accept ourselves and others for what we are. We should cherish our human ambition to strive for our own version of perfection, and also nourish ways to accept our human imperfections.

4 The acts of loving, parenting, caring for, and killing must remain the strict monopoly of real people. The history of industrial advance has also been a history of machine labor replacing human labor. This history has often been to our benefit, as drudgery and danger have been shifted from humans to machines. But as machines acquire more and more human characteristics, both physical (such as realistic avatars and robots) and mental (such as social and emotional skills), we must collectively start addressing the question of whether all the kinds of human activities that could be outsourced to the machine should be outsourced. I believe we should not outsource to machines certain essential human actions, such as killing, marriage, love, and care for children and the sick. Doing so might provide wonderful examples of human ingenuity, but also the perfect formula for our dehumanization, and thus for a future of loneliness. Autonomous drone killing might be possible someday. But because a machine can never be held accountable, we should ensure that decisions on life and death are always taken by a human being. As humans we are shaped in our intimate relationships with other humans. For example, caring for others helps us to grow by teaching us empathy for those who need care and the value of sacrifice in our lives. If we start to outsource caring to technology on a large scale, we run the danger of losing a big part of what is best in our humanity.

5 We need to keep our social and emotional skills at a high level. Use it or lose it. We all know that if we don’t exercise our physical body we will lose strength and stamina. This is also true for our social and emotional skills, which are developed and maintained through interaction with other people. We are now entering a stage in which technology is taking on a more active role in the way we interact, measuring our emotions and giving us advice about how to communicate with others. In her book Alone Together: Why We Expect More from Technology and Less from Each Other, Sherry Turkle, who for decades has been studying the relationships between people and technologies, argues that the frequent use of information technology by young people is already lessening their social skills. Her fear is that our expectations of other people will gradually decrease, as will our need for true friendship and physical encounters with fellow humans.

There are plenty of signs that we should at least take her warnings seriously. What is at stake is our ability to trust our fellow humans. Our belief that someone is reliable, good, and capable is at the core of the most rewarding relationships we have with another human being. Technology can easily undermine our trust in people; think about emotion meters that check the “true” feelings of your partner, or life-logging technology to check whether what someone is telling you is really true. To stay human we have to keep our social and emotional skills, including our ability to have trust in people, at a high level. If we don’t do that, we run the risk that face-to-face communication may become too intimate an adventure and that our trust in other people will be defined and determined by technology.

6 We have the right not to be measured, analyzed, and coached. There is great value in learning things the hard way, by trial and error. In order to be able to gain new perspectives on life, people need to be given the opportunity to make their own, sometimes stupid and painful, mistakes. New information technologies, from smart toothbrushes and Facebook to digital child dossiers and location-tracking apps, provide ample opportunities for parents to track the behavior and whereabouts of their children. But by doing so, they deprive their children of the freedom that helps them to develop into independent adults, with all the ups and downs that go with it. Can a child develop in a healthy moral and psychological way if she knows she is continuously spied upon? Does the digital storage of all our “failures” endanger the right we must have to make mistakes? The ability to wipe the slate clean, to forgive ourselves and to be forgiven, to learn and move on, is an important condition for our emotional, intellectual, and moral development. This digital age forces us to ask ourselves how to ensure that we preserve the capacity to forget and to be forgiven.

The more general question is whether it will remain possible to stay out of the cybernetic loop of being continuously measured, analyzed, evaluated, and confronted with feedback. Driven by technology and legitimated by fear of terrorism, the reach of the surveillance state has expanded tremendously over the past decade. At the same time, a big-data business culture has developed in which industry takes for granted, in the name of efficiency and customer convenience, that people can be treated as data resources. This culture flourishes in the virtual world, where Internet service providers and game developers have grown accustomed to following every user’s real-time Web behavior. And just as on the Internet, shopkeepers can monitor the behavior of customers in their physical shops through Wi-Fi tracking. Samsung is monitoring our viewing habits via its smart televisions. And if we start to use computer glasses, Samsung and Google may even monitor whom and what we glance at.

The state surveils its citizens, companies surveil their customers, citizens surveil one another, and parents and schools use all available means to surveil children. Such a surveillance society is built on fear and mistrust and treats people as objects that can and must be controlled. To safeguard our autonomy and freedom of choice, we should strive for the right not to be measured, analyzed, or coached.

7 We must nurture our most precious possession, our focus of attention. Economics tells us that as human attention becomes an increasingly scarce commodity, the commercial battle for our attention will continue to intensify. Today, “real time is the new prime time,” as we incessantly check our email and texts and equally incessantly send out data for others to check. Many new communications media divert our attention away from everyday reality and toward a commercial environment in which each content provider attempts to optimally monopolize our focus. On the Internet, we all have become familiar with commercial ads that are tailored to our preferences. In the near future, smartphones, watches, eyewear, businesses, and a growing circle of digital contacts will each demand more and more of our attention during everyday activities such as shopping, cooking, or running on the beach. And since attention is a scarce resource, paying attention to one thing will come at the expense of our attention to other things. Descartes articulated our essence, “I think therefore I am,” and the digital age forces us to protect our freedom from continual intrusion and interruption, to guard our own unpolluted thoughts, our capacity to reflect on things in our own way, because that is what we really are. We must cherish what is perhaps our most precious possession, the determinant of our individual identities: our ability to decide what to think about or just to daydream.

The intimate technological revolution will remake us by using as raw material data on our metabolism, our communications, our whereabouts, and our preferences. It will provide many wonderful opportunities for personal and social development. Think of serious games for overcoming the fear of flying, treating schizophrenia, or reducing our energy consumption. But the hybridization of ourselves and our technologies, and the political and economic struggle around this process, threaten to destabilize some qualities of our intimate lives that are also among the core foundations of our civil and moral society: freedom, trust, empathy, forgiveness, forgetting, attention. Perhaps there will be a future world where these qualities are not so important, but it will be unlike our world, and from the perspective of our world it is hard to see what might be left of our humanity. I offer the above seven propositions as a good starting point to further discuss and develop the wisdom that we will need to stay human by becoming wild cyborgs in the 21st century.


Rinie van Est is coordinator of technology assessment at the Rathenau Instituut in the Netherlands.


Court Sides with Whales

The United Nations’ highest court has halted Japan’s large “research whaling” program in the Southern Ocean off Antarctica. But the decision will not stop all whaling by Japan or several other countries, and creating a “whale conservation market” that sells sustainable “whale shares,” as described in Issues, may provide an effective alternative to legal or regulatory mechanisms to protect global whale populations.

Promoting Free Internet Speech

Speaking during a visit to Beijing, Michelle Obama declared that freedom of speech, particularly on the Internet and in the news media, provides the foundation for a vibrant society. Striking a similar theme in Issues, Hillary Rodham Clinton, then the U.S. Secretary of State, said that protecting open communication—online and offline—is essential to ensuring the fundamental rights and freedoms of people everywhere.

Protecting the Unwanted Fish

The conservation group Oceana has released a new report detailing how “bycatch” is damaging the health of U.S. fisheries. Ecologist and writer Carl Safina has examined this and related problems in Issues, calling for a new era of fisheries management that will beef up old tools and adopt an array of new “smart tools” to protect these valuable and threatened resources.

Alternate Routes to Career Success

Education expert Michael J. Petrilli argues in the online magazine Slate that many students would be best served not by pushing them to pursue a traditional college education but by providing them with sound early education followed by programs in high school and at community colleges that help them develop strong technical and interpersonal skills. Issues has examined various ways of structuring such alternative routes to the middle class, including the expansion of occupational certificate and apprenticeship programs.

Progress in Childhood Obesity, yet Challenges Remain

A major new federal health survey has reported a 43% drop in the obesity rate among young children over the past decade, but older children and adolescents have made little or no progress. In Issues, Jeffrey P. Koplan and colleagues presented lessons from an earlier groundbreaking study by the Institute of Medicine on what the nation should be doing to address this epidemic and its higher risks for serious disease.

Pitbull Promotes Education

Along with making school attendance compulsory, states and cities should develop programs to keep students—especially those at risk of absenteeism and poor performance—engaged in learning from elementary grades through high school graduation, two education experts have noted in Issues. In an innovative application of this spirit, the pop star Pitbull is supporting a charter school in Miami that engages students by drawing its lessons in all subjects, including science and math, from the world of sports.

Immigration and the Economy

The financial services company Standard & Poor’s has recently released a report suggesting that increasing the number of visas issued to immigrants with technical skills will boost the U.S. economy and even spur job growth for native-born workers. Several Issues articles have made similar cases, but an expert in labor markets has also argued that the nation is producing more than enough quality workers in scientific and engineering fields—and policymakers and industry leaders should proceed accordingly.

Leveling the Playing Field for Women in Science

Issues has explored the status of women in science from several angles, including in an examination of how to plug the leaks of both women and men in the scientific workforce, and in a personal essay about the choices women often face when confronting the “system” of science. Many of these and other ideas are explored in The Chronicle of Higher Education by Mary Ann Mason, co-author of the recently published book Do Babies Matter? Gender and Family in the Ivory Tower.

Reconstructing the View

The landscape has been a source of artistic exploration and contemplation since the earliest cave drawings. Represented in paintings and photography as well as film and the tourist’s snapshot, a variety of perspectives have all contributed to building within our collective imagination a sense of the places we inhabit and visit, potentially sparking our awe and wonder. Add to that the information gathered by the observations of geologists, cartographers, seismologists, and others trained in scientific observation, and we have a multifaceted and layered understanding of the land. An informed artist can remind us of how our perceptions are constructed and thus cast new light on the debates that arise over the meaning and value of particular landscapes and the importance of protecting them.

Since 1995, the collaborative team of photographers Mark Klett and Byron Wolfe has explored questions of constructed perception, time, and change. As early as 1997, they focused their visual inquiry on the Grand Canyon and surrounding areas. They analyzed the work of early creative practitioners who have documented the region for various purposes and identified the exact locations portrayed in these historic photographs and drawings. For example, they discovered that the 1882 lithograph by draftsman William Henry Holmes of the view of the Marble Canon Platform was so precise that it allowed them to create and insert new images into the original, matching the forms. The circular images that Klett and Wolfe chose to insert in this particular piece were taken through a military spotting scope, suggesting another perspective in viewing the land. From the exact same geographic point used by Holmes, they created a new photograph that incorporates the original view. A digital version of the historic image was inserted within the contemporary photograph, asking the viewer to consider the changes that have happened over time, not only in the land but in our perception of it.

This artistic exploration resulted in a body of work published in the book Reconstructing the View: The Grand Canyon Photographs of Mark Klett and Byron Wolfe. Wolfe is the Program Director for Photography at the Tyler School of Art Center for the Arts at Temple University in Philadelphia, Pennsylvania, and a former student of Klett’s. Klett, Regents’ Professor of Art at Arizona State University in Tempe, Arizona, worked as a geologist before pursuing photography. According to Klett, what draws him toward being an artist is that it enables him “to move into territories that would normally be seen as somewhat outside of the limits of any traditional practice, or at best at the outskirts of a discipline’s interests. Artists can often fill in the voids between disciplines and provide the glue to stick them together in unconventional ways.” Reconstructing the View reveals the combined invention of these two artists, offering provocative ways to think about the land, its history, and our role in “seeing” it. Collectively and individually Klett and Wolfe have collaborated with, been inspired by, and/or consulted with geologists, paleontologists, archeologists, botanists, ethnobotanists, writers, poets, sculptors, and historians. By bringing together their collective backgrounds, enriched by the insights of others, the artists push one another to create work that is more radical and more subversive than they might have created individually. “Collaboration is the amplification of ideas,” according to Wolfe.


JD Talasek, Director, Cultural Programs of the National Academy of Sciences

Real Numbers

Poverty and vulnerability to storms

Typhoon Haiyan, which hit the Philippines on November 8, 2013, left behind more than 6,000 dead and displaced a population the size of Los Angeles. The scale of the damage is a result not only of the severity of the storm but also of the vulnerability of the millions of impoverished people living in the Philippines. Fragile food security leads to acute malnutrition with long-term effects, limited health and sanitation infrastructure results in the spread of disease to more victims, and the poorest surviving households are pushed further into debt. This vulnerability has been compounded throughout a decade that has included a succession of major typhoon disasters.

Typhoon is a regional name for a tropical cyclone, and it’s a familiar word in the Philippines, which has been the country most often struck by these disasters during the past decade and the past half-century. Other countries that have suffered catastrophic storm damage—Bangladesh, China, Myanmar, and India—are also examples of how poverty increases vulnerability.

Philippine vulnerability

In the Philippines exposure to typhoons is inescapably high. The geography makes coordinated efforts to reduce disaster vulnerability critical. But over the past half-century mortality per million people shows no appreciable decline. Vulnerability resulting from growing coastal populations living in unsafe housing combined with environmental degradation can outpace even the largest evacuation and response plans.

Source: EM-DAT and The World Bank

Recent cyclone disasters in India and Bangladesh reveal major improvements in disaster preparedness. In India, Cyclone Phailin in 2013 caused 38 deaths, a vast improvement from the 10,000 deaths resulting from a similar storm in 1999. In 2007 Cyclone Sidr in Bangladesh caused the enormous loss of 4,400 lives, but this was far fewer than the almost 140,000 lives lost in a comparable storm in 1991. But similar improvements in community resilience are yet to be made in many other regions, as was tragically apparent in Myanmar in 2008 when a storm claimed more than 138,000 lives. Disaster preparedness programs that can evacuate and shelter millions of people take years to implement.

An illustration of the socio-economic capacity needed to protect against disasters can be seen in the mortality data for the affluent storm-prone countries Japan and the United States. The only outlier is the high mortality caused by Hurricane Katrina, which affected one of the poorest regions in the United States. Storms are inevitable, but the resulting death and destruction can be limited with adequate preparation. Poor people, especially in developing countries, are particularly vulnerable to storms. They deserve better protection.

Data for the figures are from the Emergency Events Database (EM-DAT), maintained by the World Health Organization Collaborating Centre for Research on the Epidemiology of Disasters (CRED). Population and gross national income (GNI) per capita data are from the World Bank World Development Indicators.

Deadly storms

Globally, cyclone disaster mortality has exceeded 10,000 deaths in 11 of the past 50 years. Four years in the past decade have exceeded 5,000 deaths. These figures show that cyclones are an enormous and relentless threat to human life.

Source: EM-DAT

Poverty matters

The occurrence of a cyclone remains a deadly hazard for people living in countries with low gross national income per capita, such as the Philippines ($2,470), India ($1,530), and Bangladesh ($840). Several storm events far exceed the scale shown here and are therefore not included: Bangladesh in 1965, 1970, 1985, and 1991; India in 1971 and 1977. By contrast, wealthy countries such as Japan and the United States suffer relatively low mortality rates when they are hit by powerful storms because they can afford to invest in the infrastructure and other measures needed to reduce their vulnerability.

Source: EM-DAT and The World Bank


Travis R. Doom is a program specialist at Arizona State University’s Consortium for Science, Policy, and Outcomes office in Washington, DC.

The Politics behind China’s Quest for Nobel Prizes


JUNBO YU


China is applying its strategy for winning Olympic gold to science policy. It may be surprised by the outcomes—but overall, the world will benefit.

Skeptics about the capacity of China to join the ranks of the industrialized nations should be challenged by the recent rise of Chinese high-tech businesses, including the high-speed train industry, telecommunications service providers such as Huawei, IT service providers such as Lenovo, and new market-leading energy equipment suppliers such as Suntech, and by the competitive success and admiration, even fear, that these businesses have spurred across the world. Yet skepticism is not entirely unwarranted. Some inconvenient truths about science and technology development in China stand in the way of its ambitions. Most prominent among these, as noted by Xuesen Qian, the “Father of Chinese Rocketry” (who received his Ph.D. from MIT and returned to China in 1955), is the failure of Chinese universities and research institutes to cultivate world-class creativity and innovation among their scientists. To Chinese leaders, an increasingly aggravating illustration of this truth is that no homegrown scientist from the mainland has claimed a Nobel Prize in Physics, Chemistry, or Medicine.

The Chinese Communist Party (CCP) that today governs the world’s second largest economy and second largest R&D budget is determined to correct this failure. In October 2013, the CCP Organizational Department identified six scientists as China’s “outstanding talents,” the top tier of the “Ten Thousand Talents Program,” and the most likely candidates for a Nobel Prize. Scientists who achieve this rarefied level of recognition will benefit from greater autonomy in setting their research agendas, secure research funding to be used at their discretion, and administration and assessment under terms negotiated directly between the government and the scientists. These seemingly ideal privileges are part of the larger effort to promote China’s overall innovation capacity. But only one goal of this program stands out as explicit and measurable: the Nobel Prize.

The state-driven charge toward the Nobel Prize is unprecedented and unparalleled in science policy. Today’s fierce competition among countries for technological advantage, reflected in a diversity of national science, technology, and innovation (STI) policies, has become a bit imprudent and extravagant—national governments are overconfident in their bets on tomorrow’s revolutionary technologies, while the cost/benefit effects of their tremendous inputs are of less concern. But no other country uses the Nobel Prize to anchor the success of a national innovation strategy. Such a strategy appears unbalanced, short-sighted, and utterly antithetical to the principle that creativity and innovation in scientific research must be driven by the curiosity of the scientist. In short, this narrow, nationalistic idea appears to say more about CCP politics than about STI policy. What then are the politics?

The continuing quest for legitimacy

Rulers of authoritarian countries have to justify their legitimacy, and the CCP is no exception. Its legitimacy rests on several historic sources: first, achieving peace, unity, and freedom from exploitation by Western colonialists in 1949; then Mao’s personal charisma amid the Great Famine and the Cultural Revolution; and most recently the delivery of rapid GDP growth since the “Reform and Opening Up” of the economy initiated in the late 1970s.

But the Party has pursued other, less apparent or understood strategies for strengthening its legitimacy. One of these is to close the technological gap between China and the West.

Since China’s defeat in the First Opium War in 1840, every Chinese regime has suffered military disadvantage due to inferior technological capacity. Constantly bullied and intimidated by various foreign forces, both the elites and the general public have been obliged to pursue effective measures to catch up technologically and thus to improve national security. The “Self-Strengthening Movement” initiated by the Qing Empire from 1861 to 1895 was the first state effort at technology catch-up, with measures ranging from the financing of advanced public education to the establishment of modern arsenals. The eruption of the Sino-Japanese War (1894–1895) terminated this attempt; indeed, the inability of the government to deliver effective technology catch-up partly explains its failure in the war, and drastically intensified the legitimacy crisis of the empire.

The problem of legitimacy was further amplified by the painful war to resist the Japanese invasion between 1931 and 1945. Among Chinese domestically and overseas, a deep desire for national security and technological superiority was growing. The CCP cleverly and effectively took advantage of the combination of widespread feelings of inferiority and the nation’s Cold War frontier position to justify the state’s priority of developing the defense industry at all costs. The atomic bomb, the hydrogen bomb, the intercontinental ballistic missile, the nuclear submarine, the human-made satellite: All of these achievements were tied to tensions between China and the United States or the former Soviet Union, or promoted to mitigate the catastrophe of miscalculated domestic policies, such as the Great Leap Forward or the Cultural Revolution. In retrospect, although the outcomes of the CCP’s major domestic policies between 1949 and 1978 were devastating, even to its own legitimacy, the conspicuous development of China’s defense industry played a crucial yet overlooked role in preserving the CCP’s legitimacy by tapping into the public desire for independence, national security, and technology catch-up.

The end of the Cold War and the diplomatic and economic opening-up begun by Deng Xiaoping, however, have increasingly obliged the CCP to engage in a new strategy for playing the catch-up card. This is, first, because China’s national security situation clearly improved with the collapse of the Soviet Union and the establishment of broad mutual economic interests through trade with most of its neighbors and with the United States in particular. Today, despite years of nationalist propaganda about the threat of Taiwanese and Tibetan separatism, territorial disputes with East Asian neighbors, and the U.S. strategy of containment, very few Chinese would consider their national sovereignty to be in danger. Second, China is now the “World’s Factory,” and its relatively low-skilled production model and dependence on imported technology across various industries have stimulated severe public criticism of the government’s underinvestment in nondefense R&D as well as science and engineering education. The legitimacy conferred by technological catch-up today resides more with the success of economic competitiveness than with growing military strength.

Proving itself to be pragmatic and adaptable, in the early 1980s the CCP launched a series of concerted and escalating policies aimed at convincing the public of its commitment to the pursuit of cutting-edge technological and innovation capabilities. At the input end, the impact of these policies has been extraordinary: A recent study shows that China’s R&D expenditures as a percentage of GDP have increased from below 0.6% in 1982 to nearly 2% in 2013. Assuming a continuation of China’s 18% annual growth rate in R&D spending since 2000, it is now on track to overtake the United States in total (public and private) R&D spending by 2022. Government R&D funds, in particular, have been growing at a pace exceeding the expectations of even the most enthusiastic scientists, and China now not only holds the largest pool of science and engineering researchers in the world, but has triggered a reverse brain drain by luring back talent who studied and worked abroad with the promise of government research largesse.


The landscape at the output end, however, remains genuinely disappointing: Journal publications by Chinese researchers included in the Science Citation Index (SCI) grew from a negligible share in 2001 to 9.5% in 2011, second in the world to the United States, but with few highly cited articles, indicating trivial creative value and low impact on the R&D enterprise. Chinese firms still depend on foreign sources for their core technologies, paying tens of billions of dollars each year to purchase overseas intellectual property. Massive research spending has not led to transformative innovations and products that can be truly commercialized; there is little potential for fostering a Chinese equivalent to Apple. This contrast between inputs and outputs not only makes it clear to the CCP that developing technological and innovation capacity requires more than the mere accumulation of research capital and labor, but also pressures the Party to seek more effective approaches to technological catch-up in its ongoing effort to defend its legitimacy.

Thus, the state is now seeking to replicate what it has learned in another domain of catch-up: Olympic sports. Here, the CCP has impressed the public and bolstered its legitimacy through policies that have moved Chinese athletes into the leading rank of Olympic gold medalists. But have the right lessons been learned? In the sports case, the Party succeeded at the Olympic level despite its refusal to relinquish centralized control over the organization of athletic activities, a failing that has inhibited the development of a truly popular and professional athleticism. And as with sports, the conditions necessary to cultivate scientific and technological creativity and innovation have some inherent conflicts with authority and centralized power that the CCP’s legacy can hardly accommodate. Nonetheless, the successful pursuit of Olympic gold has had a significant impact on the Party’s public image and legitimacy that the CCP believes will apply in the science realm as well. Given the success of what is often referred to as the CCP’s “Olympic strategy,” why not develop a comparable “Nobel Prize strategy”?

Ambivalence about scientists

The fundamental logic of using Nobel Prizes to measure national scientific ability and catalyze innovative creativity is untenable, because it inappropriately equates the exceptional abilities of a very few individuals with the nation’s scientific capacity, while confusing scientific discovery with technological innovation. This strategy will ultimately run into trouble, regardless of whether Chinese scientists selected by the state can claim a couple of Nobels. Yet even if this bold charge toward Nobel Prizes fails to achieve its desired impacts, it may well lead to unintended consequences that will have profound ramifications at the national and international levels.

Unexpected changes are likely to show up first in the relationship between leading scientists and the Party. Historically, the CCP has maintained firm control over Chinese scientists, not only because they are indispensable to improving legitimacy through efforts such as technology catch-up, but also because scientists have a professional inclination toward intellectual freedom, which presents a perennial threat to authoritarian rulers. In light of this tension, the talents of the Chinese science community have best been mobilized when scientists are concentrated in “bunkers,” or megaprojects such as the space program, as endorsed by the Party-state. But scientists are ruthlessly crushed as soon as they engage in political opposition, as occurred throughout Mao’s regime and again after the Tiananmen Square protest of 1989.

Although the CCP has become more tolerant of Western concepts such as transparency, accountability, and public participation in politics, as well as the push for social-political reform from scientists, the Party retains strict control over the careers and resources of scientists by handpicking their leaders and manipulating the allocation of research funds. The result of this state control over the science community unfolds in typical fashion: Instead of focusing on research and building strong links to other scientists and researchers for collaboration, Chinese scientists have developed a strategic culture that focuses on developing vertical networks with powerful bureaucrats. It is an open secret in China that to obtain major grants, doing good research is much less important than holding a chief administrative position or schmoozing with experts enlisted as funding committee members. The discretion of CCP officials outweighs peer review in assessing the significance of research outcomes, so making alliances with bureaucrats is necessary for scientists. This culture of strategic network-building is so pervasive that even returnees from abroad are seldom exempted and quickly become part of it.

As this research culture and the logic of state control that underpins it act to increasingly stifle creativity, corrupt scientific ethics, and squander public money, the Party has begun to realize that it is endangering part of its own legitimacy by compromising China’s potential for continued technology catch-up, including its desire for Nobel Prizes. But allowing self-governance among the scientific community can be harmful to the monopoly of power, and must be denied. Thus the Party harbors deep ambivalence toward independent scientific research, experiencing both desire and fear. The result has been a compromise of principles implemented through the “Ten Thousand Talents Program.” “Outstanding talents” will be given secure funding and expanded autonomy to develop research designs, operations, and evaluations, but the list of these “outstanding talents” is handpicked by the Party’s Organizational Department. This is not to say that Party officials will be the only ones to make the assessments of scientific talent and promise; nonetheless, the political reliability of a scientist will be an additional prerequisite for the reward, and the achievements of the scientist thereafter will entail a debt to the Party. Apparently, the CCP believes it has found a way to resolve its ambivalence: Speed up the national drive toward Nobel Prizes while reserving the Party’s claim to these prizes, and excite elite scientists with extra freedom while retaining de facto control, given their paucity of numbers and manageable monitoring costs.

Appointing national champions and making them heroes and role models for the public are strategies often used by authoritarian states to mobilize political campaigns. However, on many occasions, such as the defections of former Soviet Union and Eastern Bloc celebrities, including artists, athletes, and scientists during the Cold War, they end up destabilizing the regime by creating and escalating tensions between the state’s will and the liberal reasoning of heroic individuals. Indeed, this is one lesson from the experience of sports, as the new generation of Chinese stars, such as Li Na (tennis), Yao Ming (basketball), and Liu Xiang (track and field), no longer shy away from expressing their reservations about the Party’s obsession with gold medals and openly criticize how state intervention has distorted genuine athletics. The push for gold at the Olympic Games, and the drive for Nobel Prizes, may well work against the authoritarian state in the end. Superiority on level playing fields among nations cannot be sustained without openness, and once openness begins, however cautiously at first, it threatens the monopoly of information by the authoritarian state and may eventually become irreversible.

In reality, the Chinese scientists selected as “outstanding talents” will have to be given additional autonomy so that their research can fully conform to international research norms and can be recognized in the competition for Nobel Prizes. As their integration into the international science community increases, with or without Nobel Prizes, these scientists and their domestic collaborators, assistants, and students will find themselves pushing back against and likely rejecting the norms of the Chinese research culture. Through their actions and activities if not their voices, these scientists will become a force for reform. Although the Party’s Nobel strategy for chasing legitimacy and national pride may thus serve its purpose in the short term, in the long run it must inevitably threaten the Party’s ability to maintain control over public discourse and thus over civil society.

Who would benefit?

China’s political commitment to science and innovation serves a political purpose in America, too. The threat of Chinese competition on the advanced technological front, which I highlighted at the beginning of this essay, is often invoked by U.S. policy, scientific, and business leaders as a reason to increase government spending on research and innovation, to bolster immigration policies that encourage foreign-born scientists to stay in the United States, and so on. In a 2011 Wall Street Journal op-ed, Microsoft’s former chief operating officer went so far as to suggest that the United States was more like a developing country than China, if judged by the latter’s robust investments in its emerging scientific and innovation prowess.

Here my point is different. U.S. policymakers should welcome and support the Party’s dash for Nobel Prizes. In betting on Nobels, the Party faces a critical tradeoff between legitimacy and authority. If the Party hopes to enhance its legitimacy through the reputational benefits of future Nobel laureates, it must first unlock the potential of Chinese scientists by allowing more autonomy and reducing interventions, which in turn will inevitably diminish the Party’s authority. The CCP’s temporary solution seems to be autonomy preconditioned on alleged loyalty and confined to a limited number of scientists. However, being intellectuals, the more these leading Chinese scientists are engaged with the international science community, the more needs, resources, and strength they can mobilize to overcome the institutional weaknesses of China’s research culture and push for domestic political reform. Such processes will oblige the state to become more accountable to its public, thus empowering and stabilizing a transition away from authoritarianism. This is not to say that the CCP won’t use the outcome of its Nobel campaign to boost its image or nationalist pride. But to the extent that U.S. politicians seek to exploit the threat of Chinese scientific competition to advance their own political ends, they will actually lend credence to the pursuit of similar ends by the CCP.

I am arguing that China’s bet on the Nobel Prizes will subject its research system to an open, liberal, and rules-based competition that will erode the Party’s authority in the long run. But I am also suggesting that U.S. fears about Chinese scientific and technological competitiveness remain overstated in the first place. As long as the CCP is reluctant to pursue enhanced legitimacy through genuine political reform, the Chinese research culture I described earlier will tend to prevail among most Chinese scientists and researchers, due to the powerful incentives of political patronage. This culture dampens the prospects for real scientific breakthroughs. Nor is it unrelated to China’s poor performance in commercializing and industrializing inventions, the result of an “industrial strategic culture” that privileges firms with political patrons and effectively promotes a destructive entrepreneurship that prefers short-term profits, cronyism, and excessive diversification. So even if, by privileging a few of its top scientists, the Party does manage to garner some Nobel Prizes, the entrenched culture of science and innovation in China may still help industrialized countries with more efficient innovation systems to benefit from China’s Nobel science more than would China itself.

Nobel Prizes are awarded on the basis of creative merit, as well as the breadth of impact on scientific research and human welfare. Indeed, a scientific or technological advance will usually be exposed to the world for a lengthy period of scrutiny, often including complex processes of commercialization and industrialization, before its worthiness for a Nobel Prize can be fully assessed and endorsed. This complex context within which Nobel-worthy research results are in effect subject to a test of scientific and societal significance rules out the possibility of protectionism of scientific discoveries on China’s part, as that would undermine the competitive potential of the discoveries. Unlike gold medals in the Olympic Games, which have limited value other than fame and national pride, the research that is rewarded by Nobel Prizes in science has vast spillover effects for human welfare that cannot be exclusively exploited by the winners. The Chinese nationality of a Nobel Prize winner imposes no extra constraints on the ability of the rest of the world to benefit from the scientist’s breakthrough. Meanwhile, the state-organized infringement of intellectual property and rampant industrial espionage that have in the past antagonized developed countries in their dealings with Chinese partners also become less tempting because they undermine the broader effort to establish a national reputation for scientific originality.

In contrast to its enthusiasm for Nobel Prizes in science, the Party sought to suppress public discussion of the Nobel Peace Prize awarded to the incarcerated political activist Liu Xiaobo. It also toned down the official reaction to Mo Yan’s Nobel Prize for Literature. These orchestrated reactions mirror the Party’s longstanding aversion to public discourse on universal values, and its preference for value-neutral products (such as athletic prowess and, supposedly, scientific breakthroughs) as proof of its competencies and authority. However, scientists as human beings cannot be value-free or value-neutral. The fundamental paradox in China’s bet on Nobel Prizes is that the Party must free its scientists and researchers from their patron/client professional culture before their creativity and potential can be released. But if the Party were to remove restrictions on its scientists in order to strengthen its legitimacy, a more internationalized and independent science community would result, with a consequently increasing influence on domestic civil society that in turn would raise more diversified challenges to the Party’s legitimacy. China’s Nobel Prize strategy is thus a classic win-win for the United States and the West, whose policymakers should do what they can to encourage it. The strategy cannot damage Western countries’ competitive edge in science and technology in the short run. In the long run, it will foster China’s independent science community, substantiate its civil society, gradually liberalize its politics, and in the process might even lead to scientific advances from which all of humanity can benefit. These are outcomes that advance the long-term interests of both America and China.

Recommended reading

George J. Gilboy, “The Myth behind China’s Miracle,” Foreign Affairs, no. 4 (2004): 33–48.

David M. Lampton, “How China Is Ruled,” Foreign Affairs 93, no. 1 (February 1, 2014): 74–84.

Yigong Shi and Yi Rao, “China’s Research Culture,” Science 329, no. 5996 (September 3, 2010): 1128.

Ning Wang, “The Making of an Intellectual Hero: Chinese Narratives of Qian Xuesen,” The China Quarterly 206 (2011): 352–371.

Jingjie Yang, “Science Talent Program Eyes Nobel Prize,” Global Times, October 31, 2013.

David Zweig and Huiyao Wang, “Can China Bring Back the Best? The Communist Party Organizes China’s Search for Talent,” The China Quarterly 215 (2013): 590–615.


Junbo Yu is an associate professor in public policy at the School of Administration, Jilin University, in Changchun, China, and a research fellow at the Peking University-Fudan University-Jilin University Co-Innovation Center for State Governance.

From the Hill


Administration releases FY 2015 budget request

President Obama’s FY 2015 budget proposal totals $3.9 trillion, of which roughly 63% is mandatory spending such as Social Security payments, roughly 30% is discretionary spending, and the rest is net interest. By comparison, in FY 2010 the split was 55% mandatory and 39% discretionary.

The expected deficit is pegged at $564 billion, a decrease from last year. The budget matches the $1.014 trillion discretionary spending cap agreed to by Congress in December, although the president has proposed an additional $56 billion in discretionary spending on top of this cap, via what’s being called the Opportunity, Growth, and Security Initiative.

Total R&D funding would amount to $135.4 billion, an increase of $1.7 billion or 1.2% above FY 2014 levels. This also represents a $5 billion or 3.9% increase above FY 2013 sequester levels. This does not account for the expected inflation rate of 1.7% this year, which means that total R&D would actually decline slightly in inflation-adjusted dollars. Defense R&D would increase by 1.7% above FY 2014 levels, and nondefense R&D would increase by 0.7%. This represents a departure from recent budgets, which have tended to be more generous to nondefense R&D at the expense of defense R&D. Among the agencies, the largest increases would occur within the Department of Energy, particularly within the National Nuclear Security Administration. On the nondefense side, energy efficiency, renewable energy, and ARPA-E would also fare well, relatively speaking. The U.S. Geological Survey and the Department of Commerce R&D agencies would also receive relatively large boosts. Outside of these few, no other departments would keep pace with inflation.

Source: Budget of the United States Government FY 2015. Projected deficit is $564 billion. © 2014 AAAS

The Opportunity, Growth, and Security Initiative would provide an additional $5.3 billion for R&D. This includes nearly $1 billion for NIH, over $500 million for NSF, and nearly $900 million for NASA. The initiative would also fund a national network of 45 manufacturing innovation institutes in partnership with industry. However, all of this additional funding would require Congress to raise the current discretionary spending cap or make some attempt to secure this additional R&D funding through cuts elsewhere.

National Science Foundation (NSF). The FY 2015 budget request for the NSF is $7.25 billion, an increase of 1.2% above the FY 2014 estimate. Of that request, Research and Related Activities would receive $5.72 billion, a decrease of $2 million from the FY 2014 estimate. According to NSF acting director Cora Marrett, the administration’s request, if funded by Congress, would help to support 11,000 research grant awards to 2,000 institutions and support 300,000 individual researchers.

National Institutes of Health (NIH). The NIH budget request is $30.3 billion, an increase of $200 million above FY 2014. According to Kathy Hudson, NIH deputy director for science, outreach and policy, the request would allow the agency to award 4% (329) more grants than in 2014 and 13% (1,092) more than in 2013. NIH is requesting $100 million (an increase of $60 million) for the BRAIN initiative to support the development of new tools for mapping brain circuitry and to measure brain activity. In addition, Hudson noted that NIH is requesting $30 million within the Common Fund to launch a new research program modeled after the Defense Advanced Research Projects Agency (DARPA).

U.S. Department of Agriculture (USDA). The USDA request is $2.4 billion for FY 2015, an increase of $29 million (1.2%) above the FY 2014 estimate. According to Catherine Woteki, USDA under secretary for research, education, and economics, the request includes $1.14 billion for the Agricultural Research Service; $1.5 billion for the National Institute of Food and Agriculture, including $325 million for the competitive Agriculture Food Research Initiative (AFRI); $83 million for the Economic Research Service; and $179 million for the National Agricultural Statistics Service. In addition, Woteki announced that the USDA is proposing the creation of three “Innovation Institutes” to be funded at $25 million annually per institute to focus on research in three critical areas: pollinator health, antimicrobial resistance, and a national network for bioproducts manufacturing innovation.

Source: OSTP. Red line reflects inflation, expected at 1.7 percent. Does not include additional funding proposed via Opportunity, Growth, and Security Initiative. © 2014 AAAS

Department of Energy (DOE). The president requested $27.9 billion for DOE, which represents a 2.6% increase above the FY 2014 enacted level. The Office of Science, which houses most of DOE’s fundamental science, would receive a 0.9% increase from FY 2014 enacted levels to $5.1 billion. Outside the Office of Science, priorities include ARPA-E, which would receive a 16.1% increase to $325 million, and energy efficiency/renewables, which would receive a 21.9% increase to $2.3 billion. Meanwhile, reactor research, fossil fuels research, and grid cybersecurity research would all experience moderate decreases in funding.

National Aeronautics and Space Administration (NASA). The budget requests $11.6 billion for NASA R&D, a 1% decrease from FY 2014 levels. Within NASA, the science and the aeronautics directorates would both be reduced from FY 2014 levels, as would development activities related to the Orion Crew Vehicle and the Space Launch System. The science budget would decline by 3.5% to $5 billion, with only the heliophysics program receiving an increase. Aeronautics would decline by 2.7% to $551 million. Activities that would receive budget boosts include exploration systems R&D and the space technology directorate. All programs would benefit from the proposed Opportunity, Growth, and Security Initiative.

Science, Technology, Engineering, and Math (STEM) Education. The president’s FY 2015 budget also includes a request of $2.9 billion for federal agency STEM education programs. In addition, it includes a range of program consolidations and eliminations within nine federal departments and agencies. Overall, the budget request would consolidate or eliminate a total of 31 STEM programs for a total savings of $145 million. It was noted during the White House Office of Science and Technology Policy press conference that the consolidations were implemented within the federal agencies rather than transferring programs between agencies as was proposed last year. In addition, agencies are to continue to “coordinate to implement the federal STEM Education 5-Year Strategic Plan through the Committee on STEM Education (CoSTEM).”

Special Programs. The president’s budget funds three long-running interagency initiatives. The U.S. Global Change Research Program (USGCRP) would receive $2.5 billion, a 0.5% increase from estimated FY 2014 levels. USGCRP is a multi-agency program that coordinates federal research on climate change and its potential effects. The Networking and IT R&D (NITRD) Program, which receives its largest contributions from the Department of Defense, DOE, and NSF, would receive $3.8 billion, a 2.9% decrease from FY 2014 levels. The National Nanotechnology Initiative would remain unchanged from the FY 2014 level of $1.5 billion.

The newest interagency project, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, would see its FY 2014 funding doubled to $200 million in FY 2015. The program coordinates research within DARPA, NIH, NSF, and the Food and Drug Administration to understand brain function and potentially develop more effective treatments or prevention measures for various diseases of the brain. NIH will contribute $100 million in FY 2015, DARPA will contribute $80 million, and NSF will invest an additional $20 million.

“From the Hill” is adapted from the newsletter Science and Technology in Congress, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

FY 2014 appropriations finalized

On January 17, the President signed the Consolidated Appropriations Act of 2014 (HR 3547), providing appropriations for the remainder of the 2014 fiscal year. The bill operates within the framework established by the December budget deal, with the overall discretionary spending limit of $1.012 trillion that rolls back half of the scheduled spending reductions following sequestration.

AAAS estimates place FY 2014 R&D at $136.2 billion, 2.6% above FY 2013 estimates and 4.4% below FY 2012 levels. However, defense and nondefense R&D will move in opposite directions. Defense R&D will fall 1.6% below FY 2013 post-sequester estimates and 10% below FY 2012 levels, whereas nondefense R&D will rise 7.6% above FY 2013 post-sequester estimates and 2.5% above FY 2012 levels.

Under the omnibus spending bill, many R&D departments and agencies received at least a modest increase above sequester levels, and some fared better than expected. For instance, the Department of Energy’s (DOE) Office of Science, science and technology programs at the National Aeronautics and Space Administration, and research programs at the Department of Agriculture all ended up closer to the higher spending levels recommended by the Senate than the lower House figures, and DOE’s low-carbon energy technology programs avoided the stark cuts proposed by the House. However, the National Institutes of Health (NIH) will remain roughly $700 million below FY 2012 funding levels, and the U.S. Geological Survey and Environmental Protection Agency also do not return to 2012 funding. Even with some positive outcomes for certain agencies, overall federal R&D could drop to 0.80% of the gross domestic product, the lowest level since the end of World War II.

Also embedded in the FY 2014 omnibus appropriations bill is language regarding public access to research and government travel restrictions. Within the Labor-Health and Human Services (HHS) portion of the bill is language that would require researchers who receive federal funding from agencies funded via Labor-HHS to make their accepted research manuscripts publicly available within 12 months. The language extends the NIH policy that has existed for a number of years to agencies such as the Department of Education and the Centers for Disease Control and Prevention.

Elsewhere within the omnibus bill is language capping at 50 the number of federal employees who may attend international conferences. In addition, it would lay out a series of reporting requirements to improve transparency regarding government travel, including information regarding total costs of government-sponsored conferences and contracting procedures.


Editor’s Journal

KEVIN FINNERAN

And Now for Something Completely Different

The poet Muriel Rukeyser wrote that “The Universe is made of stories, not of atoms.” And for most people, she is right, even though the hardcore Issues reader might not be happy about it. We value the dispassionate logic of intellectual discourse, the relentless building of argument on the sturdy foundation of evidence. We discount the cheap appeal to emotion and the pathos of individual cases. We relish the opportunity to remind our readers and listeners that data is not the plural of anecdote. And if the anecdote is fictional, I mean, is it even worth considering?

Well, yes, unless we think the world is run by the characters on “The Big Bang Theory.” Thankfully, we have not entrusted our well-being to the control of eggheads or technocrats. The wisdom of democracy is to recognize that we cannot put all our trust in bloodlines or academic credentials. What we now call the wisdom of crowds—or the advantage of index mutual funds—is the recognition that elites and specialists are not infallible, that the world is too complex for any individual or small group to comprehend completely. There is some collective wisdom that comes from diversity of experience and ways of understanding that experience.

The most powerful and persistent way of making sense of the human experience is fiction. Whereas the vast majority of scientific literature eventually loses its value as later research reveals its errors and limitations, we are still reading and learning from Homer and Virgil, Chaucer and Shakespeare, Austen and Woolf. This is not to say that all knowledge and wisdom can be found in fiction, but it is a recognition that we would be foolish to ignore the insights of the imagination and the perennial wisdom that can be revealed by focusing on individual characters in specific circumstances.

A significant amount of recent and contemporary fiction gives considerable attention to science and technology. This includes a large group of fine writers who are identified primarily as science fiction writers: H. G. Wells, Jules Verne, Arthur C. Clarke, Isaac Asimov, Ray Bradbury, Ursula K. LeGuin, Philip K. Dick, William Gibson, Neal Stephenson. Though their work is often dismissed as a homogeneous and undistinguished subgenre, these writers deal with a wide diversity of subjects ranging across the physical and social sciences and personas that stretch from technological Pollyannas to dystopian Cassandras. Another large group of celebrated authors known primarily for work not related to science, including Aldous Huxley, George Orwell, Anthony Burgess, Margaret Atwood, Thomas Pynchon, Nobel laureate Doris Lessing, and many others, have also written works that can be classified as science fiction.

All of these writers have used their fiction to explore questions that are relevant to policymakers, and all have reached broader audiences and touched them more deeply than the analytic articles we publish in Issues. As editors we asked ourselves: Why should we let other publications have all the fun? More important, why should we ignore the ways that these writers influence our collective attitudes toward science and technology?

The obvious answer is that we shouldn’t, and we won’t. This edition includes the first of what we hope will be a long lineage of stimulating science fiction stories. We are particularly pleased that our first venture into fiction was written by Gregory Benford, who is not only a respected science fiction writer but has also taught astronomy and physics at the University of California. Read it now.

Book Review


The view from nowhere

Close Up at a Distance: Mapping, Technology & Politics

by Laura Kurgan. New York: Zone Books, 2013, 232 pp.

Jason Lloyd

The global positioning system (GPS) technology incorporated into the vehicles, computers, smart phones, and other devices we use every day provides a convenience that would have been almost unimaginable two decades ago. The guesswork of map reading, the frisson of coming across something unexpected, the anxiety of being lost: for people embracing the perpetual orientation offered by GPS, these are concerns of the past. But what do you lose when placing yourself within a digital landscape, handing control and information over to the devices and organizations that govern this space? How does your relationship with the real, actual world change when it becomes a collection of indicators for orienting yourself within a virtual landscape, rather than vice versa? To what extent do these mapping technologies represent the interests and politics of their origins in warfare, surveillance, and military intelligence? What ideologies do they conceal beneath a veneer of certainty and precision?

These questions animate Laura Kurgan’s work in Close Up at a Distance. In exploring the ways these technologies alter our experience of the world and the consequences of using them—often unthinkingly—to navigate our environment, Kurgan makes use of mapping technologies in a series of nine projects. Ranging across such diverse subjects as war crimes in the Balkans and spatially concentrated zones of incarceration in Brooklyn, the projects seek to discover the politics embedded in the use of these technologies and to, according to Kurgan, “use these new technologies for good ends, rather than the militaristic ones for which they were invented.” The unexplored assumptions behind this blunt dichotomy hint at some of the weaknesses in this attractively assembled, often fascinating, ultimately frustrating collection of work.

Ways of seeing

Many of the technologies that are woven deeply into the fabric of modern society were initially developed to project and enhance military power. Global positioning and remote sensing satellites, and the software algorithms that make sense of their data, are no exception.

The Department of Defense’s GPS program, known as NAVSTAR, began in 1973 and was operational in time to direct missile strikes and troop movements during the first Gulf War in 1991. Today the system, a network of two dozen satellites and five ground stations, is operated and maintained by the U.S. government as a “free worldwide utility.” It is capable of pinpointing a device’s position to within 5 meters, and when augmented by other sources (the iPhone, for instance, makes use of nearby cell phone towers or WiFi networks), GPS can be accurate down to less than a meter. It enables a person to know precisely where a smartphone, vehicle, missile, or drone is located on the earth’s surface.

That information, however, would be difficult to understand without a map. Remote-sensing satellites, which take pictures of the earth’s surface from orbit, have greatly facilitated the ease and accuracy of modern cartography. In the 1960s, the Central Intelligence Agency’s Corona program inaugurated the use of reconnaissance satellites for intelligence purposes, and was quickly coopted for mapping missions. Like many clandestine Cold War initiatives, Corona bordered on cartoonish in its complexity: exposed film was dropped from orbit in a “bucket” with parachutes, meant to be caught in midair over the Pacific Ocean by a passing airplane. A salt plug would dissolve and sink the bucket within a couple of days, should the plane miss and Navy ships not find the film bobbing on the surface.


From this outlandish origin, remote sensing has followed a trajectory similar to that of other technologies—from a purely military and intelligence technology (spy satellites), to a resource supported by public funds (Landsat), to a commercial venture (QuickBird and GeoEye, for example). After 1992, the U.S. government allowed private companies to operate remote-sensing satellites and sell very high-resolution images. GeoEye’s second-largest customer for its images is Google (the largest is the U.S. government). This is a profound change in the creation and use of maps. Representations of territory have been a way for states, from the earliest city-states to contemporary nations, to lay claim to, or at least make legible, the space being charted. Private entities like Apple and Google now decide what, how, and to whom cartographic information is presented. While noting this, Kurgan does not explore the transformations entailed in this commercialization of remote-sensing technologies.

The latest generations of GPS and sensing satellites transmit data digitally. (No more film buckets dropped from orbit, unfortunately.) These data require interpretation, so geographic information systems (GIS) software is necessary for interpreting and displaying spatial information in a useful way. Connecting data to geographic location can be enormously useful in determining, for instance, a user’s position on a map, troop movements in a foreign country, or the buying habits of a targeted demographic. One of the first examples of utilizing spatial data was British physician John Snow’s famous 1854 map of a deadly cholera outbreak in London, which inaugurated both the science of epidemiology and the use of social and geographic data to visualize what would otherwise be impossible to perceive.

Kurgan undertakes a similar task in her “Million-Dollar Blocks” project, which layers criminal justice data onto neighborhood maps to find city blocks on which the state has spent more than a million dollars incarcerating its inhabitants. What her maps show is the state’s investment in destitute areas of the city—not for development purposes but to lock up the people who live there. This is a brilliant appropriation of the GIS systems that police departments use to track crimes (such as CompStat in New York City), systems that allocate resources according to incidents of crime while ignoring the underlying factors of concentrated poverty and isolation that contribute to these incidents.
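As a rough illustration of the kind of aggregation such a project involves, the sketch below groups per-person incarceration costs by home census block and flags blocks whose total tops $1 million. It is a minimal, hypothetical example, not Kurgan’s actual workflow; the file name and column names are assumptions.

```python
# Minimal sketch of a "million-dollar blocks"-style aggregation (hypothetical data).
# Assumed input: one row per sentenced resident, with the person's home census
# block and the estimated cost of his or her incarceration.
import pandas as pd

admissions = pd.read_csv("admissions.csv")  # assumed columns used below

# Total incarceration spending attributed to each home census block.
spending_by_block = (
    admissions.groupby("census_block")["incarceration_cost_usd"]
    .sum()
    .sort_values(ascending=False)
)

# Blocks where cumulative spending exceeds $1 million.
million_dollar_blocks = spending_by_block[spending_by_block > 1_000_000]
print(million_dollar_blocks.head())
```

Joined to block-level geometries in a GIS, a table like this is what gets rendered as the map.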

Politics of representation

As the “Million-Dollar Blocks” project illustrates, Kurgan’s intent with the investigations collected in Close Up at a Distance is to repurpose mapping technologies for ends that belie their origins in state surveillance and military applications. She claims that “the history and politics of these technologies are at once obscure and important for understanding what’s at stake in working with them,” and goes further to argue: “For every image, we should be able to inquire about its technology, its location data, its ownership, its legibility, and its source.”

This agenda seems unfocused. A satellite image’s metadata may be of esoteric interest to the specialist, but to the average person this information is analogous to knowing the model of camera used to create a magazine photo or the network architecture of an email program. It is unclear what moral or political clarity that information can offer. Furthermore, can the militaristic ideology that originated a technology inhere in the technology itself? As mapping systems become increasingly incorporated into nonmilitary public and commercial spheres with their own sets of political commitments, it appears that the use rather than the origin of these technologies sets the political stakes.

Of greater concern then, and where Kurgan’s investigations are most valuable, are the ideological ends that these technologies’ outputs are marshaled to support. The fact that the maps, images, and spatial data produced by mapping technologies are often presented as objective and irrefutable makes a critical appreciation of these technologies essential. “The facts speak for themselves,” according to Colin Powell’s infamous formulation, made while he argued the case for war using satellite images of Iraq’s purported weapons factories before the U.N. Security Council. But the “facts” of satellite images most certainly do not speak for themselves.

Images are never merely an objective, “mechanical record” of the world. Painting, for example, which until the early twentieth century was nearly always representational, is a highly subjective medium, and obviously so. The subject matter, composition, and style are chosen by the artist, and represent his or her interpretation of the world. This is equally true of photographs, although their subjectivity becomes more difficult to discern because, as Susan Sontag noted, photographs “do not seem to be statements about the world so much as pieces of it.” Untouched by human hands, the veracity of images encoded to a magnetic disk in the guts of an orbiting satellite seems absolute. But their very distance from human affairs—what makes them seem objective—means that satellite images require analysis and interpretation to make them intelligible. The view close up at a distance, carefully selected and analyzed, is a screen on which a particular version of the world can be projected. The satellite image is a representation, mediated by what we know and what we believe, and like all representations of reality, it is relentlessly political.

Kurgan fully recognizes this, and offers critical appraisals of images produced for what she calls “militaristic ends.” But her assessment lacks the same clarity when presenting her own images, the “good ends” of the dichotomous moral framework she has set up. Her unexamined assertions are most clearly on view in the “Monochrome Landscapes” project. These photographs from QuickBird and Ikonos satellites are of four landscapes characterized by the colors white, blue, green, and yellow: the Arctic National Wildlife Refuge in Alaska, the Atlantic Ocean off the west coast of Africa, old-growth rain forest in Cameroon, and the Southern Desert in Iraq.

The resolution of these satellite images is such that one can see, for example, logging roads cut into the Cameroonian forest. Kurgan notes that the forest “has a simple aesthetic—a detailed and undulating green forest, seen from above, whose beauty is interrupted by a road that looks almost natural, simply a part of the landscape. But it is new, not natural, and demands that a viewer ask questions about it.” It is clear from Kurgan’s phrasing that this image is intended to prompt an awareness of the way the human presence diminishes—has “interrupted”—the beauty of the natural world. But hers is by no means a self-evident conclusion: the fact of that road does not speak for itself, and the idea that a road is “not natural” is normative and subjective. The verdant forest canopy may indeed be beautiful from above, but the people at ground level making use of the forest’s resources via that road cannot share in Kurgan’s aesthetic perspective.

The temptation to omit competing responses to these kinds of images may be, in part, a function of the vantage that remote-sensing technologies enable. As anyone who has ever tried to guess at the purpose of structures seen from an airplane window knows, the view from above is disorienting. It requires interpretation, and when this is not acknowledged—when images are presented as objective, unpolluted by human interests because produced by a machine in orbit—the unseen interpreter has imposed his or her own view of reality on another’s. It thus offers political opportunities. The environmentalist Stewart Brand, for instance, called for an image of Earth from space to be made public in order that viewers would recognize the photograph as “visual proof of our unity and specialness, as our luminous blue ball-of-a-home contrasted dramatically with the dead black emptiness of space,” according to Brand’s colleague Robert Horvitz.

Is this utopian vision of our fragile blue planet—a view that by necessity omits the human, the political, the contested—really so distinct from the militarized perspective of a landscape as a place to be monitored and controlled? Both use technology to depopulate the visible space for other, higher purposes, and in removing people from the frame they become totalizing visions, “visual proof” of the rightness of a particular worldview. To quote Vilém Flusser, the Czech-born philosopher and writer, humans have made these images to orient themselves in the world, but because “they are no longer able to decode them, their lives become a function of their own images.” The view close up from a distance demands a more rigorous critique in order to avoid simply privileging one ideology over another. Kurgan offers only a partial, albeit often compelling, decoding of these images.


Jason Lloyd is a project coordinator at Arizona State University’s Consortium for Science, Policy, and Outcomes in Washington, DC.

Choosing a future

The Second Machine Age

by Erik Brynjolfsson and Andrew McAfee. New York, NY: W. W. Norton, 2014, 320 pp.

Robert D. Atkinson

The past several years have witnessed a lively debate about innovation between techno-pessimists and techno-optimists. The pessimists’ view—exemplified by work such as Peter Thiel’s What Happened to the Future, Robert Gordon’s The Demise of U.S. Economic Growth, and Tyler Cowen’s The Great Stagnation—is that the days of robust U.S. innovation and productivity growth are over, in part because most of the low-hanging fruit has already been picked. Gordon, for example, asserts that “there is no need to forecast any slowdown in the pace of future innovation for this gloomy forecast to come true, because that slowdown already occurred four decades ago.” For him, “medical research, small robots, 3-D printing, big data, and driverless vehicles” are marginal extensions of past technologies which will do little to drive future growth.

Confronting the innovation pessimists are the innovation optimists, exemplified by, among others, Peter Diamandis and Steven Kotler’s Abundance: The Future is Better Than You Think and Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. For Diamandis and Kotler, emerging technology is so powerful that “within a generation, we will be able to provide goods and services, once reserved for the wealthy few, to any and all who need them.” They don’t just mean abundance in the developed world, but in the entire world. There’s only one problem with their utopian claim: To reach their projected income levels the world would have to experience productivity growth of 25% a year for the next 25 years, up from the 3.5% average of the past 25 years.

Like Diamandis and Kotler, Brynjolfsson and McAfee are utopian, arguing that the “second machine age” (the first one was the Industrial Revolution in Great Britain) is “doing for mental power…what the steam engine and its descendants did for muscle power. They’re allowing us to blow past previous limitations and taking us into new territory.”

Clearly one (or both) of these camps must be wrong. We can’t be simultaneously facing stagnation and surge. What this points to is the difficulty in accurately describing the “innovation elephant.” Optimists see the parts that are accelerating and driving change (e.g., our smart phones) and extend them to the entire economy while extrapolating current trends forward. Pessimists see parts of the innovation system that are “stuck” (e.g., much of the personal and knowledge services economy) and assume that this not only describes the entire economy but will not change going forward. The reality is that neither view is right, because some parts of the innovation system are driving rapid change, whereas others are relatively stagnant.

Brynjolfsson and McAfee (B&M) in particular assume that virtually all parts of the innovation system are vibrant and accelerating. They assert that innovation will accelerate at an exponential rate because of three factors: continued exponential advances in computing power, pervasive digitization, and the combinatory nature of innovation. For them, these three factors are enabling transformative tools that will replace large amounts of work currently done by humans, including knowledge work, and this transformation will be on the scale of the first Industrial Revolution.

There are, however, major flaws in their framework. First, it is not clear that Moore’s law will continue to hold. Gordon Moore’s revolutionary prediction in 1965 that the number of transistors on a chip would double every 12 to 18 months (and with them computer processing speeds) has proven prescient. Indeed, over the past 40 years, processing speeds have increased more than a million-fold, unleashing a wave of innovation across industries. But possibly as soon as 2020, the dominant silicon-based CMOS semiconductor architecture will hit physical limits (particularly pertaining to heat dissipation) that threaten to compromise Moore’s law unless a leap can be made to radically new chip architectures. Yet B&M devote no attention to this critical issue, blithely assuming that semiconductor past is prologue.

Second, after asserting that Moore’s law will continue—not just in semiconductors, but in all areas of digital technology—they argue that we are experiencing “the digitization of just about everything.” In other words, not only are digital technologies improving exponentially, but more and more areas of the economy are becoming digital. For them this matters because “when things are digitized…they acquire some weird and wonderful properties. They’re subject to different economies, where abundance is the norm, rather than scarcity.”

Although it is true that digital technologies are reshaping traditional industries, including transportation, manufacturing, education, and health care, this does not mean that bits will replace all atoms or genes. Food won’t be digitized. Manufactured goods, although increasingly sold online and made with digitally enabled technologies, will still be made of atoms. What B&M are really referring to is digitized information, where abundance is real because digital goods are nonrivalrous, meaning I can enjoy them without that coming at your expense. This counterintuitive property, however, applies to probably less than 5% of the economy, and certainly not to activities such as making cars or waiting on tables, where scarcity and rivalry are the rule.

Third, B&M assert that innovation is speeding up because the possible combinations of innovations are increasing, as is our ability to combine the ingredients. For them, innovation is easier in the digital era because of the possibility to recombine “recipes” and test them, what they call “recombinant innovation.” They claim that the “number of potentially valuable building blocks is exploding around the world.” Growth is being held back only by our inability to process all the new ideas fast enough. In short, they argue that the “second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world.”

But this is a simplistic view of the process of innovation that likens it to random recombinations of elements, akin to having a million monkeys on typewriters, hoping that one will write a Shakespeare play. If this were true, the rate of innovation should have sped up over the past 100 years as more building blocks were created and more people were at work combining ingredients. But innovation is no faster today than it was in the late 1800s. In fact, it appears that innovation is getting much harder, because the problems to solve are so much tougher, and the only thing keeping us from suffering an innovation drought is the increased global resources going to R&D.

This leads them to perhaps their largest misreading of the future, one that is shared by many futurists speaking at corporate confabs, TED-talk pundits, and pretty much everyone who works at Silicon Valley’s Singularity University: the notion that technical progress is improving “exponentially.” If innovation really were improving exponentially, then a decade from now the U.S. Patent Office should be issuing roughly 4.4 million patents a year, the result of extrapolating that exponential rate forward from the 542,000 issued in 2013. I can’t wait.
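The arithmetic behind that jab is easy to reproduce. The sketch below, my own illustration rather than the author’s calculation, works backward from the two patent figures in the text to the growth rate they imply.

```python
# Work out the growth rate implied by going from 542,000 patents in 2013
# to a projected 4.4 million a decade later (an illustrative check only).
import math

patents_2013 = 542_000
patents_projected_decade_later = 4_400_000
years = 10

annual_rate = (patents_projected_decade_later / patents_2013) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + annual_rate)

print(f"implied annual growth: {annual_rate:.1%}")         # about 23% a year
print(f"implied doubling time: {doubling_time:.1f} years")  # about every 3.3 years
```

A sustained increase of roughly 23% a year, doubling about every three years, is what that exponential projection implies.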

Finally, they overstate the extent to which digital innovation is transforming occupations. For them, virtually all jobs will be disrupted by smart machines. A closer look suggests otherwise. In a back-of-the-envelope analysis of U.S. occupations, the Information Technology and Innovation Foundation came up with a roughly 20-50-30 split among jobs that are moderately difficult to automate, difficult to automate, and very difficult to automate. In other words, only about 20% of total U.S. jobs are even relatively easy to automate over the next decade or two.

Despite the utopian future that B&M suggest is waiting for us, at least in terms of innovation, they are surprisingly pessimistic about its effects, warning of all sorts of dystopian results, the principal ones being massive unemployment and income inequality. They have backed off somewhat from the more extreme claims made in their ebook Race Against the Machine, in which they argued that the second machine age would cause massive unemployment. But they still raise the fear flag, arguing that “as computers get more powerful, companies have less need for some kinds of workers.”

But their logic is fundamentally flawed. For example, after pointing out that productivity and employment have no longer grown in tandem since 2000, they assert that this decoupling is evidence that productivity kills jobs. Yet not only has productivity not accelerated since 2000, but there is simply no logical reason why growth in employment and growth in productivity should move together.

And although they acknowledge that technologically driven productivity would reduce prices, which should enable consumers to purchase more of other goods and services, thereby employing more workers, they dismiss this possibility. Without any evidence or logic, they claim that consumers will be satiated and will not want to consume more even if their disposable incomes go up. I don’t know about MIT professors, but the average U.S. family with a household income of around $50,000 would be ecstatic if higher productivity doubled or even tripled its real income, and would easily find things to spend it on.

The most disturbing aspect of B&M’s argument is that it might lead policymakers to conclude that their job should be to slow down innovation-driven productivity growth. B&M argue that “we can do more to invent technologies and business models that augment and amplify the unique capabilities of humans to create new sources of value, instead of automating the ones that already exist.” They advocate that government award prizes for technologies that don’t replace labor. They want to start a “made by humans” labeling movement. And since technology will destroy jobs and create a massive new lumpenproletariat sitting at home with nothing to do, they advocate for a slew of redistributionist, rather than growth, solutions, including a negative income tax, an expanded Earned Income Tax Credit, and a national mutual fund that provides dividends for everyone.

The excesses of the techno-optimists do not mean, however, that the techno-pessimists are correct in asserting that we can no longer expect much benefit from innovation. Given the slow growth in U.S. productivity over the past decade and the large expected rise in retirees over the next quarter century, the most important thing policymakers can do is support innovations that “automate the jobs that already exist.” Stoking neo-Luddite fears of technology-induced joblessness is a step in the wrong direction.


Robert D. Atkinson is president of the Information Technology and Innovation Foundation in Washington, DC.

Steal this book

Open Access

by Peter Suber. Cambridge, MA: MIT Press, 230 pp.

Paul F. Uhlir

In 1971, Abbie Hoffman mischievously named his first book-length screed Steal This Book, and founded a publishing company, Pirate Press, because no existing publisher would touch it. It was a countercultural manifesto against the “pig” establishment. The television networks—CBS, NBC, and ABC—were “evil corporate conglomerates” spewing capitalist lies.

Flash forward four decades, and the inmates are running the asylums. Establishment journals, including Issues in Science and Technology, publish their content freely and openly online, inviting readers to “steal” their articles. The old TV networks still exist, but the one that really matters is the global Internet, where free information rules. The born-digital generation regularly pirates copyrighted content or at least expects information to be free and instantaneous. And there are even political “Pirate” parties formally established in several countries in the European Union.

Outraged? Outsourced? Or just curious? Peter Suber’s book Open Access provides an easy-to-read compendium of answers to many questions and blows up some of the canards that have been flying around the ether. Suber is one of the gurus of the open access (OA) movement. His Open Access News blog was for about eight years the place to go each month to find out what was happening in OA publishing, worldwide. Unfortunately, Suber discontinued his valuable service in 2010, but this book summarizes what he learned during that time.

There are basically two approaches to OA. One is the “green” road, which depends on the author of the manuscript to deposit it in an institutional or discipline repository, or less preferably on the author’s Web site, either immediately upon publication or within some prescribed embargo period of perhaps a year. The other is the “gold” road, which is the OA publishing model itself. There are now over 9,000 such journals registered in the Lund University Directory of Open Access Journals (www.doaj.org) in Sweden, and the number is growing as even many of the subscription legacy publishers are launching fully OA or hybrid journals.

The green approach is relatively easy to implement but suffers from author inertia and a lack of deposit mandates, which themselves remain a matter of great controversy. According to the Registry of Open Access Repositories at Southampton University (roar.eprints.org) in the United Kingdom, there are now at least 2,800 formal repositories throughout the world in which authors can deposit their manuscripts, although many authors post or share their works informally online anyway.

The gold approach requires significant effort on the part of a publisher to establish. It also may cost the author a substantial sum to publish an article, despite there being many other creative ways to finance such journals, including consortia, institutional or government subsidies, volunteerism, advertising, or some combination of them. The gold publication model has been slowly but steadily gaining in market share, with perhaps one-quarter of all scholarly journals now published in some OA form.

Suber covers both of these main models in Open Access and has written the book for both the uninitiated and the unconverted. For those who are not well versed in what OA is all about, he describes the elements of the models and the varieties within them in workmanlike fashion. He addresses the scope and the policies, the economic and copyright aspects, and who are the beneficiaries and the casualties. And he concludes the book by looking briefly into the future and providing some self-help references for those willing to take the next step.

However, it is also a book for the undecided. Throughout the book, Suber makes a strong argument for open access to publicly funded research writings, and in particular he debunks some of the myths about OA.

So, for instance, the real purpose of OA is to provide the broadest possible access and use, not to lower quality, as the antagonists assert. Nor do OA journals give less attention to peer review than the more established subscription journals. The fact that many subscription journals have a higher citation index is frequently due to their having been in business for a much longer time.

Because many bona fide gold-road journals are funded by author payments, they have been accused of preying on poor researchers or of taking any money they can get without an adequate filter. But most subscription journals also impose additional charges, and there is no monopoly on greed. Many gold-road journals reduce or waive charges for authors who successfully plead poverty. Furthermore, an author has many different publishing outlets and business models to choose from, so OA does nothing to curtail academic freedom.

OA does not lead to more plagiarism, nor is it a vehicle to relax the rules against it. In fact, the more eyes that can see the work, the less likely it is to be plagiarized successfully. Nor is it a war on copyright or a way to subvert it; it is actually the subscription journal publishers who insist on transferring copyright to them without any payment to the author, thereby capturing the product as well as providing the service.

There is no organized attempt to punish or undermine conventional publishers, to deprive authors of royalty earnings (in the case of books), to boycott certain publishers, or to destroy the whole public research system. What OA does do is shift the cost burdens of scholarly publishing, lower those costs, and take advantage of the attributes of the Internet to make publicly funded research broadly accessible and usable. In summary, Suber dispels the arguments against open publishing of publicly funded research results and makes a cogent case for the new models.

There are a few things that this book does not try to do. It is not a scholarly treatise that looks in depth at the various intricacies of OA activities. There are plenty of those books and articles already. For those interested in starting up an OA journal, it is not a tutorial about how to do that, although it provides a very good background rationale for doing so. Nor is it likely to change the minds of those firmly against OA publishing.

This last point is worth some further observations. On one side are the stakeholders of the ancien régime: the legacy publishers that reap large profits from the public purse, and the professional societies that subsidize their various other member programs with generally more modest income streams from their journals. These stakeholders, especially the commercial publishers that largely cornered the artificial academic publishing market over the past few decades, have a lot to lose in giving up this cash cow.

On the other side are mostly small, upstart publishers; groups of researchers in different disciplines, many in the besieged library community; and individual provocateurs. These groups and individuals are either leading by doing or are tirelessly analyzing and writing about why OA is the better option. Most of them are in the game for the principle, not the money. They are gradually winning the argument with the powers that be—the national legislatures, the research funders, the university administrators, and the public at large—but it is a slow process. Nevertheless, the OA advocates also have their differences and factions favoring green, gold, or some other flavor of OA. The internecine warfare can be as intense as with the legacy publishers.

In short, this can be characterized as mostly a generational and “religious” conflict with vested interests, more akin to a 30-year war than a rational discourse about publishing business models. A single book, such as Suber’s Open Access, will not change such hearts easily, even though it marshals enough arguments to change some minds.

So, if you are one of those who is uninformed about the OA movement, or just sitting on the fence, you should “steal” this book. It is available freely and openly at bit.ly/oa-book. Or you may choose to subsidize the work and purchase a paperback copy from MIT Press for about $14.


Paul F. Uhlir is director of the Board on Research Data and Information at the National Research Council.

Archives

University of Texas at Dallas professor John Pomara’s work reflects an interest in the role that human error plays in technology, focusing primarily on the current state of painting and picture making with the rise of new media and digital technology. Pomara explores and formats computer stenciling of magnified digital images. These pictorial distortions are then painted in an analog fashion, pulling industrial enamel paints across aluminum surfaces. By creating these abstract paintings of blurs, glitches, and printing imperfections, Pomara challenges the commonly held belief that modern technology is cold, rational, and without error.

Image courtesy of the artist and Barry Whistler Gallery, Dallas, Texas.

Anticipating a Luddite Revival

STUART W. ELLIOTT

Advances in information technology and robotics are already transforming the workplace, and even greater changes lie ahead. Here’s a look at what the next two decades might bring.

Even as computer-based consumer products have transformed our leisure and social lives over the past decade, advances in information technology (IT) and robotics portend a transformation of work that might be even more far-reaching. Some observers, including many workers, see this vision as inherently threatening.

Economists, however, have repeatedly argued that technological advance is central to economic growth and that workers displaced by technology in one sector will be absorbed in another. Of course, this process of adjustment takes time, and the economic arguments about long-term adjustment can seem particularly hollow during prolonged recessions. During these periods of lower economic activity, such as the slowdown the United States has recently been experiencing, displaced workers might find it difficult to move into new positions. Still, economics has provided a compelling model of the adjustments of the labor market to technological change, and the historical record has repeatedly demonstrated that the fears about substantial portions of the workforce being permanently displaced from work are unjustified.

But are economists and history right today? The nature of recent technological change suggests that the adjustments that were possible in the past might not continue to take place. Over the past few years, a new appreciation has emerged of the wide range of computer capabilities that are becoming available. In turn, these new capabilities suggest a broad range of occupations that could begin to see workforce displacement resulting from the applications of IT and robotics, including occupations in fields involving high levels of pay and expertise, such as medicine and law.

To help gain a better understanding of how such displacement might play out, I recently investigated the range of IT and robotics capabilities that could conceivably affect the workforce over the next few decades. The results to date are only suggestive, but they point the way toward more serious work that needs to be done in the coming years to understand the growing implications of IT and robotics for the workforce.

The exploratory study was motivated by two simple arguments about the possibility of understanding the implications of future technological change for the workplace.

The first argument is that the match between the new computer capabilities that are ready to be applied in the workplace and the capabilities currently being used by workers in different occupations is likely to be a useful guide to the occupations that will be most affected by new technology. So, for example, if computers have capabilities in speech recognition and simple reasoning, it is reasonable to expect that those capabilities can be combined to carry out some of the tasks of telephone operators and receptionists, as has been the case over the past few decades.

Of course, the technique of looking at the match between computer capabilities and occupational skill requirements is hardly foolproof. For one thing, we may overlook important skill requirements for some occupations, such as the substantial range of common-sense knowledge that enables a receptionist to reply sensibly when a customer makes an entirely unexpected request. For another, we may overlook the opportunities for reengineering a task to mechanize it in a way that uses different capabilities than those used by people, such as when the cotton gin replaced the detailed finger movements used by people to remove seeds from cotton fibers. Despite these challenges, however, the match between computer capabilities and occupational skill requirements provides a reasonable starting point for considering what jobs might be affected.

The second argument is that we can see new IT and robotics capabilities demonstrated in the research literature long before they are broadly applied in the workplace. Research has shown that such diffusion lags can often be several decades. Even in the fast-paced area of IT, where technology is being rapidly developed and applied, a straightforward application such as electronic invoicing can require decades to be fully adopted. The reason, of course, is that the adoption of new techniques usually requires substantial investment, as well as learning and adjustment by many people who are accustomed to using an existing system. In addition, many times the research techniques need to be refined before they can be applied commercially, or they might need to become cheaper or faster before their application is practical. Thus, although it is possible to use the research literature on computer capabilities to look several decades into the future of IT and robotics applications, it is not easy to predict when a new capability will be widely used.

Even with these caveats, it is worth considering how these emerging capabilities compare with work skills in different occupations and how they might affect work.

Assessing current skill sets

To gauge the current capabilities of research systems in IT and robotics, my investigatory study sampled articles from two journals, AI Magazine and IEEE Robotics & Automation Magazine, over a period of 10 years, from 2003 to 2012. Both journals publish articles that reflect a wide range of specialized research related to the capabilities of IT and robotics, and the articles are sufficiently technical to describe capabilities in some detail without being so technical as to be difficult for a nonspecialist to understand. Collectively, the articles presumably describe the IT and robotics capabilities that are currently seen as noteworthy; that is, they describe capabilities that are just now becoming feasible for IT and robotics systems to carry out with enough success that there are promising results to report, but sufficiently novel to be interesting to report. To guard against excessive techno-optimism, the study specifically looked for limits on the capabilities of the systems. Often these limits are described in terms of constraints on the range of topics covered or the complexity of the context in which the tasks are carried out.

In sum, the set of capabilities observed can be considered to define the rough limits of what has been demonstrated in the research literature, and that can form the basis for practical applications over the next few decades.

To bring some order to this mass of information, the study separated the capabilities into four general areas: language, reasoning, vision, and movement. Each of these areas can be compared to human capabilities, and each includes a collection of different but related capabilities that together provide the full range of competences that people typically have. Although the review focused separately on the four different areas, there was substantial overlap in the systems identified, because many of the systems integrate capabilities from several of the four general areas of capability.

Language capabilities. Fifteen articles described systems demonstrating language capabilities, which ranged across four specific aspects of language: understanding speech, speaking, reading, and writing. The systems involved a diverse range of tasks.

In the articles from the first five years (2003 to 2007), the tasks included detecting problematic text in an insurance application, providing customer service for sales and repairs, explaining the answers to chemistry questions in an advanced high-school test, describing the movement of cars in a video of a traffic intersection, translating car assembly instructions, asking for help in finding the registration booth at a conference, giving a conference talk that included questions from the audience, and role-playing with students in a training simulation about how a military officer should handle a car accident with a civilian.

In the articles from the later five years (2008 to 2012), the tasks included screening medical articles for inclusion in a systematic research review, solving crossword puzzles with Web searches, answering Jeopardy questions with trick language cues across a large range of topics, answering questions from museum visitors, talking with people about directions and the weather, answering written questions with Web searches, following speech commands to locate and retrieve drinks and laundry in a room, and using Web site searches to find information to carry out a novel task.

The length and complexity of the language that these systems could handle often corresponded roughly to a few pages of written material. Of course, text length is only a crude way of gauging language complexity—a short poem or technical argument can be quite difficult to understand—but it does provide some sense of the language capabilities of the systems. They have advanced beyond the challenges of typical language use at the word or sentence level, but they fall far short of typical extended language use at the lecture or book level.

One important aspect of language involves adjusting to the needs of the person who is being communicated with and the requirements of the situation. Several of the example systems exhibited this kind of sensitivity, including the ability of the conference-talk system to monitor its points and not repeat them, and the ability of the training-simulation system to reason about emotion in order to choose appropriate language and understand imprecise language.

Considered over time, the articles showed some progression in the range of topics addressed by the different systems. In the first half of the period, all of the systems focused on language use within a single topic area. In contrast, a number of systems described in the later articles attempted to deal with an unlimited range of topics by tapping into a range of source material available on the Web.

Reasoning capabilities. Twenty-one articles described systems demonstrating reasoning capabilities. The systems showed a number of different aspects of reasoning, including recognizing that a problem exists, applying general rules to solve a problem, and developing new rules or conclusions.

In the articles from the first five years, the tasks addressed included making underwriting decisions about long-term care insurance, providing customer service for sales and repairs, developing new hypotheses about good conditions for growing crystals and for recovering from medical disability, helping diagnose appliance problems, providing useful analogies for solving problems in physics and in military tactical games, providing answers and explanations to chemistry questions in an advanced high-school test, developing novel atomic models for electron-density maps of proteins, identifying patterns of potentially suspicious facts that could indicate a terrorist plan, resolving problems related to scheduling and project coordination, role-playing with students in a training simulation about how a military officer should handle a car accident with a civilian, and driving a vehicle on different types of road.

In the later articles, the tasks included screening medical articles for inclusion in a systematic research review, processing government forms related to immigration and marriage, solving crossword puzzles, playing Jeopardy, answering questions from museum visitors, analyzing geological landform data to determine age, talking with people about directions and the weather, answering questions with Web searches, driving a vehicle in traffic and on roads with unexpected obstacles, solving problems with directions that contain missing or erroneous information, and using Web sites to find information for carrying out novel tasks.

One of the striking aspects of the reasoning systems was the high level of performance they achieved. For example, the systems were able to make insurance underwriting decisions about easy cases and provide guidance to underwriters about more difficult ones, produce novel hypotheses about growing crystals that were sufficiently promising to merit further investigation, substantially improve the ability of call center representatives to diagnose appliance problems, achieve scores on a chemistry exam comparable to the mean score of advanced high-school students, produce initial atomic models for proteins that substantially reduced the time needed for experts to develop refined models, substitute for medical researchers in screening articles for inclusion in a systematic research review, solve crossword puzzles at an expert level, play Jeopardy at an expert level, and analyze geological landform data at an expert level.

However, common-sense reasoning has historically been more difficult for IT systems to demonstrate. The articles from the first five years were consistent with the historical contrast, showing high levels of reasoning within narrow areas of specialized expertise but no evidence of the broad and more flexible reasoning that is typical of human common sense. But during the later period, there were examples of systems that used information from the Web to reason across a broad range of areas.

TABLE 1

Vision capabilities. Twenty-two articles described systems demonstrating vision capabilities. These included systems that recognized objects and different features of those objects, including their position in space.

In the articles from the early years, the tasks of the systems included locating a soccer ball and other soccer players, identifying cars and their movements in a video of a traffic intersection, finding the registration booth and several rooms at a conference, identifying drivable surfaces and obstacles for an autonomous car, determining the location of a ping-pong ball, guiding autonomous vehicles to move shipping containers, identifying people and obstacles in a crowded museum, locating pallets in a factory, recognizing objects in cluttered environments, guiding a robot to grasp irregularly shaped objects such as lettuce, and identifying vehicles on a road to provide driver assistance.

In the later articles, the tasks included recognizing chess pieces by location, rapidly identifying types of fish, recognizing the presence of nearby people, identifying the movements of other vehicles for an autonomous car, locating and grasping objects in a cluttered environment, moving around a cluttered environment without collisions, learning to play ball-and-cup, playing a game that involved building towers of blocks, navigating public streets and avoiding obstacles to collect trash, identifying people and locating drinks and laundry in an apartment, and using Web sites to find visual information for carrying out novel tasks such as making pancakes from a package mix.

All of the systems involved identifying various—and diverse—objects, and they all also involved recognizing features of the identified objects, particularly their location and movement.

Movement capabilities. Seventeen articles described systems demonstrating movement capabilities. These included systems that involved spatial orientation, coordination, movement control, and body equilibrium. Many of the systems integrated movement capabilities with capabilities in one or more of the other three general areas of capability.

In the early articles, the tasks of the systems included walking, kicking a ball, passing a ball between two robots, moving down a hallway, following a map to locate a meeting room in a hotel, using an elevator, driving a car in the desert, playing ping-pong, autonomously moving shipping containers, navigating around people and pursuing objects in a crowded museum, moving pallets autonomously in a factory, and grasping irregularly shaped objects such as lettuce.

In the later articles, the tasks included moving chess pieces, driving a car in traffic, grasping objects in a cluttered environment, moving around a cluttered environment without collisions, learning to play ball-and-cup, playing a game that involved building towers of blocks, navigating public streets and avoiding obstacles to collect trash, retrieving and delivering drinks and laundry in an apartment, and using the Web to figure out how to make pancakes from a package mix.

Comparing capabilities and work skills

With these examples of IT and robotics capabilities, we can then look at the skills required in different occupations to see how they compare. To make this comparison, the study used the U.S. Department of Labor’s O*NET system, which provides ratings for hundreds of occupations on many different features. The feature set includes ratings for a number of ability scales that are related to the four general areas of capability discussed above.

O*NET uses a seven-point scale to rate the level of requirement for each ability for each occupation, with anchoring tasks for the ratings provided for levels 2, 4, and 6. The study used these anchoring tasks to provide concrete descriptions of the different levels of capability required for different jobs throughout the economy.

To focus on the big picture, the study grouped together all of the different abilities into two cluster ratings: one focused on language and reasoning, and the other focused on vision and movement. For each occupation, the highest rating across the different abilities was used as the rating for each of the two clusters.
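A minimal sketch may help make that aggregation step concrete. The code below is a hypothetical illustration, not the study’s actual code or data: the occupations, ability ratings, and employment shares are invented placeholders, but the logic of taking the highest ability rating within each cluster, and then tallying employment for occupations whose clusters both sit at level 4 or below, mirrors the approach described above and in the discussion of Table 1 that follows.

```python
# Hypothetical illustration of the two-cluster rating step. The occupations,
# O*NET-style ability levels (1-7), and employment shares below are invented
# placeholders, not actual O*NET data.

occupations = {
    # name: (ability ratings grouped by cluster, share of total employment)
    "retail sales": ({"language_reasoning": [4, 3], "vision_movement": [2, 3]}, 0.11),
    "food service": ({"language_reasoning": [3, 2], "vision_movement": [4, 4]}, 0.08),
    "physician":    ({"language_reasoning": [6, 6], "vision_movement": [4, 3]}, 0.01),
}

def cluster_ratings(ability_ratings):
    """For each cluster, use the highest rating across its component abilities."""
    return {cluster: max(levels) for cluster, levels in ability_ratings.items()}

# Tally employment in occupations whose clusters are both at level 4 or below,
# i.e., jobs roughly within reach of the capabilities surveyed in the study.
share_within_reach = sum(
    share
    for ratings, share in occupations.values()
    if all(level <= 4 for level in cluster_ratings(ratings).values())
)
print(f"employment share at O*NET level 4 or below: {share_within_reach:.0%}")
```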

TABLE 2

TABLE 3

Table 1 shows the distribution of employment in the economy for the different capability combinations, using the O*NET rating scales. (The table omits level 7 on the rating scale because there are so few jobs that require that level of skill.) The table makes clear that the vast majority of current jobs—roughly 81%—can be carried out with a combination of abilities at the O*NET level of 4.

So a crucial question for assessing the likely impact of IT and robotics capabilities on work over the next few decades is how those capabilities compare with this middle level of ability on the rating scale for workers. Table 2 contrasts some sample IT and robotics tasks drawn from the study with some anchoring tasks from the O*NET rating scales across the four general areas of ability.

Comparing the sample tasks for the IT and robotics research systems with the anchoring tasks for the different O*NET levels shows that the IT and robotics systems are solidly in the middle range of ability levels across all four general areas of ability. In each case, there are clear ways that the IT and robotics capabilities fall short of the higher levels of human performance, but the capabilities that are typical of level 4 on the O*NET scales appear to be roughly comparable to the types of tasks now being described in the research literature.

To help think about the relation between the various research tasks and the actual workplace, Table 3 shows the average O*NET levels for broad occupational groups, along with their portion of total employment. At the top of Table 3, sales occupations represent 11% of employment and involve a medium level of language and reasoning skills, but generally low levels of vision and movement. The nation has already seen some replacement of sales jobs with technology in the extensive use of the Web for retail, along with the use of self-checkout in stores.

Currently, the level of interaction provided by such sales-related technology is low, but the research systems show capabilities that would allow more helpful interactions. Some of the research systems specifically provided interactions related to customer service, as well as related tasks such as answering questions from museum visitors or giving people directions. Some of the reasoning systems provided underlying analytic capabilities that could extend the kinds of transactions that can be carried out without a person, including processing government forms, using Web sites to find information, making insurance underwriting decisions, or diagnosing appliance problems.

It is possible to imagine how the ease and range of interaction and the depth of analysis of sales-related computer systems can be steadily extended over time to add many functions of current sales occupations. Future systems will be able to use regular language with customers to understand what they are looking for and to suggest possible solutions.

The middle section of Table 3 includes the large number of occupational groups involving both a medium level of language and reasoning skills and a medium level of vision and movement. The language and reasoning skills for many of these jobs are similar to those for the sales occupations just discussed. It is easy to imagine, for example, that the interaction and analysis that will make it possible to extend the capabilities of sales-related computer systems will also be applicable to the capabilities of administrative systems, where there is often an interaction with an internal customer.

As a contrast, it is useful to consider the occupational groups in the middle section that involve an extensive role for vision and movement. Physical movement is important for jobs in construction, maintenance, and production, as well as for jobs in food and personal service. These two large sets of occupations represent 30% of current employment.

The use of automatic machines for performing physical movements has been key to the substantial improvements in manufacturing productivity over the past century. The research systems in vision and movement suggest how the high levels of performance that have been demonstrated in factories will begin to be extended into the more complicated settings where construction, maintenance, and food and personal service tasks are carried out. Some of the example systems directly involve maintenance or food service tasks, such as the system that moves around public streets to collect trash, the system that delivers drinks to people in an apartment, the system that can grasp irregular objects such as lettuce, and the system that can identify different types of fish.

One can imagine how the automatic capabilities that have been applied in factories can be extended into less controlled work settings over time. The Roomba (a robotic vacuum cleaner) could evolve into more extensive cleaning capabilities, and robots could be deployed for food preparation outside of factories. This will be a continuation of the automation that has taken place in factories, but it will be taking place in increasingly less controlled workplaces as robots become more flexible.

At the bottom of Table 3, there are three occupational groups involving a high level of language and reasoning skills and a medium level of vision and movement. The jobs in these groups often require levels of language and reasoning skill that are likely to be beyond the capabilities of the research systems.

Implications for work and the economy

This exploratory study suggests that there is the technological potential for a massive transformation in the labor market over the next few decades. It will clearly take time for the capabilities that have been roughly demonstrated in the research literature to be refined and broadly applied to the many different types of work. However, even a diffusion period of several decades is relatively short for adapting to a change of such magnitude.

In principle, there is no problem with imagining a transformation in the labor market that substitutes technology for workers for 80% of current jobs and then expands employment in the remaining 20% to absorb the entire labor force. Considering the contrasts across the occupational groups in Table 3, such a change might involve a drastic reduction in sales, management, administration, construction, maintenance, and food service work accompanied by a massive expansion in health care, education, science, engineering, and law.

The United States experienced a labor market transformation of this scale between the early 19th and the late 20th century, when the portion of the workforce employed in agriculture shifted from roughly 80% to just a few percent. However, in the shift out of agriculture, the transformation took place over a century and a half, not several decades.

In addition to the speed of the change, there are two other challenges.

The first challenge relates to the feasibility of preparing the entire labor force to move into jobs that require capabilities at the higher O*NET levels. The level 6 anchoring tasks in Table 2 are not only difficult for IT and robotics systems to carry out, but they are also difficult for many people to carry out. We do not know how successful the nation can be in trying to prepare everyone in the labor force for jobs that require these higher skill levels. It is hard to imagine, for example, that most of the labor force will move into jobs in health care, education, science, engineering, and law.

The second challenge relates to the further improvement of the capabilities of IT and robotics. Even during the limited period covered by the exploratory study, there was some indication of the advancement of capabilities. And over an additional period of several decades of R&D, the capabilities required for the level 6 anchoring tasks might well be within reach.

These challenges—the speed of the transformation, the difficulty of level 6 tasks for many people, and the continued development of IT and robotics capabilities—suggest that the economic adjustment to the application of IT and robotics capabilities over the next several decades is likely to be quite difficult. Although economists are right in principle that displaced workers should be able to move into new positions—as long as there is substantial labor demand for tasks that only people can perform—it seems unlikely that the structure of the labor market can change as quickly as the technology is advancing.

Even if alternative jobs are available, how will the displaced workers acquire the necessary skills for the new tasks? At some point it will be too difficult for large numbers of displaced workers to move into jobs requiring capabilities that are difficult for most of them to carry out even if they have the time and resources for retraining. When that time comes, the nation will be forced to reconsider the role that paid employment plays in distributing economic goods and services and in providing a meaningful focus for many people’s daily lives.

The preliminary review presented here suggests that society must begin to be much more serious about understanding the potential for IT and robotics to cause disruptive changes to the labor market over the next few decades. The scale and speed of this potential change are too great to be able to depend on ordinary economic adjustment to smooth out disruptions in the labor market.

Over the next decade or two, it is essential for researchers and national policymakers to understand the growing capabilities of IT and robotics and their implications for the workforce and the economy. The anecdotal articles that regularly appear about new technologies are not sufficient to provide the basis for understanding the full range of capabilities being developed and how they will affect employment.

To advance this understanding, the basic approach discussed here—comparing the full range of IT and robotics capabilities with the full range of capabilities used in different occupations—should be carried out more systematically and in more detail. Such systematic reviews need to be carried out once or twice each decade to make it possible to track the development of the capabilities and anticipate the full range of their consequences.

Society has the tools to think systematically about the capabilities that are now being demonstrated by IT and robotics systems and how those compare to the capabilities that are used in the workforce. The nation needs to begin to carry out the analyses that these tools allow to better understand the potential for IT and robotics to transform jobs in the years ahead.


Stuart W. Elliott is visiting the Organisation for Economic Co-operation and Development as an analyst, while on leave from the Board on Testing and Assessment of the National Research Council. A detailed version of the survey of emerging capabilities will appear in the forthcoming Oxford Handbook of Skills and Training.

Forum

Wet drones

Bruce Berkowitz’s “Seapower in the Robotic Age” (Issues, Winter 2014) is a timely piece. He makes the astute observation that the revolution in unmanned aerial systems is but the first in a wave of robots that will likely appear next in the maritime domain. He provides an historical perspective of past innovations at sea, a surprising number of which involved semi-automatic systems, beginning with the century-old torpedo. He identifies possible applications of robots at sea and discusses a range of pitfalls and problems.

But Berkowitz misses or underemphasizes two key issues that may complicate the deployment of robots at sea: growing cyber insecurity and the legal uncertainty of robot self-defense.

Unmanned aerial systems, whose recent widespread use began to unfold in the early 2000s, have been deployed against fairly unsophisticated enemies. Nevertheless, at least a handful of expensive drones have been lost to “hacking,” compelled to defect, so to speak, to foreign air space. How much more will maritime systems be subject to cyber attack? I suspect significantly more.

Maritime systems move more slowly and often “loiter.” They can be intercepted by both manned and unmanned sea or air systems. Additionally, unlike air systems, which are difficult to capture because hacking the system typically risks losing the air frame and cargo to the powers of gravity and crash landings, a floating, unmanned maritime system is relatively easy to capture. It could be argued that an expensive, state-of-the-art unmanned maritime system would pose an especially appealing target to rival powers. Additionally, it might prove unnecessary to hack the system. Rather, any means that disables the propulsion system could result in capture by other drones, high-speed manned vessels, or by low-tech means such as nets and a couple of strong fishermen.

What would follow in the wake of human capture of our unmanned systems? I submit it will get complicated by a host of issues that Berkowitz, to be fair, did not have the time or space to address. For example, would unmanned maritime machines have the right to fight in self-defense to avoid capture by other machines or by human captors? If not, would it be necessary for human combatants to remain nearby in order to defend otherwise vulnerable maritime drones? To those who believe such a conundrum unlikely, remember that in the fall of 2012 the Iranians made several attempts to intercept, and by some reports fired at, unmanned U.S. drones operating in the Persian Gulf. The attacks were met not with armed drones but with manned aircraft sent to accompany the unmanned systems. But to this day, the rules governing drone self-defense are unclear.

Berkowitz is certainly correct in his broad predictions: A revolution at sea is coming, and it will involve unmanned systems. But with this wave of machines will come confusion, ambiguity, legal wrangling—a proverbial storm. No doubt a fog of uncertainty will accompany the issue of robot self-defense, providing proof that at least some of Clausewitz’s 19th century observations are indeed timeless, even in the robotic age.

MARK HAGEROTT

Distinguished Professor of Cyber Security

U.S. Naval Academy

Annapolis, MD


Bruce Berkowitz offers a reasonably comprehensive and balanced assessment of the current state of play regarding unmanned maritime systems (UMS). However, one of his statements—that the Navy cannot, as a matter of DoD policy, deploy UMS that automatically identify and destroy vessels that meet their criteria for being hostile—needs some additional explanation. Modern mines are a form of lethal UMS. They await a predetermined signature along with other criteria, which, if met, cause them to explode. Some mines, such as the old encapsulated torpedo (CAPTOR), would release a MK-46 homing torpedo if a contact set them off. Other mines, such as the U.S. Navy’s submarine-launched mobile mine (SLMM), can be launched at a distance and navigate to where they will lie in wait. In each of these cases there is movement involved, in the first instance to kill and in the second to arrive at the ambush position. The only movement not involved is movement to search. The SLMM and stationary mines are available for Navy use, so the restrictions in the DoD policy document are rather narrow and technical. Even these might not be viable much longer as warheads become ever more discriminating. Unmanned systems have so much promise for maintaining U.S. dominance in the undersea environment that progress will occur. Berkowitz is right: Unmanned systems will reduce risk to sailors and will allow the Navy to maintain certain types of presence with a smaller fleet.

One class of system that Berkowitz does not mention is the amphibian, a vehicle that is let loose in the water but crawls up on land. There exists any number of potential uses for this type of system in a complex littoral, especially one featuring offshore islands. Berkowitz also gives short shrift to unmanned aircraft launched from underwater. He shouldn’t, because this concept has considerable potential, especially for small flyers. One can imagine encapsulated UAVs lying on the sea floor waiting to receive a signal to cut loose of their ballast and float to the surface, releasing a UAV that performs a search pattern and broadcasts its findings, or perhaps radiates a deceptive signal to confuse an enemy. It might also serve as a communications relay, retransmitting low-power transmissions between a submarine and forces over the horizon. Again, the number of potential uses for this kind of undersea/airborne robotic vehicle marriage is almost unlimited, especially if we think in terms of air vehicles that can be folded into a torpedo-sized canister.

Reduced fleet size and evolving threats virtually guarantee that the Navy will invest in all manner of robotic systems. Just as mechanization and automation allow a single Midwest farmer to tend over a thousand acres with perhaps one part-time assistant, the advent of robotics will permit fewer sailors on fewer ships to conduct missions that formerly required many more.

ROBERT C. (BARNEY) RUBEL

Dean, Center for Naval Warfare Studies

Naval War College

Newport, RI


Useful models

Andrea Saltelli and Silvio Funtowicz (“When All Models are Wrong,” Issues, Winter 2014) provide a checklist to aid in responsible development and use of models. I agree with most, but not all, of their comments and suggestions. Their discussion deals with models in all fields, many of which are empirical, and many deal with the capricious nature of human actions. However, some models have a strong foundation anchored in the physical laws of nature. The best example, perhaps, is the models used for numerical weather prediction (NWP), which do not follow some of the rules proposed on the checklist. Predicting weather entails dealing with odds, owing to the innate lack of predictability associated with what mathematicians now call “chaos,” the tremendous sensitivity of results to small perturbations whose importance grows over time, eventually rendering a weather forecast useless after 10 to 14 days. But weather prediction has advanced enormously by using complex numerical models, built around the physical laws expressed as equations (Newton’s laws of motion, conservation of mass, conservation of energy and the thermodynamic equation, equation of state). The reliability associated with 3-day forecasts in the 1970s is now possible for 6-day forecasts. The model complexity continues to grow, and the world’s largest supercomputers are used to carry out the computations. Saltelli and Funtowicz’s recommendation that stakeholders be able to replicate the results is absurd in this case. The validity of the models is constantly tested as the weather develops, and the feedback is used to refine the models.
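Trenberth’s point about sensitivity to small perturbations can be illustrated with a toy example. The sketch below uses the classic Lorenz-63 system as a deliberately simplified stand-in for a real weather model; it is my own illustration, not anything drawn from the letter. Two trajectories that differ by one part in a million at the start end up bearing no resemblance to each other within a few tens of model time units.

```python
# Toy illustration of sensitivity to initial conditions using the Lorenz-63
# system (a simplified stand-in for the chaos described above, not an actual
# numerical weather prediction model).

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one forward-Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)  # perturbed by one part in a million

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:4.0f}: separation = {gap:.6f}")
```

The separation stays tiny at first and then grows until the two runs are effectively unrelated, which is the same behavior that limits weather forecasts to roughly 10 to 14 days.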

This is not so much the case for climate models. These models are used for future projections that cannot be verified for decades, and so they are policy instruments. They are built on the NWP atmospheric models but with the inclusion of other parts of the climate system, such as the land surface, oceans, and ice masses. Many aspects of the climate problem hinge on how well many of the interactions are represented, and in this case the physical laws are not known or are very complex. Processes not explicitly represented by the basic dynamical and thermodynamic variables in the equations on the grid of the model need to be included by parameterizations. These include processes on smaller scales than the grid such as convection and boundary layer friction and turbulence, processes that contribute to internal heating such as radiative transfer and precipitation, both of which require cloud prediction, and missing processes such as land surface, carbon cycle, chemistry, and aerosol processes. While our knowledge of certain factors increases, so does our understanding of factors we previously did not account for or even recognize, and hence uncertainty is apt to increase.

The current practice with climate models is to continue to build them to include as much complexity as possible in order to replicate the real world. The process of model development never ends. In general, each new generation of such models does show improvements. Older versions of the models, which, it can be argued, are better evaluated in the literature and somewhat understood, are cast aside for the latest and greatest. However, it can be argued that predictions or projections that correspond to a given “what-if” emissions scenario should be based on a known model whose results are reproducible. Yet new versions of climate models are created, and runs made with them are immediately made available to the community for use in Intergovernmental Panel on Climate Change (IPCC) reports without adequate testing or evaluation. Although some IPCC models deliberately have modest evolutions, some are “bleeding edge” models that are not yet tried and tested. The practice violates many of the principles outlined by Saltelli and Funtowicz. The question is whether the balance is right between building the next-generation model and exploiting the known model.

Transparency is a desirable goal but one that is easily undermined. Another difficulty not discussed by Saltelli and Funtowicz is that in climate science there are vested interests and deniers of climate change whose goal it seems is to undermine the science and projections using any means possible. Many of the denier arguments have been proven wrong time and again, but they keep reappearing.

Models are useful for many purposes, but they can easily be abused and should not be used as black boxes without full understanding of the approximations, assumptions, limitations and strengths. Models are tools and can be extremely valuable if used appropriately.

KEVIN E. TRENBERTH

Distinguished senior scientist

Climate Analysis Section

National Center for Atmospheric Research

Boulder, CO


As a researcher in uncertainty quantification for environmental models, I heartily agree with Saltelli and Funtowicz that we should be accountable, transparent, and critical of our own results and those of others. Open access journals, particularly those accepting technical topics (e.g. Geoscientific Model Development) and replications (e.g. PLOS One), would seem key, as would routine archiving of preprints (e.g. arXiv.org) and (ideally non-proprietary) code and datasets (e.g. FigShare.com). Academic promotions and funding directly or indirectly penalize these activities, even though they would improve the robustness of scientific findings.

However, I found parts of the article somewhat uncritical themselves. The statement “the number of retractions of published scientific work continues to rise” is not particularly meaningful. Even the fraction of retraction notices is difficult to interpret, because an increase could be due to changes in time lag (retraction of older papers), detection (greater scrutiny through efforts such as RetractionWatch.com), or relevance (obsolete papers not retracted). It is not currently possible to reliably compare retraction notices across disciplines. But in a study by Daniele Fanelli of scientific bias, measured by fraction of null results, geosciences and environment/ecology were ranked second only to space science in their objectivity. It is not clear that we can assert there are “increasing problems with the reliability of scientific knowledge.”

There was also little acknowledgement of existing research on the question of which uncertainties have the largest impact on the result, such as the climate projections used in UK adaptation. Much of this research goes beyond sensitivity analysis, which is part of the audit proposed by the authors, because it explores not only uncertain parameters but also inadequately represented processes. Without an attempt to quantify structural uncertainty, a modeller implicitly assumes that errors could be tuned away. While this is, unfortunately, common in the literature, the community is making strides in estimating structural uncertainties for climate models.

The authors make strong statements about the political motivation of scientists. Does a partial assessment of uncertainty really indicate nefarious aims? Or might scientists be limited by resources (computing, staff, or project time) or, admittedly less satisfactorily, statistical expertise or imagination (the infamous “unknown unknowns”)? In my experience modellers may be resistant enough to detuning models and broadening uncertainty ranges without added accusations about their motivation. It would be better to simply argue for the benefits of uncertainty quantification. By showing that sensitivity analysis helps us understand complex models and highlight where effort should be concentrated, we can be motivated by better model development. And by showing where we have been “surprised” by too-small uncertainty ranges in the past, we can be motivated by the greater longevity of our results.

TAMSIN L. EDWARDS

Research associate

School of Geographical Sciences

University of Bristol

Bristol, UK


Green skies

In “Greenhouse Gas Emissions from International Transport” (Issues, Winter 2013), Parth Vaishnav addresses a concern we share, climate change. Before commenting on “market-based measures” to reduce greenhouse gas emissions, I want to offer some context about the aviation industry, Boeing, and the environment.

Aviation is an essential part of modern life, with about 3 billion people boarding commercial airplanes every year. Even with increasingly sophisticated digital technologies and social networks, airplanes retain a unique ability to bring people together. Commercial air travel also helps foster economic development and trade. Our industry generates about 5% of global GDP and supports an estimated 56.6 million jobs, including about 170,000 at Boeing.

And as an industry, we understand that environmental responsibility plays a crucial role in our long-term license to grow.

Since the late 1950s, Boeing has improved the fuel efficiency of our airplanes by 70%, which is essential to our customers not only because of environmental impact but also because of the rising cost of fuel. On a per-passenger-mile basis, airplanes today are more efficient than cars and many other forms of transportation. Today, commercial air travel produces about 2% of global manmade CO2 emissions, a share that is projected to increase to 3% by 2030. This is why Boeing and our industry continue to take action to reduce emissions and improve efficiency.

The aviation industry was the first sector to set ambitious targets for CO2 emissions reduction, including industry-wide carbon-neutral growth beginning in 2020 and a 50% reduction in net CO2 emissions in 2050 compared to 2005 levels.

Boeing R&D investments focus on innovations in propulsion, lightweight materials, and avionics that improve the environmental performance of our products. These innovations are among the reasons why our 787 Dreamliner is 20% more fuel efficient—and produces 20% less CO2—than the airplane it replaces. In addition, we work aggressively with global partners to commercialize sustainable aviation biofuel, and we engage research institutions around the world to improve the efficiency of flight. We also advocate for modernized air traffic management systems that would cut carbon emissions for all airplanes flying by an estimated 12%.

It’s also important to note that our industry has agreed that global market-based measures may play a role to bridge a short-term emissions gap before these new technologies reach their potential. We believe that any money generated from these measures should be put to use to find innovative ways to continue to reduce emissions.

Innovation and new technology will always be at the heart of aerospace. At Boeing, we are actively testing lower-emission aircraft, including a blended wing-body design and hydrogen-powered propulsion. We are also working with the National Aeronautics and Space Administration to explore hybrid, solar, and electric-powered airplanes to create cleaner modes of flight in the decades to come.

Our industry is building on its demonstrated progress and holding ourselves accountable to support continued global economic growth and create a more sustainable future.

JOHN TRACY

Chief Technology Officer

The Boeing Company

Seattle, WA


Farmer suicides

Keith Kloor’s article (“The GMO-Suicide Myth,” Issues, Winter 2014) does a disservice to its scientific audience, and I take issue with it. Not with its thesis that Bt cotton is not directly causing Indian farmer suicides: that is obvious, and could be shown simply by noting that the biggest spike in farmer suicides occurred in Andhra Pradesh in 1998, four years before Bt cotton was even on the market. (The 1998 suicides were publicized in the Wall Street Journal and other newspapers, and received international attention; how short our memory is.) Or one could simply summarize the 2011 Gruère and Sengupta article in Journal of Development Studies showing that suicides have not climbed as Bt cotton has been almost universally adopted in India.

What I take issue with is the use of human tragedy in rural India simply to land a few blows in the relentless genetically modified organism (GMO) brawl. As I have pointed out before, both sides in the brawl claim the suicide epidemic bolsters their case, and neither shows concern for actually understanding what is behind it. Despite a headline invoking the “the real reasons why Indian farmers take their own lives,” this article mentions almost none of the serious scholarship on the topic, omitting even A. R. Vasavi’s insightful and widely-read Shadow Spaces: Suicides and the Predicament of Rural India.

Farmer suicide is a complex problem that can hardly be blamed on a bank policy change that Kloor heard about at a conference. Most small farmers don’t even borrow from banks, and in any case this raises the question of why cotton farmers’ need for credit has risen. State-encouraged, pesticide-intensive hybrid cotton spread during the 1990s, contributing to intractable problems in ecology, farm economics, and farmer decisionmaking. There were social effects as well, as risk and debt became increasingly individualized, unmooring farmers from sources of support.

But Kloor’s goal was not to understand the problem of farmer suicide, but rather to use it to whip up hatred toward Vandana Shiva and “liberal and environmentalist circles,” where GMOs are unpopular. The intent was to turn a complex social science question into a moral fable. Moral fables need villains (as Kloor himself notes), and egged on by Ron Herring, he uses the plight of Indian peasants to villainize Shiva, just as Shiva uses the peasants to villainize Monsanto.

Of course Shiva is wrong on Bt cotton killing farmers, as is Patrick Moore’s hysterical charge that Golden Rice critics are murdering Asian kids. For GMO brawlers like Kloor and Moore and Shiva, the aim is to enflame the like-minded, and hopefully to spread “motivated reasoning” to the undecided. Motivated reasoners use low standards of proof for claims they like, high standards for ones they don’t, and fixate on trashing opponents’ weakest arguments instead of actually considering their strongest. Villainization encourages motivated reasoning, and then charges of murder by the likes of Shiva and Moore really clear the benches.

In other writing Kloor calls GMO opponents unscientific. However, I would suggest that it is articles like this, which bash one side’s irresponsible claims but not the other’s, and which aim to create exasperation rather than insight, that are the real impediments to the scientific understanding of our world.

GLENN DAVIS STONE

Professor of anthropology and environmental studies

Washington University in St. Louis

St. Louis, MO


WILLIAM A. STILES JR.

All Adaptation Is Local

Attention to the political context of coastal communities will be necessary if the United States is to improve on its current storm-by-storm approach to climate adaptation.

Decades of climate science and years of public policy research came together last year on the losing side of a 4-1 vote approving a rural development along Virginia’s Chesapeake Bay shoreline. For anyone working on adaptation to rising sea level, that one decision crystallizes the issues involved.

A developer wanted to put a few hundred homes, a $40 million hotel, 34,000 square feet of retail, and a marina on a piece of soggy coastal land at the end of a peninsula. The land had been designated for conservation/open space use, and the developer needed the Board of Supervisors to change that zoning and allow him to build along the shoreline rather than on the adjacent upland parcel that lacked waterfront views and access.

The developer claimed that the planned community would, “…create significant employment, provide significant economic stimulus and tax base, honor the Maritime Heritage of Northumberland County, honor local architecture, promote tourism for all of Northumberland County, provide educational programs to school children and all residents, provide services to the retirement population and other residents of the county, and provide waste water capacity to some neighbors.” These hyperbolic claims have been made for decades by people seeking to build on coastal land, and now we’ve ended up with pretty ordinary coastal developments.

For decades, developers and shoreline communities alike have seen these coastal projects as cash cows, producing the highest-value homes and generating the highest real estate taxes. Now, however, sea level rise threatens to turn these developments into money pits for localities, because future flooding mitigation costs will exceed property tax revenues.

Our group, Wetlands Watch, argued against this development proposal, saying that even at historical rates of sea level rise (measured as 1.5 feet per century since monitoring began in 1927), the project would be increasingly subject to flooding from above by storm surges as well as flooding from below as the perched water table rose into the development. Projected sea level rise at more than twice current rates would hasten this outcome. Given that a house will last 100 years or more with proper maintenance, we urged the Board of Supervisors to take these centennial rates into account, calculate the taxpayer liability for the adaptation measures this subdivision would require in the future, and conduct a “life-cycle” cost assessment before proceeding.
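For readers who want the arithmetic behind that argument, a minimal sketch follows. It uses only the figures already cited above (a 1.5-foot-per-century historical rate, a projected rate of at least twice that, and a 100-year structure lifetime); it is illustrative and not part of the analysis Wetlands Watch presented to the board.

```python
# Rough sea level rise accumulated over a structure's lifetime, using the rates cited above.
HISTORICAL_RATE_FT_PER_CENTURY = 1.5  # measured since monitoring began in 1927
PROJECTED_RATE_FT_PER_CENTURY = 2 * HISTORICAL_RATE_FT_PER_CENTURY  # "more than twice" current rates
HOUSE_LIFETIME_YEARS = 100            # a well-maintained house

def lifetime_rise(rate_ft_per_century, years):
    """Feet of sea level rise accumulated over the structure's lifetime."""
    return rate_ft_per_century * years / 100.0

print(lifetime_rise(HISTORICAL_RATE_FT_PER_CENTURY, HOUSE_LIFETIME_YEARS))  # 1.5 ft at historical rates
print(lifetime_rise(PROJECTED_RATE_FT_PER_CENTURY, HOUSE_LIFETIME_YEARS))   # at least 3.0 ft if rates more than double
```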

We argued, using all the facts available from the body of work on sea level rise effects, and captured a solitary vote, losing the decision to the alignment of interests that have fueled coastal development unabated since World War II.

I am sure the county supervisors read the letter from the developer promising that utopian future emerging from this subdivision. I doubt that the supervisors had read the latest report from the Intergovernmental Panel on Climate Change (IPCC). Nor had they used any of the proliferation of adaptation “tool kits” being cranked out for coastal communities by academic institutions and government agencies, participated in the numerous Webinars dealing with sea level rise adaptation, attended the National Oceanic and Atmospheric Administration (NOAA) workshop on coastal resilience, or read the U.S. Global Change Research Program’s Synthesis and Assessment Project 4.1 Report, Coastal Sensitivity to Sea-Level Rise: A Focus on the Mid-Atlantic Region.

These are regular people, with full-time jobs outside of government, whose main task on the Board of Supervisors is keeping their municipal government running through the end of the fiscal year. Their time and ability to seek out the growing body of work on sea level rise adaptation are limited. Unfortunately, the ability or interest of those producing this body of work to bring it down to local-level decisionmaking seems just as limited.

Many of those involved in the science and public policy of climate change envision a decision process in which research results carry the day and drive action, and national-scale data underlie uniform national policy. Between that dream and the reality of development decisions such as this one, democracy rears its ugly head. Local elected officials trend less toward Nobel laureate climate scientists and more toward your Uncle Bobby with his 2 years of community college and 15 years of running a wholesale plumbing business. Oh, and by the way, Uncle Bobby and his colleagues on the county board also happen to be in charge of adaptation decisions regarding climate change effects such as sea level rise, a condition of reality that the Nobelists haven’t managed to integrate into their climate models.

Local governments control private land-use decisions, issue business permits and occupancy permits, fund and build secondary roads, build and run schools, operate fire and emergency services—in short, control most of the factors that allow people to live and work on the land. Frustratingly for those who seek uniform adaptation policy, few federal regulatory statutes currently reach through to a local government’s land-use decisions. Only federally mandated floodplain plans, emergency management plans, wetlands permitting, and a handful of other statutes affect those decisions. Even more frustrating to those of us seeking restrictions on coastal development, local governments have wide discretion in implementing and allocating funds from an array of federal and state programs dealing with economic development, transportation, health and welfare, and the like. Without precautionary restrictions on implementation, localities can spend program funds without regard for future climate effects that are all but inevitable.

True, some of those state and federal laws and regulations have explicit prohibitions and restrictions about using the programmatic authority or funding in high-risk areas. For example, federal programs require many precautions and analyses of alternatives if a proposed project is in a 100-year floodplain. However, no federal law or regulation requires localities to anticipate future conditions in these precautions. Estimates of risk (i.e., the area of the 100-year floodplain, the intensity of storms, the recession rate of shorelines, and so on) are retrospective and based on a record of past occurrences. The only mention of future climate change risk in any operational federal program is engineering guidance from the U.S. Army Corps of Engineers that requires that agency to use a prospective sea level rise formula when constructing military buildings along the tidal coast.

So when those county supervisors were considering the proposal to develop that parcel of soggy coastal land, only a citizens’ group and a few environmentalists were saying “No.” All the state and federal agencies involved in that development saw no problem with the proposal and were blind to any danger from sea level rise.

The Veterans Administration and the Department of Housing and Urban Development will guarantee any mortgage in the subdivision, regardless of the home’s elevation. The county’s Community Development Block Grant funds from the U.S. Department of Housing and Urban Development can be used to pay for infrastructure in the community. The U.S. Department of Transportation will pay for any transportation segments eligible for federal cost-share payments. The Environmental Protection Agency will permit the sewage plant, lacking any authority to deny a permit based solely on future flooding risk. Even the National Flood Insurance Program run by the U.S. Federal Emergency Management Agency will offer flood insurance to every homeowner in this development, using current, not projected, measures of mean sea level to define floodplains and compute risk and premiums.

On and on, every federal and state government program aligns with coastal development interests, as if nothing has changed, to continue the status quo. Every federal and state agency program that touched this development proposal on soggy coastal land gave it a green light.

For some agencies, such as the U.S. Department of Commerce, this situation is especially puzzling. On one side of Commerce, in NOAA, much time and effort is spent in educating coastal communities about the need to adapt to sea level rise and other climate effects. On the other side of Commerce, the Economic Development Administration (EDA) keeps doling out development dollars to coastal communities without any mention of sea level rise, no requirement for evaluating future climate change impacts, and no restrictions on the use of these funds based on those future impacts.

When the county board considers a development proposal, Uncle Bobby can either take a position based on a range of climate-model projections from one part of the Department of Commerce, or he can take a position based on the willingness of another part of the Department of Commerce to provide unrestricted funding for the proposal. He can upset the developer and perhaps anger his neighbor who owns the land in question, because of a climate tool kit coming out of NOAA. Or Uncle Bobby can avoid acrimony and agree with the EDA that this development is an idea worthy of federal taxpayer investment.

In current efforts to slow coastal development, it is folly to expect localities to go against the developer’s promise of jobs, a higher tax base, and more tourism when these positions are supported and underwritten by the state and federal governments. It is folly for anyone to expect an outcome different from the 4-1 decision made by the county board on the development we were opposing.

A thought experiment

Imagine for a moment that the county Board of Supervisors was directly exposed to the body of work on sea level rise. Imagine a metaphorical conversation between the Nobel laureates at the IPCC and your Uncle Bobby, county supervisor. What if Bobby agreed with the Nobelists and with NOAA and all the agencies issuing warnings, and felt that climate change was real and needed to be accommodated in local government policy?

Bobby would be presented with a range of projections, reflecting a range of possible responses to climate change by global-scale natural systems and a range of actions to be taken by the global community in reducing greenhouse gases. Should he look at the latest Summary for Policymakers report from the IPCC, he could clearly see for himself what to expect:

“Global mean sea level rise for 2081–2100 relative to 1986–2005 will likely be in the ranges of 0.26 to 0.55 m for RCP2.6, 0.32 to 0.63 m for RCP4.5, 0.33 to 0.63 m for RCP6.0, and 0.45 to 0.82 m for RCP8.5 (medium confidence). For RCP8.5, the rise by the year 2100 is 0.52 to 0.98 m, with a rate during 2081–2100 of 8 to 16 mm yr⁻¹ (medium confidence). These ranges are derived from CMIP5 climate projections in combination with process-based models and literature assessment of glacier and ice sheet contributions (see Figure SPM.9, Table SPM.2).”

Recognizing that Uncle Bobby and his peers are unlikely to act on the basis of that language (or even make any sense of it), a communications expert from NOAA will tell him to expect between one and three feet of sea level rise globally by this century’s end. However, the expert will also tell him that this parcel is located at a specific point on the globe and that the rate and range of sea level rise there, on that property, involves a range of other factors.

Virginia is experiencing the highest measured rate of relative sea level rise on the Atlantic coast, with land subsidence contributing as much to the gradual inundation as the actual rising seas. Also at issue is the apparent slowing of the Atlantic Meridional Overturning Circulation, known to Uncle Bobby as the Gulf Stream. All of this taken together leaves the soggy parcel with between 4 and 6 feet of relative sea level rise expected by the end of this century.

What spoiled policymakers have come to expect from the science and technology (S&T) community is certainty, not a range of estimates. (The S&T community is complicit in these expectations, having sought over the years to broaden its involvement in and impact on public policy decisions by promising useful information in return for research support.) Should the county board ask the S&T community to help pick the right number within that 4- to 6-foot range, they would be told that it is a policy decision, not a scientific one. Further, the S&T community would say, where sea level rise eventually ends up within this range of estimates is complicated, since we are dealing with large global systems. Also, eventual effects will be determined by larger policy decisions such as whether greenhouse gas reduction actions will be taken by Beijing and . . . on and on into the nuanced body of work on climate change effects and mitigation.

For the local government officials, picking the “right” number is critical. Do they act on this development proposal or not? If they deny the development, can they defend their actions, possibly in court, since consideration of prospective sea level rise effects is not in any underlying legal authority? If they approve it, do they require additional freeboard (elevation of living space an additional increment above the minimum required by floodplain ordinances) on structures in this development? At what elevation do they set the freeboard? Do they need to amend their floodplain ordinances or building codes to incorporate these sea level rise projections? How do they design utilities and road segments in this development to address projected sea level rise? What engineering standards do they use for that work?

As this conversation drags on, we’ve again lost Uncle Bobby, who realizes that he will be long dead before anyone can narrow the uncertainty surrounding this development proposal. Each adaptation decision facing local government involves additional political effort and financial cost.

Although that additional cost may be a fraction of the costs avoided in the uncertain future, it is an utterly certain price paid by today’s taxpayers, all of whom vote in Bobby’s next election campaign.

A recent conversation with a storm water engineer in a coastal Virginia city illustrates these challenges to adaptation. The engineer was in charge of a $20 million contract for installing a storm water line. The low-lying city is challenged by sea level rise and the elevation of shallow groundwater tables. The engineer asked the contractor what it would cost to build the system to accommodate the current (1.5 feet per century) regional rate of sea level rise. The answer was an additional $5 million for pump stations and related hardware.

The engineer knew that it was not politically possible to ask for a 25% increase in the cost of the project, all borne by city taxpayers, based on projected effects that would occur decades hence, years after the current city council members had retired. Lacking any requirement for sea level rise adaptation by state or federal environmental agencies, any engineering guidance, or any additional cost-sharing for the adaptation actions, the project went in the ground as originally designed.

The challenge

In fairness, much of the challenge lies not with climate scientists narrowing the range of estimated effects, or even in communicating those risks more effectively. The challenge lies in getting policymakers adapted to living in the new reality of climate change, when they never even adapted to the old reality of gradual sea level rise. In this new reality, decisions must be made on the basis of estimated effects with wider ranges of variability or even using scenarios rather than quantifiable projections. The “spoiled policymakers” referenced above must now live with uncertainty, gradually moving toward more-precautionary strategies as they learn over time from the failures of early, smaller, incremental adaptation approaches. They must learn to live in a world in which the past is no longer prologue and retrospective views of past conditions no longer provide guidance for the future. They must extend their time horizons well beyond the next election.

These adjustments will be hard, especially hard given our lack of experience with climate change impacts such as sea level rise. For most of the past 5,000 years, we have enjoyed—on geologic time scales—atypically stable climate and sea levels. As a result, we have nothing in our literature, law, architecture, engineering, or any other discipline that addresses changes of the order that we will be experiencing in the future. In western Judeo-Christian culture we have two references to what we face: the tale of Noah’s Ark and a children’s story about a Dutch boy preventing the failure of a dike. Not much on which to base major social change. (Although to be fair, the Dutch do take the preservation of their dikes very seriously.)

This work will be made harder by the expense and disruption it will generate. The work we envision will be expensive: learning prudent precautionary levels of adaptation the hard way, storm by storm; buying out properties, even whole communities; paying full actuarial rates on insurance for the coastal risks we face; stacking up rock, sand, and concrete—millions of cubic yards of concrete—to block the waves; withdrawing public support for entire classes of coastal activities, stranding property owners along the shore; and enduring expensive lawsuits as we take these steps, struggling to shape law and policy to fit the new reality. All of this over coming decades will cost trillions of dollars. There will be big losers and, perhaps, a few winners as we unwind our existing relationship with the shore. Of course, doing nothing will cost even more and make everyone losers in the end.

Progress on adaptation will also be conditioned on the reaction of the private sector, a fragmented confederation of interests that is largely being left outside of adaptation conversations. Attempts have been made to include them at higher levels in the dialogue, with large corporations, global reinsurers, and the like expressing concern at national and international conferences. At Uncle Bobby’s level, with the local chamber of commerce or regional business association, there is little to no private-sector voice in support of caution.

As a result, when adaptation measures are discussed, the private-sector reaction at the local level is driven by those who are directly and immediately affected, a sector of businesses dominated by real estate sales, development, and contractor interests. Most of these companies have planning horizons that extend into the future precisely to the point of sale of a property. Any actions that affect the cost of construction of the property or lower the appeal and price of the property will be opposed. Falling into this category of action are most prudent sea level rise adaptation options such as additional floodplain restrictions, properly priced private and federal flood insurance coverage, and public identification or designation of future flooding areas. Actual restrictions on development or redevelopment in those future flooding areas will be strenuously opposed. This has been illustrated in negative reactions to initial attempts at sea level rise adaptation, from the Outer Banks in North Carolina to the San Francisco Bay region of California.

Acting against all of this seemingly insurmountable resistance is the inevitability of the changes we face along the coast. Gradually the private sector will shift its position as risk gets priced into coastal communities, actuarial reality overcomes inertia, and impacts pile up. Coastal communities will find themselves inundated with costs, complaints, and water—lots of water. The public’s relationship with the shoreline will shift from “live on the water,” to “live with the water,” to “move away from the water and come back for a visit from time to time.” Government programs at all levels will withdraw support for further coastal development. With each storm and recovery, low-lying communities built along beaches, strand roads, and at the end of peninsulas will be left on their own and slowly fade away. The life-cycle cost of our changed relationship with the shore will finally be calculated.

This is the messy reality we face, with Uncle Bobby and his peers in charge of adaptation decisions across hundreds of coastal communities. This is the outcome that looms behind the rationality of tool kits, Webinars, conferences, and expectations by Nobelists that facts alone can change behavior in time to avoid the worst.

Clark Williams-Derry of the nonprofit Sightline Institute lays out our challenge well:

“Convincing people that you’re right about an issue—say, the scientific consensus about the threat posed by global warming—can seem vitally important, but in the end may be somewhat beside the point. In the long run, you have to move the debate beyond beliefs, and into incentives: lining up the economic and social incentives such that the right choices are the easy, natural ones. To do that, we need smart and effective policies. Appeals to people’s reason may help, but rational belief alone won’t carry the day.”

We’re back to the decision on that soggy piece of coastal land in rural Virginia and our misalignment of policies and incentives that made that 4-1 decision the easy, natural, and even rational one. We need to look at these individual decisions, pick them apart, speak to local decisionmakers, develop those “smart and effective policies,” and then generate support for them in coastal communities, one by one.

For years, we have acted in ignorance, before we could clearly see the permanence of coastal change. Or if not ignorance, then denial, as we dumped sand on beaches and moved lighthouses and historical homes farther inland from the eroding coast. Today we seem to be acting out of indifference, believing in our ability to continue as we have for decades, hoping all of this is not true. Soon we will be acting out of inevitability, making different choices because there is no other option.

Yet each storm presents a teachable moment, an event that causes those who live on and make decisions about the local lands to rethink where their true interests lie, and gives ever-stronger voice to those who are advocating for adaptation. The gamble we have made is that our short-term interests will outweigh the long-term costs of failing to adapt. Storm by storm, the odds on this gamble will grow longer, but meanwhile how many more new coastal developments are going to be approved and how many old coastal developments are going to be drowned?

The trick is to move more rapidly toward the inevitability of action and minimize the greater expense and disruption that comes from having started too late. That will require many hours of conversation at the local diner’s corner booth between Uncle Bobby, the Nobelists, and government policymakers, in hundreds of municipalities along the nation’s coasts. It will also, unfortunately, involve far too many more storm and flooding events as conversation starters.


William “Skip” Stiles () is executive director of Wetlands Watch, an environmental nonprofit that has been working statewide in Virginia for over six years to help localities adapt to sea level rise. Before this work, he spent 22 years in a number of staff positions in the U.S. House of Representatives, as chief of staff to the late Congressman George E. Brown Jr., and as legislative director for the House Science Committee.


Edward Burtynsky

WATER

Internationally renowned Canadian photographer Edward Burtynsky’s latest body of work, Water, explores the course, collection, control, displacement, and depletion of this vital natural resource. The exhibition is the second initiative of the New Orleans Museum of Art (NOMA)-Contemporary Arts Center (CAC) programming partnership and features 60 large-scale photographs that form a global portrait of humanity’s relationship to water. The exhibition runs October 5, 2013, through January 19, 2014, at the CAC in New Orleans.

Burtynsky has long been recognized for his ability to combine vast and serious subject matter with a rigorous, formal approach to picture making. The resulting images are part abstraction, part architecture, and part raw data. In producing Water, Burtynsky has worked across the globe—from the Gulf of Mexico to the shores of the Ganges—weaving together an ambitious representation of water’s increasingly fragmented lifecycle.

“Five years in the making, Water is at once Burtynsky’s most detailed and expansive project to date, with images of the 2010 Gulf oil spill, step wells in India, dam construction in China, aquaculture, farming, and pivot irrigation systems,” said Susan M. Taylor, Director of the New Orleans Museum of Art. In addition, Water includes some of the first pure landscapes that Burtynsky has made since the early 1980s. These archaic, almost primordial images of British Columbia place the structures of water control in a historical context, tracing the story of water from the ancient to the modern, and back again.

Although the story of water is certainly an ecological one, Burtynsky is more interested in presenting the facts on the ground than in declaring society’s motives good or bad. In focusing on all the facets of people’s relationship with water, including ritual and leisure, Burtynsky offers evidence without an argument. “Burtynsky’s work functions as an open-ended question about humanity’s past, present, and future,” said Russell Lord, Freeman Family Curator of Photographs at the New Orleans Museum of Art. “The big question is: Do these pictures represent the achievement of humanity or one of its greatest faults, or both? Each visitor might find a different answer in this exhibition, depending upon what they bring to it.”

The exhibition, organized by Russell Lord, is accompanied by a catalogue published by Steidl with over 100 color plates from Burtynsky’s water series. It includes essays by Lord and Wade Davis, renowned anthropologist and Explorer-in-Residence at the National Geographic Society. More information can be found at www.edwardburtynsky.com.


PARTH VAISHNAV

Greenhouse Gas Emissions from International Transport

International transport, which includes ocean shipping and aviation, is among the fastest-growing sources of human-generated greenhouse gas emissions. Between 2009 and 2010, carbon dioxide (CO2) emissions from international transport grew faster—at 7 and 6.5%, respectively—than those from China, which grew by 6%. Although 2010 was a year of especially rapid growth as global trade and travel bounced back from the 2009 recession, emissions from this activity are expected to grow to between two and three times their current level by 2050. This growth will start from a small but substantial base: If the sector were a country, its current emissions would be roughly the size of those of Japan or Germany.

Rising emissions from international transport could dilute hard-won reductions in other sectors, such as the switch from coal to wind and solar electricity. To see how, consider the case of the United Kingdom. In 2010, it emitted about 500 million tons of CO2. Domestic and international flights departing from the United Kingdom in that year emitted 33 million tons, or about 7% of the total. The United Kingdom has instituted a legally binding commitment to reduce its annual greenhouse gas emissions in 2050 to one-fifth of their level in 1990. This means that in 2050, the United Kingdom ought to emit a mere 120 million tons of CO2. The U.K. Department of Energy and Climate Change has forecast that under current policies to control their rise, CO2 emissions from aviation in the United Kingdom will rise to about 50 million tons, or an untenable 42% of the total.
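The shares quoted above follow from simple division. The short sketch below, which uses only the totals cited in this paragraph, reproduces the roughly 7% figure for 2010 and the roughly 42% figure for 2050.

```python
# UK aviation's share of national CO2 emissions, using the figures quoted above (million tons CO2).
uk_total_2010 = 500
uk_aviation_2010 = 33
uk_total_target_2050 = 120       # roughly one-fifth of the 1990 level
uk_aviation_forecast_2050 = 50   # forecast under current policies

share_2010 = uk_aviation_2010 / uk_total_2010                   # about 7% of the total
share_2050 = uk_aviation_forecast_2050 / uk_total_target_2050   # about 42% of the total

print(f"2010 aviation share: {share_2010:.0%}")
print(f"2050 aviation share: {share_2050:.0%}")
```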

The unchecked growth of emissions from transport is therefore inconsistent with the drastic reduction in the overall production of greenhouse gases that is required to forestall dangerous climate change. The Kyoto Protocol to the United Nations Framework Convention on Climate Change (UNFCCC) calls for the International Civil Aviation Organization (ICAO) and the International Maritime Organization (IMO) to put in place mechanisms to limit the contribution of international transport to global warming.

Who is responsible?

Given the nature of global transport, it is difficult to allocate its environmental impacts to any one country. Consider a ship that is registered in Liberia, operated by a Danish shipping line, and making a voyage from Shanghai to Los Angeles carrying products made in China by a European firm for sale in North America. How and to whom should the emissions from this voyage be allocated, and who should be assigned responsibility for reducing them? Questions such as these have proven to be politically intractable.

One reason is that the UNFCCC has traditionally operated on the principle of common but differentiated responsibilities. This principle suggests that developing countries have made a smaller historic contribution to environmental problems than have developed countries, and may not have the wherewithal to tackle them. Therefore, developing countries have argued that they ought to be exempt from taking on legally binding commitments as part of any program to curb global greenhouse gas production.

Conversely, the ICAO and the IMO have operated on the principle of nondiscrimination. This is the notion that regardless of its nationality, an aircraft or ship performing an international voyage ought to be subject to the same rules and standards. Developed countries argue that in the interests of effectiveness and efficiency, this principle is sacrosanct. That is, if developed countries take on legally binding obligations as part of a deal to reduce the environmental footprint of aviation and ocean shipping, then so must developing nations.


Tools available, but dull

Progress at both the ICAO and the IMO has been sluggish. In 2011, the IMO defined an efficiency standard for new ships. It came into effect in 2013 and will be progressively tightened. For existing ships, the IMO published guidelines for voluntary energy management plans. It forecasts that this combination of measures is likely to reduce emissions from shipping by 180 million tons by 2020, or by 9 to 16% relative to business as usual. By the IMO’s own (conservative) estimate, operators would save money by adopting this standard.

The IMO admits that its measures would fall far short of compensating for the increase in emissions due to burgeoning international trade over the next few decades. It has recommended that a market-based mechanism be put in place to augment the measures adopted so far, but has not published details of how this mechanism would work.

For its part, the ICAO in 2011 asked all of its 191 member states to submit plans for greening their aviation sectors. By June 2013, 61 countries, representing about 80% of the world’s international air traffic, had done so. The ICAO reported in September 2013 that some of these plans were too sketchy for it to be able to estimate their impact on emissions.

The European Union, facing the same dilemma as the United Kingdom (that is, stringent economy-wide targets undermined by rampant growth in transport emissions), announced that it would include aviation in its Emissions Trading Scheme from 2012 onward. In particular, the European Union said that any flight, domestic or international, that departed from an EU airport would fall under the purview of its scheme.

Although the impact on airfares of the EU proposal would have been modest (in 2012, about $2 per passenger per round-trip flight between New York and London), there was international outrage at the European Union’s proposal to act unilaterally. Critics pointed out that only the ICAO had the authority to impose an environmental charge on international flights.

The European Union agreed to defer implementation to give the ICAO time to develop an alternative. In September 2013, the ICAO declared that it would propose a market-based mechanism to reduce greenhouse gas emissions from international aviation by 2016 and implement it by 2020. It outlined three plausible variants of such a mechanism. The first variant was to require airlines to buy credits each year if their emissions exceeded a predefined threshold. The second was to require airlines to buy credits and also to generate revenues by applying a fee to each ton of carbon emitted. The third was to set a cap on emissions within the sector, and allocate or auction credits equivalent to this cap. Operators that exceeded such a cap would be required to buy additional credits from others who had come in under it.

The ICAO is also working on developing an efficiency index for aircraft. However, the organization has not yet set mandatory targets for the efficiency levels that current or future aircraft must reach.

Tools may be useful anyway

The measures that the IMO and ICAO have suggested so far could go some way toward addressing the problem. For instance, as a first step in implementing the proposed market-based mechanisms for both industries, accurate data on fuel burn will need to be collected. For shipping, it is not clear that sufficiently detailed data are logged at all. For aviation, these data are not made publicly available even if they are logged. The availability of accurate fuel burn data, even in aggregate form, should improve the quality of debate and policymaking even before the schemes themselves have any effect.

In the absence of regulation, even improvements that are economically viable may not be made. For instance, ships are often owned and operated by different entities. The benefits of higher efficiency may accrue to whoever charters the ship. Owners would have to bear the upfront cost of a more efficient ship. The premium that they receive on chartering out these more efficient ships is often not large enough to make up for the extra initial cost. Sometimes owners may not know about the technologies available to them. Or they may not have access to finance to pay for improvements, even if they thought that they could recover their costs over time.

The fuel efficiency of air travel in the United States has improved more than that of any other mode of transport in the past 30 years. And yet, recent analysis of the market showed that there was a wide gap in the performance of airlines on this measure. Moreover, the correlation between an airline’s efficiency ranking and its profitability was very small. The most profitable airlines were those that served niche markets with little competition, and so did not have a strong incentive to operate as efficiently as possible.

The IMO’s standard could provide the impetus for ship owners and operators to ensure that at least those modifications that are likely to produce environmental benefit and pay for themselves over time will be adopted. The same is true of the nascent efficiency standard for aircraft.

As such, efficiency standards can be useful. For shipping, there is scope to make the existing standard more ambitious. For aviation, the ICAO should work on developing and enforcing a standard to augment the proposed market-based mechanism.

Toward an efficient mechanism

The potential of standards is limited by the fact that after a point, reducing emissions from shipping and aviation becomes very expensive. Analysis by the IMO indicates that by 2020, annual CO2 emissions from shipping could be cut by 250 million tons using methods that would save operators money in the long term. Beyond this level, the cost of reducing each additional ton of emissions would escalate rapidly, and further cuts would become uneconomical, given current technology.

In aviation, there is some enthusiasm for the use of sustainable alternative fuels. Analysis by researchers at the Massachusetts Institute of Technology suggested that even if the feedstock could be grown on land that would otherwise have been left fallow, reducing CO2 emissions by switching to biofuels produced by currently available technology would cost $50 per ton of emissions avoided. If biofuels from soybean oil were used instead, the cost would be $400 per ton of emissions avoided.

Because the cost of making large cuts in emissions within international transport is probably prohibitive, it is economically efficient for the industry to pay for cuts to be made in other sectors, where they are cheaper to make. Imagine that the international transport sector implemented a global emissions trading scheme in which the right to emit one ton of CO2 over a certain threshold traded at $30. Analysis published in 2013 and led by Annela Anger, then at the University of Cambridge, suggested that under such conditions, net emissions from international transport would be 40% lower than they would otherwise have been. Only about 2% of this drop would come from reduced emissions in the sector itself. For the rest, airlines and ship operators would buy credits generated by the Clean Development Mechanism, developed under the Kyoto Protocol, which allows a country with an emission-reduction or emission-limitation commitment to purchase certified emission-reduction credits from developing countries. Because these purchases would produce benefits in developing countries, the net impact on global economic output would be slightly positive.
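The economic logic can be sketched in a few lines: an operator abates within the sector only while its marginal abatement cost is below the credit price, and buys offsets for the remainder. The toy example below illustrates this. The credit price is the $30 figure quoted above, but the abatement steps and the required reduction are invented for illustration and are not drawn from the cited analyses.

```python
# Toy illustration of why offsets dominate: abate in-sector only while it is cheaper
# than the credit price, and buy offsets for the remainder. The abatement "steps"
# below are invented for illustration, not taken from the studies cited in the text.
CREDIT_PRICE = 30.0  # $ per ton of CO2

# (tons of CO2 that can be cut, marginal cost in $ per ton), ordered cheapest first
abatement_steps = [(2.0, 10.0), (1.0, 25.0), (3.0, 80.0), (4.0, 400.0)]
required_reduction = 8.0  # tons the operator must cut or offset

in_sector_cut = 0.0
cost = 0.0
for tons, marginal_cost in abatement_steps:
    if marginal_cost < CREDIT_PRICE and in_sector_cut < required_reduction:
        take = min(tons, required_reduction - in_sector_cut)
        in_sector_cut += take
        cost += take * marginal_cost

offsets = required_reduction - in_sector_cut  # everything else is bought as credits
cost += offsets * CREDIT_PRICE

print(f"Cut in-sector: {in_sector_cut} t, bought as offsets: {offsets} t, total cost: ${cost:.0f}")
```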

The impact of an emissions fee on airfares is likely to be small. Given that demand for international air travel is relatively insensitive to price, it is unlikely to restrict the movement of people. The cost would primarily be borne by those who are already wealthy enough to fly. This could make it palatable to developing countries, which have argued that taking on costly obligations to reduce the environmental impact of their growth would hurt their poorest citizens.

Analysis by the UN Secretary General’s Advisory Group on Climate Finance showed that putting a price of $45 on each ton of CO2 emissions from marine transport would have a minimal impact on the prices of commodities. For low-value commodities such as jute shipped from Bangladesh to Europe, the price would rise by about 2%. For high-value commodities such as coffee, the rise in price would be about 0.2%.
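The rough mechanics behind those percentages: the surcharge per ton of cargo is the carbon price multiplied by the CO2 emitted in shipping that ton, and the price impact is that surcharge divided by the commodity’s value per ton. The sketch below uses assumed values for cargo emissions intensity and commodity prices (they are illustrative, not figures from the Advisory Group’s analysis) and lands near the cited 2% and 0.2% figures.

```python
# Rough pass-through of a $45-per-ton CO2 charge into commodity prices.
# Emissions per ton of cargo and commodity values are illustrative assumptions,
# not figures from the Advisory Group's analysis.
CO2_PRICE = 45.0       # $ per ton of CO2
CARGO_EMISSIONS = 0.2  # tons of CO2 per ton of cargo, assumed for a long-haul voyage

commodity_value_per_ton = {"jute": 500.0, "coffee": 4000.0}  # $ per ton, assumed

for name, value in commodity_value_per_ton.items():
    surcharge = CO2_PRICE * CARGO_EMISSIONS  # $ added per ton of cargo
    print(f"{name}: price rise of about {surcharge / value:.1%}")
```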

Even the international airline industry, as represented by the International Air Transport Association and others, has expressed support for a global scheme that allows the industry to offset its emissions by buying credits from other sectors.

Transport as test bed

The governing bodies of the ICAO and the IMO (which represent developing and developed countries) and the international transportation industry have all acknowledged the need for a mandatory global mechanism to curtail or offset the growth of planet-warming emissions from the sector. The ICAO and the IMO have long had a joint working group on harmonizing aeronautical and maritime search and rescue, yet their responses to the greenhouse gas challenge have so far been developed independently. They should consider a similar joint group on greenhouse gas emissions, especially given that they have already converged on a similar basket of measures. They are likely to face similar problems in implementing these measures, and each should learn from the other’s experiences. For instance, the ICAO has published details of how different types of market-based mechanisms might work in the aviation sector, and the IMO could use these as a template for its own proposal. The IMO’s experience of implementing an efficiency standard for new ships could usefully inform the ICAO’s fledgling efforts to devise and roll out a similar benchmark.

Even though the impact on the global economy of attaching a price to the CO2 emitted by international transport is likely to be small, it could still be painful for some small countries, such as those whose economies rely heavily on international tourism. These countries might ask for exemptions from any global scheme, but excluding them would reduce efficiency and effectiveness. Indeed, if airlines or shipping lines routed journeys through countries that were exempt, total emissions could rise.

Auctioning credits or applying a fee to international sales of fuel oil for ships, called bunker fuel, could generate revenues for the Green Climate Fund that was proposed at the Cancun Climate Change Conference. This fund could compensate countries whose economies are disproportionately hurt, thus providing an economically efficient way to reconcile the principle of common but differentiated responsibilities with that of nondiscrimination.

Efforts to get an economy-wide, global deal on reducing greenhouse gas emissions have so far been frustrated. The IMO and ICAO have produced solutions to environmental problems such as acid rain–producing emissions from ships and noise from aircraft. All of their members have agreed to implement these solutions. International transport is an activity that clearly operates in the global commons, and where it is understood that some pooling of national sovereignty is essential.

The sector is small enough that it could buy offsets from existing sources of carbon credits, such as the Clean Development Mechanism, and that the impact on the global economy of taxing pollution from it would be small. And yet, implementation of such a policy would require policymakers to address all of the problems associated with an economy-wide solution: monitoring emissions, ensuring near-universal participation, and compensating countries that are hardest hit, as well as generating and recycling revenues for climate change mitigation and adaptation.

In addressing the specific problem of emissions from international transportation, policymakers have a test bed suitable for pioneering and evaluating innovative strategies and institutions to solve more general problems in the collective control of global climate change. This opportunity should not be squandered.


Parth Vaishnav () is in the Department of Engineering and Public Policy at Carnegie Mellon University in Pittsburgh, PA.

IDDO K. WERNICK

Living in a Material World

The doctrine of materialism, dating back to the ancient Greeks and Chinese and providing background for Descartes and Marx, argues that all phenomena found in nature can be explained by causal material factors. Because materialism is assumed to apply to all observed phenomena, it is also assumed that materialism can be applied to explain the behavior of life and systems of living things. This assumption forms a basis for the study of animal and human biology, as well as the study of ecological and social systems.

Is this so? Are life and living systems amenable to materialist explanations? Are such explanations poorly understood or are they fundamentally elusive? Does life exhibit the regularity that allows for the application of mathematics? Does the reduction of living systems to enable more precise mathematical treatment oversimplify them to the point of rendering them untrue to what they are? At some granular level, might life and living systems rely on the events occurring within an irreducible decision box that remains unpredictable?

Unlike in the physical sciences, description, more than explanation, continues to occupy most life scientists. Better description of ailments constituted much of medical practice until the beginning of the 20th century. A history of disciplined observation of the regularities found in living systems has yielded great insights (such as the germ theory of disease and immunization through vaccination) and delivered enormous health benefits in terms of increased longevity and prevented suffering. The advent of better diagnostics that enable more precise (and even dynamic) description of biological parameters continues to improve the delivery of health services to patients. Long-term statistical studies benefit large populations. Nonetheless, for the individual patient, the ability to associate symptoms with physiological mechanisms and predict health outcomes suffers because of the small sample size.

Because protecting and promoting human life remains fundamental to human society, medicine is always necessary whether or not it derives from a complete understanding of how the human body works. The patient is sick and must be treated. Honest practitioners will say, though, that despite advanced diagnostics, our understanding of even very basic mechanisms of how the human body works remains partial. Drugs that aggravate cardiovascular problems for the same reason they are effective in reducing joint inflammation offer a case in point.

Although some proponents argue that DNA analysis will offer personal “customizability” in future health care, knowledge of sequence has yet to lead directly to knowledge of outcomes. The structure of the DNA molecule is known, but the syntax (and thus the meaning) of the genetic code remains mysterious. The structure itself is not deterministic. The weakness in the current knowledge of the mechanisms leading to disease is also evident at the level of organisms and their habitat, as science remains far from achieving a definitive characterization of the pathways and toxicology of the brew of synthetic chemicals that cloak the environment. In practice, the material chain of events leading to the diagnosis, treatment, and outcome of a human patient will remain uncertain. Statistical data, physiology, and patient behavior, as well as physician patience and judgment, all contribute to treatment decisions. The ability to generate predictions based on statistical analyses worsens in moving from simple organisms to more complicated systems. Science can describe microbes better than it can describe adolescent girls, and describe girls better than the functioning of a modern city.

Applying materialism to human social activity requires identifying parameters to measure and using those measurements to predict. Measuring the data and trusting that it can be used to predict the future responds to very practical needs. The fact that rational frameworks can be applied to describe human societies appeals to bureaucrats, businesspeople, and scientists alike. Mathematical models remove bias. The abstraction provided allows for nonideological decisionmaking.

Models as justifiers

Government bureaucrats, seeking objective explanations to justify expenditures, encourage the use of statistical models to describe the processes at work in societies. Based on model results, scientific rigor is invoked, as is the claim to objectivity, when determining how to direct public resources. Commerce itself of course benefits handsomely from the predictability of a reliable, mechanistic world. When Cornelius Vanderbilt offered regularly scheduled ship and rail service, commerce followed. Both bureaucrats and businesspeople rely on an orderly world where society operates according to rules. Mechanistic models offer an ideal, despite their lack of any consistent ability to predict.

Arguably, the scientific enterprise betrays an innate preference for systems that exhibit regularity most of all. That regularity gives meaning to a scientific description of reality that relies on the existence of fixed relationships between variables. The need for regularity may even undermine objectivity. For example, breeding strains of laboratory mice with rapid reproductive cycles may expedite orderly data generation, but it may also introduce bias into the subject population that becomes embedded in the analysis. Only by assuming regularity can sociologists and ecologists isolate single variables and attempt to describe their effect on a society or ecosystem.

What harm could come from the expectation that all features of life and living systems can be counted and understood? How would society benefit from revisiting the suitability of so strictly applying materialism to predict outcomes for life and living systems? Despite the flaws of the materialist approach, does it not ensure the greatest amount of objectivity? Does it not provide the most benefit to the largest number of people? Why should society question a strictly materialist model of life for social decisionmaking?

Blind adherence to the materialist idea that today’s best mathematical models should always provide the basis for social policy poses several problems. New biases are introduced, or perpetuated, by relying too heavily on materialist approaches. As computers become more powerful, society may limit itself to considering variables that can be captured or counted (i.e., digitized or “datafied”) so that they can be modeled mathematically. The drive to digitize all information can force crude approximations of the factors that influence life and living systems. Modern society winds up restricting its interests to data suited to the binary format of current digital computers.

Many human factors may lie outside that format. For example, the quest for greater efficiency will move health care even more toward an exercise in matching diagnostic codes and treatment codes. These codes already drive the system more than they respond to its needs. Code-matching naturally follows as the best response in a world where it is possible to handle essentially unlimited amounts of data. Once the framework is established, data definitions become entrenched. Subsequent policy evolution locks in early decisions about what codes to use, what data fields to populate, and what budget factors to consider in conducting cost/benefit analyses. Legacy data definitions drive the governmental and industrial responses, limiting the future range of possible actions.
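The entrenchment is easy to picture in miniature. The sketch below is purely illustrative, not a description of any real clinical or billing system; the diagnostic codes, treatment codes, and the recommend function are all hypothetical. It shows how, once a fixed code-matching table defines what the system can see, anything that was never assigned a code simply falls out of the analysis.

```python
# A purely hypothetical sketch of code-matching: a fixed mapping from
# diagnostic codes to treatment codes defines everything the system can "see."
# All codes and names below are invented for illustration.

TREATMENT_FOR = {
    "DX-101": "RX-APPLY-SPLINT",          # hypothetical diagnostic -> treatment pairs
    "DX-202": "RX-ANTI-INFLAMMATORY",
    "DX-303": "RX-REFER-CARDIOLOGY",
}

def recommend(diagnostic_code: str) -> str:
    """Return the treatment code matched to a diagnostic code.

    A condition, observation, or social circumstance that the legacy schema
    never assigned a code has no entry here, so it cannot influence the result.
    """
    return TREATMENT_FOR.get(diagnostic_code, "NO-ACTION-CODED")

if __name__ == "__main__":
    print(recommend("DX-202"))            # RX-ANTI-INFLAMMATORY
    print(recommend("caregiver concern")) # NO-ACTION-CODED: invisible to the ledger
```

Refining the model only tunes the mapping; it does not expand what counts as data in the first place.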

Unspoken assumptions

When formulating broader social policy, unspoken assumptions abound regarding what constitutes the “greater good.” Here, too, the desired objectives, and the means to reach them, will favor measurable data. The data can be used to advance any number of policy agendas that may objectively reflect the interests of their proponents but remain partial. The drive to quantification favors economic analysis and the necessary valuation of public goods. Conveniently, dollars offer an eminently measurable variable, a common convertible currency that captures the value of livelihoods and lives, playgrounds and prisons, and all things of value to society. Judged by economic models, the policies of the 1950s and 1960s that presaged civic decline and suburban sprawl appeared to offer the most promising solutions to the social engineers and business interests that promoted them at the time.

The materialist approach influences not only how the United States sees itself, but how it sees other societies as well. The notion that aggregate wealth offers the best proxy for measuring social progress is not universal. Other cultures may aspire to a more equitable wealth distribution, greater national prominence, recognized technological prowess, or the exalted glory of God. These social goals remain important to societies around the globe and influence national-level decisionmaking in much of the world. The successes of neoliberalism notwithstanding, seeing the world through a strictly materialist lens may systematically underestimate the importance of the religious and cultural forces that motivate societies.

Perhaps the most troubling consequence of treating the best current modeling efforts as the definitive materialist approach (that is, as the rational understanding) is that the tail increasingly wags the dog. In a digital age, model results are used to set priorities and social goals in ways that may hide what is in plain sight. The overwhelming attention to the modeling of climate change diminishes the attention paid to other, equal or even greater, environmental concerns such as municipal water systems, childhood disease, and urban air pollution, as well as social concerns such as public safety.

Things easily modeled receive the most attention in the social sphere, whether they convey or obscure the relevant scientific parameters. Climate offers a clear case of modeling exercises used to advance political agendas by choosing which data to focus on and how to tweak the (literally) hundreds of parameters in any given model. Whether by design or default, the model tends to vindicate the modeler; for instance, the modeler who selects which natural mechanisms to include and which to neglect when modeling the annual global flux of carbon. Models, and the policies based on them, ignore the consequences of climate change mitigation strategies, such as costly regressive electricity rates that force even middle-class people to scavenge the forest for fuel, as well as the benefits of global carbon fertilization. What becomes obscured is the fact that a self-consistent description useful for numerical modeling may not faithfully represent reality, whether physical or social.

Models offer an abstraction, a common basis for dialogue. For example, global initiatives such as the ongoing international activity that began with the Earth Summit in Rio de Janeiro in 1992 were inspired by, and continue to derive their relevance from, model results. In trying to describe social and environmental problems, much effort is expended in modeling global inequity or evidence of environmental crisis. The effects of changing consumer attitudes that drive rising living standards, and regional political realities such as war and lawlessness, typically do not find their way into the analysis. Still, a vast enterprise continues to operate under the assumption that model refinement will always lead to greater accuracy in describing socially dependent natural phenomena and that such accuracy will lead to better remedies for problems. Such expectations persist because materialist assumptions go unchallenged.

Adding needed perspective

What can be done? Given the pervasiveness and attractiveness of materialism and its centrality to Western thought, no simple list of policy recommendations can correct for its undue influence. But several changes in how the nation and society treat the results of strictly mathematical descriptions of social phenomena may help put those results in better proportion in public life.

One step would be to demand greater transparency in models used as evidence to formulate social policy. Transparency about the assumptions and limits of validity of studies involving large, complicated systems would give government and society a better understanding of where quantitative analysis is and is not appropriate. Such stipulations might be alien not only to those who use models to justify their political agendas, but also to scientists trying their best to create self-consistent digital versions of observed phenomena. Yet such transparency would expose the latent bias and the poor understanding of mechanism manifest in many mathematical descriptions of living (and nonliving) systems. Nature can never be proved wrong, but the errors of those who claim to understand it are legendary.

A further step involves actively incorporating ground-truthing from practitioners, not only from experts, when investigating the effects of proposed changes in public policy. Those with the common sense born of experience (such as patient caregivers, field scientists, engineers, and local officials) should be allowed to reclaim a stronger voice in public decisionmaking. Protocols that treat expert analysis or computer simulations as sacrosanct in all cases should be reexamined.

As with life and living systems, the description of physical systems at the level of thermodynamic ensembles also relies on statistics. The main difference is that the units that make up nonliving systems lack volition (i.e., will), a property found in the units that make up living systems. The debate is old, and the contention here is that despite their regularities, humans and human societies make choices. They are choices because they can, and do, defy prediction, even if they may seem inevitable, or at least explainable, in hindsight. Should life be modeled to the point of deliberately ridding it of the very drama that makes it dear? Stripping life of its serendipity to fit a model may not only be an assault on the soul; it may simply substitute one type of bias for another.


Iddo Wernick is a research associate in the Program for the Human Environment at The Rockefeller University.