Forum – Summer 2014

Evidence-driven policy

In “Advancing Evidence-Based Policymaking to Solve Social Problems” (Issues, Fall 2013), Jeffrey B. Liebman has written an informative and thoughtful article on the potential contribution of empirical analysis to the formation of social policy. I particularly commend his recognition that society faces uncertainty when making policy choices and his acknowledgment that learning what works requires a willingness to try out policies that may not succeed.

He writes “If the government or a philanthropy funds 10 promising early childhood interventions and only one succeeds, and that one can be scaled nationwide, then the social benefits of the overall initiative will be immense.” He returns to reinforce this theme at the end of the article, writing “What is needed is a decade in which we make enough serious attempts at developing scalable solutions that, even if the majority of them fail, we still emerge with a set of proven solutions that work.”

Unfortunately, much policy analysis does not exhibit the caution that Liebman displays. My recent book Public Policy in an Uncertain World observes that analysts often suffer from incredible certitude. Exact predictions of policy outcomes are common, and expressions of uncertainty are rare. Yet predictions are often fragile, with conclusions resting on critical unsupported assumptions or on leaps of logic. Thus, the certitude that is frequently expressed in policy analysis often is not credible.

A disturbing feature of recent policy analysis is that many researchers overstate the informativeness of randomized experiments. It has become common to use two of the terms in the Liebman article—“evidence-based policymaking” and “rigorous evaluation methods”—as code words for such experiments. Randomized experiments sometimes enable one to draw credible policy-relevant conclusions. However, there has been a lamentable tendency of researchers to stress the strong internal validity of experiments and downplay the fact that they often have weak external validity. (An analysis is said to have internal validity if its findings about the study population are credible. It has external validity if one can credibly extrapolate the findings to the real policy problem of interest.)

Another manifestation of incredible certitude is that governments produce precise official forecasts of unknown accuracy. A leading case is Congressional Budget Office scoring of the federal debt implications of pending legislation. Scores are not accompanied by measures of uncertainty, even though legislation often proposes complex changes to federal law, whose budgetary implications must be difficult to foresee.

Why do policy analysts express certitude about policy impacts that, in fact, are rather difficult to assess? A proximate answer is that analysts respond to incentives. The scientific community rewards strong, novel findings. The public takes a similar stance, expecting unequivocal policy recommendations. These incentives make it tempting for researchers to maintain assumptions far stronger than they can persuasively defend, in order to draw strong conclusions.

We would be better off if we were to face up to the uncertainties that attend policy formation. Some contentious policy debates stem from our failure to admit what we do not know. Credible analysis would make explicit the range of outcomes that a policy might realistically produce. We would do better to acknowledge that we have much to learn than to act as if we already know the truth.

CHARLES F. MANSKI

Board of Trustees Professor in Economics and Fellow of the Institute for Policy Research

Northwestern University

Evanston, Illinois

[email protected]

Manski is author of Public Policy in an Uncertain World: Analysis and Decisions (Harvard University Press, 2013).

Model behavior

With “When All Models Are Wrong” (Issues, Winter 2014), Andrea Saltelli and Silvio Funtowicz add to a growing literature of guidance on handling scientific evidence for scientists and policymakers; recent examples include Sutherland, Spiegelhalter, and Burgman’s “Policy: Twenty tips for interpreting scientific claims,” and Chris Tyler’s “Top 20 things scientists need to know about policymaking.” Their particular focus on models is timely as complex issues are of necessity being handled through modeling, prone though models and model users are to misuse and misinterpretation.

Saltelli and Funtowicz provide mercifully few (7, more memorable than 20) “rules,” sensibly presented more as guidance and, in their words, as an adjunct to essential critical vigilance. There is one significant omission; a rule 8 should be “Test models against data”! Rule 1 (clarity) is important in enabling others to understand and gain confidence in a model, although it risks leading to oversimplification; models are used because the world is complex. Rule 3 might more kindly be rephrased as “Detect overprecision”; labeling important economic studies such as the Stern review as “pseudoscience” seems harsh. Although studies of this type can be overoptimistic in terms of what can be said about the future, they can also represent an honest best attempt, within the current state of knowledge (hopefully better than guesswork), rather than a truly pseudoscientific attempt to cloak prejudice in scientific language. Perhaps also, the distinction between prediction and forecasting has not been recognized here; more could also have been made of the policy-valuable role of modeling in exploring scenarios. But these comments should not detract from a useful addition to current guidance.

Alice Aycock

Those visiting New York City’s Park Avenue through July 20th will experience a sort of “creative disruption.” Where one would expect to see only the usual mix of cars, tall buildings, and crowded sidewalks, there will also be larger-than-life white paper-like forms that seem to be blowing down the middle of the street, dancing and lurching in the wind. The sight has even slowed the pace of the city’s infamously harried residents, who cannot resist the invitation to stop and enjoy.

Alice Aycock’s series of seven enormous sculptures in painted aluminum and fiberglass is called “Park Avenue Paper Chase” and stretches from 52nd Street to 66th Street. The forms, inspired by spirals, whirlwinds, and spinning tops, are hardly the normal view on a busy city street. According to Aycock, “I tried to visualize the movement of wind energy as it flowed up and down the avenue creating random whirlpools, touching down here and there and sometimes forming dynamic three-dimensional massing of forms. The sculptural assemblages suggest waves, wind turbulence, turbines, and vortexes of energy…. Much of the energy of the city is invisible. It is the energy of thought and ideas colliding and being transmitted outward. The works are the metaphorical visual residue of the energy of New York City.”

Aycock’s work tends to draw from diverse subjects and ideas ranging from art history to scientific concepts (both current and outdated). The pieces in “Park Avenue Paper Chase” visually reference Russian constructivism while being informed by mathematical phenomena as found in wind and wave currents. Far from being literal theoretical models, Aycock’s sculptures intuitively combine seemingly disjointed ideas into forms that make visual sense. Their forms, together with their placement on Park Avenue, work to disorient the viewer, at least temporarily, to capture the imagination and to challenge perceptions.

Aycock’s art career began in the early 1970s and has included installations at the Museum of Modern Art, San Francisco Art Institute, and the Museum of Contemporary Art, Chicago, as well as installations in many public spaces such as Dulles International Airport, the San Francisco Public Library, and John F. Kennedy International Airport.

JD Talasek

Images courtesy of the artist and Galerie Thomas Schulte and Fine Art Partners, Berlin, Germany. Photos by Dave Rittinger.

ALICE AYCOCK, Cyclone Twist (Park Avenue Paper Chase), Painted aluminum, 27′ high × 15′ diameter, Edition of 2, 2013. The sculpture is currently installed at 57th Street on Park Avenue.

ALICE AYCOCK, Hoop-La (Park Avenue Paper Chase), Painted aluminum and steel, 19′ high × 17′ wide × 24′ long, Edition of 2, 2014. The sculpture is currently installed at 53rd Street on Park Avenue.

It is interesting to consider why such guidance should be necessary at this time. The need emerges from the inadequacies of undergraduate science education, especially in Britain where school and undergraduate courses are so narrowly focused (unlike the continental baccalaureate which at least includes some philosophy). British undergraduates get little training in the philosophy and epistemology of science. We still produce scientists whose conceptions of “fact” and “truth” remain sturdily Logical Positivist, lacking understanding of the provisional, incomplete nature of scientific evidence. Likewise, teaching about the history and sociology of science is unusual. Few learn the skills of accurate scientific communication to nonscientists. These days, science students may learn about industrial applications of science, but few hear about its role in public policy. Many scientists (not just government advisers) appear to misunderstand the relationship between the conclusions they are entitled to draw about real-world problems and the wider issues involved in formulating and testing ideas about how to respond to them. Even respected scientists often put forward purely technocratic “solutions,” betraying ignorance of the social, economic, and ethical dimensions of problems, and thereby devaluing science advice in the eyes of the public and policymakers.

Saltelli and Funtowicz’s helpful checklist contributes to improving this situation, but we need to make radical improvements to the ways we train our young scientists if we are to bridge the science/policy divide more effectively.

MILES PARKER

Centre for Science and Policy

MIKE BITHELL

Department of Geography

University of Cambridge

Cambridge, UK

Wet drones

In “Sea Power in the Robotic Age” (Issues, Winter 2014), Bruce Berkowitz describes an impressive range of features and potential missions for unmanned maritime systems (UMSs). Although he’s rightly concerned with autonomy in UMSs as an ethical and legal issue, most of the global attention has been on autonomy in unmanned aerial vehicles (UAVs). Here’s why we may be focusing on the wrong robots.

The need for autonomy is much more critical for UMSs. UAVs can communicate easily with satellites and ground stations to receive their orders, but it is notoriously difficult to broadcast most communication signals through liquid water. If unmanned underwater vehicles (UUVs), such as robot submarines, need to surface in order to make a communication link, they will give away their position and lose their stealth advantage. Even unmanned surface vehicles (USVs), or robot boats, that already operate above water face greater challenges than UAVs, such as limited line-of-sight control because of a two-dimensional operating plane, heavy marine weather that can interfere with sensing and communications, more obstacles on the water than in the air, and so on.

All this means that there is a compelling need for autonomy in UMSs, more so than in UAVs. And that’s why truly autonomous capabilities will probably emerge first in UMSs. Oceans and seas also are much less active environments than land or air: There are far fewer noncombatants to avoid underwater. Any unknown submarine, for instance, can reasonably be presumed not to be a recreational vehicle operated by an innocent individual. So UMSs don’t need to worry as much about the very difficult issue of distinguishing lawful targets from unlawful ones, unlike the highly dynamic environments in which UAVs and unmanned ground vehicles (UGVs) operate.

Therefore, there are also lower barriers to deploying autonomous systems in the water than in any other battlespace on Earth. Because the marine environment makes up about 70% of Earth’s surface, it makes sense for militaries to develop UMSs. Conflicts are predicted to increase there, for instance, as Arctic ice melts and opens up strategic shipping lanes that nations will compete for.

Of course, UAVs have been getting the lion’s share of global attention. The aftermath images of UAV strikes are violent and visceral. UAVs tend to have sexy/scary names such as Ion Tiger, Banshee, Panther, and Switchblade, while UMSs have more staid and nondescript names such as Seahorse, Scout, Sapphire, and HAUV-3. UUVs also mostly look like standard torpedoes, in contrast to the more foreboding and futuristic (and therefore interesting) profiles of Predator and Reaper UAVs.

For those and other reasons, UMSs have mostly been under the radar in ethics and law. Yet, as Berkowitz suggests, it would benefit both the defense and global communities to address ethics and law issues in this area in advance of an international incident or public outrage—a key lesson from the current backlash against UAVs. Some organizations, such as the Naval Postgraduate School’s CRUSER consortium, are looking at both applications and risk, and we would all do well to support that research.

PATRICK LIN

Visiting Associate Professor

School of Engineering

Stanford University

Stanford, California

Director and Associate Philosophy Professor

Ethics and Emerging Sciences Group

California Polytechnic State University

San Luis Obispo, California

[email protected]

Robots aren’t taking your job

Perhaps a better title for “Anticipating a Luddite Revival” (Issues, Spring 2014) might be “Encouraging a Luddite Revival,” for Stuart Elliot significantly overstates the ability of information technology (IT) innovations to automate work. By arguing that as many as 80% of jobs will be eliminated by technology within as little as two decades, Elliot is inflaming Luddite opposition.

Elliot does attempt to be scholarly in his methodology for predicting the scope of technology-based automation. He reviews past issues of IT scholarly journals to understand technology trends, and he analyzes occupational skills data (O-NET) to assess which occupations are amenable to automation.

But his analysis is faulty on several levels. First, to say that a software program might be able to mimic some human work functions (e.g., finding words in a text) is completely different from saying that the software can completely replace a job. Many information-based jobs involve a mix of routine and nonroutine tasks, and although software-enabled tools might be able to help with the routine tasks, they have a much harder time with the nonroutine ones.

Second, many jobs are not information-based but involve personal services, and notwithstanding progress in robotics, we are a long, long way away from robots substituting for humans in this area. Robots are not going to drive the fire truck to your house and put out a fire anytime soon.

ALICE AYCOCK, Spin-the-Spin (Park Avenue Paper Chase), Painted aluminum, 18′ high × 15′ wide × 20′ long, Edition of 2, 2014. The sculpture is currently installed at 55th Street on Park Avenue.

Moreover, it’s one thing to say that the middle-level O-NET tasks “appear to be roughly comparable to the types of tasks now being described in the research literature”; it’s quite another to give actual examples, other than a few frequently cited ones such as software-enabled insurance underwriting. In fact, the problem with virtually all of the “robots are taking our jobs” claims is that they suffer from the fallacy of composition. Proponents look at the jobs that are relatively easy to automate (e.g., travel agents) and assume that (1) these jobs will all be automated quickly, and (2) all or most jobs fit into this category. Neither is true. We still have over half a million bank tellers (with the Bureau of Labor Statistics predicting an increase over the next 10 years), long after the introduction of ATMs. Moreover, most jobs are actually quite hard to automate: think of maintenance and repair workers, massage therapists, cooks, executives, social workers, nursing home aides, and sales reps, to list just a few.

I am somewhat optimistic that this vision of massive automation may in fact come true, perhaps by the end of the century, for it would bring increases in living standards (with no change in unemployment rates). But there is little evidence for Elliot’s claim of “a massive transformation in the labor market over the next few decades.” In fact, the odds are much higher that U.S. labor productivity growth will clock in well below 3% per year (the highest rate of productivity growth the United States has ever achieved).

ROBERT ATKINSON

President

Information Technology and Innovation Foundation

Washington, DC

[email protected]

Climate change on the right

In Washington, every cause becomes a conduit for special-interest solicitation. Causes that demand greater transfers of wealth and power attract more special interests. When these believers of convenience successfully append themselves to the original cause, it compounds and extends the political support. When it comes to loading up a bill this way, existential causes are the best of all and rightfully should be viewed with greatest skepticism. As Steven E. Hayward notes in “Conservatism and Climate Science” (Issues, Spring 2014), the Waxman-Markey bill was a classic example of special-interest politics run amok.

So conservatives are less skeptical about science than they are about scientific justifications for wealth transfers and losses of liberty. Indeed, Yale professor Dan Kahan found, to his surprise, that self-identified Tea Party members scored better than the population average on a standard test of scientific literacy. Climate policy rightfully elicits skepticism from conservatives, although the skepticism is often presented as anti-science.

Climate activists have successfully and thoroughly confused the climate policy debate. They present the argument this way: (1) Carbon dioxide is a greenhouse gas emitted by human activity; (2) human emissions of carbon dioxide will, without question, lead to environmental disasters of unbearable magnitude; and (3) our carbon policy will effectively mitigate these disasters. The implication swallowed by nearly the entire popular press is that point one (which is true) proves points two and three.

In reality, the connections between points one and two and between points two and three are chains made up of very weak links. The science is so unsettled that even the Intergovernmental Panel on Climate Change (IPCC) cannot choose from among the scores of models it uses to project warming. It hardly matters; the accelerating warming trends that all of them predict are not present in the data (in fact the trend has gone flat for 15 years), nor do the data show any increase in extreme weather from the modest warming of the past century. This provokes the IPCC to argue that the models have not been proven wrong (because their projections are so foggy as to include possible decades of cooling) and that with certain assumptions, some of them predict really bad outcomes.

Not wanting to incur trillions of dollars of economic damage based on these models is not anti-science, which brings us to point three.

Virtually everyone agrees that none of the carbon policies offered to date will have more than a trivial impact on world temperature, even if the worst-case scenarios prove true. So the argument for the policies degenerates to a world of tipping points and climate roulette wheels—there is a chance that this small change will occur at a critical tipping point. That is, the trillions we spend might remove the straw that would break the back of the camel carrying the most valuable cargo. With any other straw or any other camel there would be no impact.

So however unscientific it may seem in the contrived all-or-none climate debate, conservatives are on solid ground to be skeptical.

DAVID W. KREUTZER

Research Fellow in Energy Economics and Climate Change

The Heritage Foundation

Washington, DC

[email protected]

Steven E. Hayward claims that the best framework for addressing large-scale disruptions, including climate change, is building adaptive resiliency. If so, why does he not present some examples of what he has in mind, after dismissing building seawalls, moving elsewhere, or installing more air conditioners as defeatist? What is truly defeatist is prioritizing adaptation over prevention, i.e., the reduction of greenhouse gas emissions.

Others concerned with climate change have a different view. As economist William Nordhaus has pointed out (The Climate Casino, Yale University Press, 2013), in areas heavily managed by humans, such as health care and agriculture, adaptation can be effective and is necessary, but some of the most serious dangers, such as ocean acidification and losses of biodiversity, are unmanageable and require mitigation of emissions if humanity is to avoid catastrophe. This two-pronged response combines cutting back emissions with reactively adapting to those we fail to cut back.

Hayward does admit that our capacity to respond to likely “tipping points” is doubtful. Why then does he not see that mitigation is vital and must be pursued far more vigorously than in the past? Nordhaus has estimated that the cost of not exceeding a temperature increase of 2°C might be 1 to 2% of world income if worldwide cooperation could be assured. Surely that is not too high a price for insuring the continuance of human society as we know it!

ALICE AYCOCK, Twin Vortexes (Park Avenue Paper Chase), Painted aluminum, 12′ high × 12′ wide × 18′ long, Edition of 2, 2014. The sculpture is currently installed at 54th Street on Park Avenue.

ALICE AYCOCK, Maelstrom (Park Avenue Paper Chase), Painted aluminum 12′ high × 16′ wide × 67′ long, Edition of 2, 2014. The sculpture is currently installed between 52nd and 53rd Streets on Park Avenue. Detail opposite.

Hayward states that “Conservative skepticism is less about science per se than its claims to usefulness in the policy realm.” But climate change is a policy issue that science has placed before the nations of the world, and science clearly has a useful role in the policy response, both through the technologies of emissions control and by adaptive agriculture and public health measures. To rely chiefly on “adaptive resiliency” and not have a major role for emissions control is to tie one hand behind one’s back.

EVILLE GORHAM

Regents’ Professor of Ecology Emeritus

University of Minnesota

Minneapolis, Minnesota

Steven E. Hayward should be commended for his thoughtful article, in which he explains why political conservatives do not want to confront the challenge of climate change. Nevertheless, the article did not increase my sympathy for the conservative position, and I would like to explain why.

Hayward begins by explaining why appeals to scientific authority alienate conservatives. Science is not an endeavor that anyone must accept on the word of authority. People should feel free to examine and question scientific work and results. But it doesn’t make sense to criticize science without making an effort to thoroughly understand the science first: the hypotheses together with the experiments that attempt to prove them. What too many conservatives do is deny the science out of hand without understanding it well, dismissing it because of a few superficial objections. I read of one skeptic who dismissed global warming because water vapor is a more powerful greenhouse gas than carbon dioxide. That’s true, but someone who thinks through the argument will understand why that doesn’t make carbon dioxide emissions less of a problem. Climate change is a challenge that we may not agree on how to confront, but that doesn’t excuse any of us from thinking it through carefully.

Hayward points out that “the climate enterprise is the largest crossroads of physical and social science ever contemplated.” That may be true, but conservatives don’t separate the two, and they should. If the science is wrong, they need to explain how the data is flawed, how the theory has not taken all the variables into account, how the statistical analysis is incorrect, or how the data admits of more than one interpretation. If the policy prescriptions are wrong, then they need to explain why these prescriptions will not obtain the results we seek or how they will cost more than the benefits they will provide. Then they need to come up with better alternatives. But too many conservatives don’t separate the science from the policy; they conflate the two. They accuse the scientists of being liberals, and then they won’t consider either the science or the policy. That’s just wrong.

Hayward further explains that conservatives “doubt you can ever understand all the relevant linkages correctly or fully, and especially in the policy responses put forth that emphasize the combination of centralized knowledge with centralized power.” I agree with that, but it shouldn’t stop us from trying to prevent serious problems. Hayward’s statement is a powerful argument for caution, but policy often has unintended consequences, and when we’re faced with a threat, we act. We didn’t understand all the consequences of entering World War II, building the atomic bomb, passing the Civil Rights Act, inventing Social Security, or going to war in Afghanistan, but we did them because we thought we had to. Then we dealt with the consequences as best we could. Climate change should be no different.

The weakest part of Hayward’s article is his charge that “the American scientific community—or at least certain vocal parts of it—is susceptible to the charge that it has become an ideological faction.” Now I’m not sure that scientists are the monolithic bloc Hayward makes them out to be (can he point to a poll?). But even if it is true, it is entirely irrelevant. Scientific work always deserves to be evaluated on its own merits, regardless of whatever personal leanings the investigators might have. Good scientific work is objective and verifiable, and if the investigators are allowing their work to be influenced by their personal biases, that should come out in review, especially if many scientific studies of the same phenomenon are being evaluated. The political leanings of the investigators are a very bad reason for ignoring their work.

Just a couple of other points. Hayward states that “Future historians are likely to regard as a great myopic mistake the collective decision to treat climate change as more or less a large version of traditional air pollution to be attacked with the typical emissions control policies,” but it is hard to see how the problem of greenhouse gas concentrations in the atmosphere can be resolved any other way. We can’t get a handle on global warming unless we find a way to limit emissions of greenhouse gases (or counterbalance the emissions with sequestration, which will take just as much effort). Emissions control is not just a tactic, it is a central goal, just like fighting terrorism and curing cancer are central goals. We might fail to achieve them, but that shouldn’t happen because of lack of trying. We need to be patient and persevere. If we environmentalists are correct, the evidence will mount, and public opinion will eventually side with us. By beginning to work on emissions control now, we will all be in a better position to move quickly when the political winds shift in our favor.

Hayward’s alternative to an aggressive climate policy is what he calls “building adaptive resiliency,” but he is very vague about what that means. Does he mean that individuals and companies should adapt to climate change on their own, or that governments need to promote resiliency? If the latter, how? The point of environmentalists is that even if we are able to adapt to climate change without large loss of life and property, it will be far more expensive then than if we take direct measures to confront the source of the problem—carbon emissions—now. And we really don’t have much time. If the climate scientists are correct, we have only 50 to 100 years before some of the worst effects of climate change start hitting us. Considering the size and complexity of the problem and the degree of cooperation that any serious effort to address climate change will require from all levels of governments, companies, and private individuals, that’s not a lot of time. We had better get moving.

ALICE AYCOCK, Waltzing Matilda (Park Avenue Paper Chase), Reinforced fiberglass, 15′ high × 15′ wide × 18′ long, Edition of 2, 2014. The sculpture is currently installed at 56th Street on Park Avenue.

Hayward earns our gratitude for helping us better understand how conservatives feel about this important issue. Nevertheless, the conservative movement is full of bright and intelligent people who could be contributing many valuable ideas to the climate debate, and they’re not. That’s a real shame.

MICHAEL H. KLEIN

Brooklyn, New York

[email protected]

Does U.S. science still rule?

In “Is U.S. Science in Decline?” (Issues, Spring 2014), Yu Xie offers a glimpse into the plight of early-career scientists. The gravity of the situation cannot be overstated. Many young researchers have become increasingly disillusioned and frustrated about their career trajectory because of declining federal support for basic scientific research.

Apprehension among early-career scientists is rooted in the current fiscal environment. In fiscal year 2013, the National Institutes of Health (NIH) funded roughly 700 fewer grants due to sequestration: the across-the-board spending cuts that remain an albatross for the entire research community. To put this into context, the success rate for grant applications is now one out of six and may worsen if sequestration is not eliminated. This has left many young researchers rethinking their career prospects. In 1980, close to 18% of all principal investigators (PIs) were age 36 and under; the percentage has fallen to about 3% in recent years. NIH Director Francis Collins has said that the federal funding climate for research “keeps me awake at night,” and he echoed this sentiment at a recent congressional hearing: “I am worried that the current financial squeeze is putting them [early-career scientists] in particular jeopardy in trying to get their labs started, in trying to take on things that are innovative and risky.” Samantha White, a public policy fellow for Research!America, a nonprofit advocacy alliance, sums up her former career path as a researcher in two words: “anxiety-provoking.” She left bench work temporarily to support research in the policy arena, describing to lawmakers the importance of a strong investment in basic research.

The funding squeeze has left scientists with limited resources and many of them, like White, pursuing other avenues. More than half of academic faculty members and PIs say they have turned away promising new researchers since 2010 because of the minimal growth of the federal science agencies’ budgets, and nearly 80% of scientists say they spend more time writing grant applications, according to a survey by the American Society for Biochemistry and Molecular Biology. Collins lamented this fact in a USA Today article: “We are throwing away probably half of the innovative, talented research proposals that the nation’s finest biomedical community has produced,” he said. “Particularly for young scientists, they are now beginning to wonder if they are in the wrong field. We have a serious risk of losing the most important resource that we have, which is this brain trust, the talent and the creative energies of this generation of scientists.”

U.S. Nobel laureates relied on government funding early in their careers to advance research that helped us gain a better understanding of how to treat and prevent deadly diseases. We could be squandering opportunities for the next generation of U.S. laureates if policymakers fail to make a stronger investment in medical research and innovation.

MIKE COBURN

Chief Operating Officer

Research!America

Alexandria, Virginia

www.researchamerica.org

@ResearchAmerica

Chinese aspirations

Junbo Yu’s article “The Politics Behind China’s Quest for Nobel Prizes” (Issues, Spring 2014) tells an interesting story about how China is applying its strategy for winning Olympic gold to science policy. The story might fit well with the Western stereotype of Communist bureaucrats, but the real politics are more complex and nuanced.

First of all, let’s get the story straight. The article refers to a recent “10,000 Talent” program run by the organizational department of the Chinese Communist Party. It is a major talent development program aimed at selecting and supporting domestic talent in various areas, including scientists, young scholars, entrepreneurs, best teachers, and skilled engineers. The six scientists referred to in Yu’s article were among the 277 people identified as the program’s first cohort. Although some media reports did describe these scientists as candidates being groomed for Nobel Prizes, relevant officials quickly dismissed those reports as media hype and misunderstanding. For example, three of the first six scientists work in research areas that have no relevance to Nobel Prizes at all.

The real political issue is how to strike a balance between talent trained overseas and talent trained domestically. In 2008, China initiated a “1,000 Talent” program aimed at attracting highly skilled Chinese living overseas to return to China. It was estimated that between 1978 and 2011, more than 2 million Chinese students went abroad to study and that only 36.5% of them returned. Although the 1,000 Talent program has been successful in attracting outstanding scholars back to China, it has also generated some unintended consequences.

As part of the recruitment package, the program gives each returnee a one million RMB (about $160,000) settlement payment. Many returnees can also get special grants for research and a salary comparable to what they were paid overseas. This preferential treatment has generated some concern and resentment among those who were trained domestically: they have to compete hard for research grants, and their salaries are ridiculously low. In an Internet survey conducted in China by Yale University, many people expressed support for government efforts to attract people back from overseas, but felt it was unfair to award benefits based on where people were trained rather than on how they perform.

In response to these criticisms and concerns, the 10,000 Talent program was developed as a way to focus on domestically trained talent. Instead of going through a lengthy selection process, the program tried to integrate existing talent programs run by various government agencies.

Although these programs might be useful in the short run, the best way to attract and keep talented people is to create an open, fair, and nurturing environment for people who love research, and to pay them adequately so that they can have a decent life. It is simple and doable in China now, and in the long run it will be much more effective than the 1,000 Talent and 10,000 Talent programs.

LAN XUE

Professor and Dean

School of Public Policy and Management

Tsinghua University

Beijing, China

[email protected]

The idea of China winning a Nobel Prize in science may seem like a stretch to many who understand the critical success factors that drive world-class research at the scientific frontier. Although new reforms in the science and technology (S&T) sector have been introduced since September 2012, the Chinese R&D system continues to be beset by many deep-seated organizational and management issues that need to be overcome if real progress is to be possible. Nonetheless, Junbo Yu’s article reminds us that sometimes there is more to the scientific endeavor than just the work of a select number of scientists toiling away in some well-equipped laboratory.

If we take into account the full array of drivers underlying China’s desire to have a native son win one of these prestigious prizes, we must place national will and determination at the top of the key factors that will determine Chinese success. Yu’s analysis helps remind us just how important national prestige and pride are as factors motivating the behavior of the People’s Republic of China’s leaders in terms of investment and the commitment of financial resources. At times, I wonder whether we here in the United States should pay a bit more deference to these normative imperatives. In a world where competition has become more intense and the asymmetries of the past are giving way to greater parity in many S&T fields, becoming excited about the idea of “winning” or forging a sense of “national purpose” may not be as distorted as perhaps suggested in the article. Too many Americans take for granted the nation’s continued dominance in scientific and technological affairs, when all of the signals are pointing in the opposite direction. In sports, we applaud the team that is able to muster the team spirit and determination to carve out a key victory. Why not in S&T?

That said, where the Chinese leadership may have gone astray in its somewhat overheated enthusiasm for securing a Chinese Nobel Prize is its failure to recognize that globalization of the innovation process has made the so-called “scientific Lone Ranger” an obsolete idea. Most innovation efforts today are both transnational and collaborative in nature. China’s future success in terms of S&T advancement will be just as dependent on China’s ability to become a more collaborative nation as it will on its own home-grown efforts.

Certainly, strengthening indigenous innovation in China is an appropriate national objective, but as the landscape of global innovation continues to shift away from individual nation-states and toward cross-border, cross-functional networks of R&D cooperation, the path to the Nobel Prize for China may lie in a different direction from the one China seems to have chosen. Remaining highly engaged globally and firmly embedded in the norms and values that drive successful collaborative outcomes will prove to be a faster path to the Nobel Prize for Chinese scientists than will working largely from a narrow national perspective. And it also may be the best path for raising the stature and enhancing the credibility of the current regime on the international stage.

DENIS SIMON

Senior Adviser for China & Global Affairs

Foundation Professor of Contemporary

Chinese Affairs

Arizona State University

Tempe, Arizona

[email protected]

Junbo Yu raises a number of interesting, but complex, questions about the current state of science, and science policy, in China. As a reflection of a broad cultural nationalism, many Chinese see the quest for Nobel Prizes in science and medicine as a worthy major national project. For a regime seeking to enhance legitimacy through appeals to nationalism, the use of policy tools by the Party/state to promote this quest is understandable. Although understandable, it may also be misguided. China has many bright, productive scientists who, in spite of the problems of China’s research culture noted by Yu, are capable of Nobel-quality work. They will be recognized with prizes sooner or later, but this will result from the qualities of mind and habit of individual researchers, not national strategy.

The focus on Nobel Prizes detracts from broader questions about scientific development in 21st century China involving tensions between principles of scientific universalism and the social and cultural “shaping” of science and technology in the Chinese setting. The rapid enhancement of China’s scientific and technological capabilities in recent years has occurred in a context where many of the internationally accepted norms of scientific practice have not always been observed. Nevertheless, through international benchmarking, serious science planning, centralized resource mobilization, the abundance of scientific labor available for research services, and other factors, much progress has been made by following a distinctive “Chinese way” of scientific and technological development. The sustainability of this Chinese way, however, is now at issue, as is its normative power for others.

Over the past three decades, China has faced a challenge of ensuring that policy and institutional design are kept in phase with a rapidly changing innovation system. Overall, policy adjustments and institutional innovations have been quite successful in allowing China to pass through a series of catch-up stages. However, the challenge of moving beyond catch-up now looms large, especially with regard to the development of policies and institutions to support world class basic research, as Yu suggests. Misapprehension in the minds of political leaders and bureaucrats about the nature of research and innovation in the 21st century may also add to the challenge. The common conflation of “science” and “technology” in policy discourse, as seen in the Chinese term keji (best translated as “scitech”), is indicative. So too is the belief that scientific and technological development remains in essence a national project, mainly serving national political needs, including ultimately national pride and Party legitimacy, as Yu points out.

ALICE AYCOCK, Twister 12 feet (Park Avenue Paper Chase), Aluminum, 12′ high × 12′ diameter, Unique edition, 2014. The sculpture is currently installed at 66th Street on Park Avenue.

In 2006, China launched its 15-year “Medium to Long-Term Plan for Scientific and Technological Development” (MLP). Over the past year, the Ministry of Science and Technology has been conducting an extensive midterm evaluation of the Plan. At the same time, as recognized in the ambitious reform agenda of the new Xi Jinping government, the need for significant reforms in the nation’s innovation system, largely overlooked in 2006, has become more evident. There is thus a certain disconnect between the significant resource commitments entailed in the launching of the ambitious MLP and the reality that many of the institutions required for the successful implementation of the plan may not be suitable to the task. The fact that many of the policy assumptions about the role of government in the innovation system that prevailed in 2006 seemingly are not shared by the current government suggests that the politics of Chinese science involve much more than the Nobel Prize quest.

RICHARD P. (PETE) SUTTMEIER

Professor of Political Science, Emeritus

University of Oregon

Eugene, Oregon

[email protected]

Although it is intriguing in linking the production of a homegrown Nobel science laureate to the legitimacy of the Chinese Communist Party, Junbo Yu’s piece just recasts what I indicated 10 years ago. In a paper entitled “Chinese Science and the ‘Nobel Prize Complex’,” published in Minerva in 2004, I argued that China’s enthusiasm for a Nobel Prize in science since the turn of the century reflects the motivations of China’s political as well as scientific leadership. But “various measures have failed to bring home those who are of the calibre needed to win the Nobel Prize. Yet, unless this happens, it will be a serious blow to China’s political leadership. …So to win a ‘home-grown’ Nobel Prize becomes a face-saving gesture.” “This Nobel-driven enthusiasm has also become part of China’s resurgent nationalism, as with winning the right to host the Olympics,” an analogy also alluded to by Yu.

In a follow-up, “The Universal Values of Science and China’s Nobel Prize Pursuit,” forthcoming again in Minerva, I point out that in China, “science, including the pursuit of the Nobel Prize, is more a pragmatic means to achieve the ends of the political leadership—the national pride in this case—than an institution laden with values that govern its practices.”

As we know, in rewarding those who confer the “greatest benefit on mankind,” the Nobel Prize in science embodies an appreciation and celebration of not merely breakthroughs, discoveries, and creativity, but a universal set of values that are shared and practiced by scientists regardless of nationality or culture.

These core values of truth-seeking, integrity, intellectual curiosity, the challenging of authority, and above all, freedom of inquiry are shared by scientists all over the world. It is recognition of these values that could lead to the findings that may one day land their finders a Nobel Prize.

China’s embrace of science dates back only to the May Fourth Demonstrations in 1919, when scholars, disillusioned with the direction of the new Chinese republic after the fall of the Qing Dynasty, called for a move away from traditional Chinese culture to Western ideals; or as they termed it, a rejection of Mr. Confucius and the acceptance of Mr. Science and Mr. Democracy.

However, these concepts of science and democracy differed markedly from those advocated in the West and were used primarily as vehicles to attack Confucianism. The science championed during the May Fourth movement was celebrated not for its Enlightenment values but for its pragmatism, its usefulness.

Francis Bacon’s maxim that “knowledge is power” ran right through Mao Zedong’s view of science after the founding of the People’s Republic in 1949. Science and technology were considered integral components of nation-building; leading academics contributed their knowledge for the sole purpose of modernizing industry, agriculture, and national defense.

The notion of saving the nation through science during the Nationalist regime has translated into current Communist government policies of “revitalizing the nation with science, technology, and education” and “strengthening the nation through talent.” A recent report by the innovation-promotion organization Nesta characterized China as “an absorptive state,” one that adds practical value to existing foreign technologies rather than creating new technologies of its own.

This materialistic emphasis reflects the use of science as a means to a political end to make China powerful and prosperous. Rather than arbitrarily picking possible Nobel Prize winners, the Chinese leadership would do well to apply the core values of science to the nurturing of its next generation of scientists. Only when it abandons cold-blooded pragmatism for a value-driven approach to science can it hope to win a coveted Nobel Prize and ascend to real superpower status.

Also, winning a Nobel Prize is completely different from winning a gold medal at the Olympics. Until the creation of an environment conducive to first-rate research and the nurturing of talent, which cannot be achieved through top-down planning, mobilization, and concentration of resources (the hallmarks of China’s state-sponsored sports program), this Nobel pursuit will continue to vex the Chinese for many years to come.

CONG CAO

Associate Professor and Reader

School of Contemporary Chinese Studies

University of Nottingham

Nottingham, UK

[email protected]

From the Hill – Summer 2014

Details of administration’s proposed FY2015 budget

Officially released March 4, President Obama’s FY2015 budget makes clear the challenges for R&D support currently posed by the Budget Control Act spending caps. With hardly any additional room available in the discretionary budget above FY 2014 levels, and with three-quarters of the post-sequester spending reductions still in place overall, many agency R&D budgets remain essentially constant. Some R&D areas such as climate research and support for fundamental science that have been featured in past budgets did not make much fiscal headway in this year’s request. Nevertheless, the administration has managed to shift some additional funding to select programs such as renewable energy and energy efficiency, advanced manufacturing, and technology for infrastructure and transportation.

An added twist, however, is the inclusion of $5.3 billion in additional R&D spending above and beyond the current discretionary caps that is part of what the administration calls the Opportunity, Growth, and Security Initiative (OGSI). This extra funding would make a significant difference for science and innovation funding throughout government. Congress, however, has shown little interest in embracing it.

Without the OGSI, the president’s proposed FY2015 budget includes a small reduction in R&D funding in constant dollars. Current AAAS estimates place R&D in the president’s request at $136.5 billion (see Table 1). This represents a 0.7% increase above FY 2014 levels but is actually a slight decrease once the projected 1.7% inflation rate is taken into account. It also represents a 3.8% increase above FY 2013 post-sequester funding levels, but after inflation the total R&D budget is almost unchanged from FY 2013.
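A rough way to see how a nominal increase becomes a real decrease, using only the figures quoted above (a back-of-the-envelope sketch, not the AAAS deflator calculation itself):

\[
\text{real change} \approx \frac{1 + 0.007}{1 + 0.017} - 1 \approx -0.010,
\]

that is, roughly a 1% decline in constant dollars, which is the slight decrease noted above.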

R&D at the Department of Defense (DOD) is proposed at $70.8 billion, or 0.3% above FY 2014 levels. This is due to boosts in R&D at the National Nuclear Security Administration (NNSA) that offset cuts in other DOD R&D programs. Nondefense R&D is proposed at $65.7 billion, a 1.2% increase above FY 2014 levels.

Total research funding, which includes basic and applied research, would fall to $65.9 billion, a cut of $1.1 billion or 1.7% below FY 2014 levels, and only about 1.1% above FY 2013 post-sequester levels after inflation. This is in large part due to cuts in defense and National Aeronautics and Space Administration (NASA) research activities, though some NASA research has also been reclassified as development, which pushes the number lower without necessarily reflecting a change in the actual work.

Conversely, development activities would increase by $2.1 billion or 3.2%, due to increases in these activities at DOD, NASA, and the Department of Energy (DOE).

The $56-billion OGSI initiative would include $6.3 billion for R&D, which would mean a 4.6% increase from FY2014.

R&D spending should be understood in the larger context of the federal budget. The discretionary spending (everything except Medicare, Medicaid, and Social Security) share of the budget has shrunk to 30.4% and is projected to reach 24.6% in 2019. R&D outlays as a share of the budget would drop to 3.4%, a 50-year low.

Under the president’s proposal only a few agency R&D budgets, including those at DOE, the U.S. Geological Survey (USGS), the National Institute of Standards and Technology (NIST), and the Department of Transportation (DOT), stay ahead of inflation, but many will be above sequester levels, and total R&D requests increased more than the average for discretionary spending.

TABLE 1. R&D in the FY 2015 budget by agency (budget authority in millions of dollars)

Source: OMB R&D data, agency budget justifications, and agency budget documents. Does not include Opportunity, Growth, and Security Initiative funding (see Table II-20). Note: The projected GDP inflation rate between FY 2014 and FY 2015 is 1.7 percent. All figures are rounded to the nearest million. Changes calculated from unrounded figures.

At DOE, the energy efficiency, renewable energy, and grid technology programs are marked for significant increases, as is the Advanced Research Projects Agency-Energy (ARPA-E); the Office of Science is essentially the same; and nuclear and fossil energy technology programs are reduced.

The proposed budget includes an increase of more than 20% for NASA’s Space Technology Directorate, which seeks rapid public-private technology development. Cuts are proposed in development funding for the next-generation crew vehicle and launch system.

Department of Agriculture extramural research would receive a large increase even as the agency’s intramural research funding is trimmed, though significantly more funding for both is contained within the OGSI.

The DOD science & technology budget, which includes basic and applied research, advanced technology development, and medical research funded through the Defense Health Program, would be cut by $1.4 billion or 10.3% below FY 2014 levels. A 57.8% cut in medical research is proposed, but Congress is likely to restore much of this funding, as it has in the past. The Defense Advanced Research Projects Agency is slated for a small increase.

The National Institutes of Health (NIH) would continue on a downward course. The president’s request would leave the NIH budget about $4.1 billion in constant dollars or 12.5% below the FY 2004 peak. Some of the few areas seeing increased funding at NIH would include translational science, neuroscience and the BRAIN Initiative, and mental health. The additional OGSI funding would nearly, but not quite, return the NIH budget to pre-sequestration levels.

The apparently large cut in Department of Homeland Security (DHS) R&D funding is primarily explained by the reduction in funding for construction of the National Bio and Agro-Defense Facility, a Biosafety Level 4 facility in Kansas. Other DHS R&D activities would be cut a little, and the Domestic Nuclear Detection Office would receive a funding increase.

One bright note in this constrained fiscal environment is that R&D spending fared better than average in the discretionary budget. Looking ahead, however, there is much more cause for concern. Unless Congress takes action, the overall discretionary budget will return to sequester levels in FY2016 and remain there for the rest of the decade.

In brief

Editor’s Journal: Telling Stories

KEVIN FINNERAN

“The universe is composed of stories, not of atoms,” Muriel Rukeyser wrote in her poem “The Speed of Darkness.” Good stories are not merely collections of individual events; they are a means of expressing ideas in concrete terms at human scale. They have the ability to accomplish the apparently simple but rarely achieved task of seamlessly linking the general with the specific, of giving ideas flesh and blood.

This edition of Issues includes three articles that use narrative structure to address important science and technology policy topics. They are the product of a program at Arizona State University that was directed by writer and teacher Lee Gutkind and funded by the National Science Foundation. Begun in 2010, the Think, Write, Publish program assembled two dozen young writers and scientists/engineers to work in teams to prepare articles that use a narrative approach to engage readers in an S&T topic. Lee organized a training program that included several workshops and opportunities to meet with editors from major magazines and book publishers. Several of the writer/expert teams prepared articles that were published in Issues: Mary Lee Sethi and Adam Briggle on the federal Bioethics Commission, Jennifer Liu and Deborah Gardner on the global dimension of medicine and ethics, Gwen Ottinger and Rachel Zurer on environmental monitoring, and Ross Carper and Sonja Schmid on small modular nuclear reactors.

Encouraged by the enthusiasm for the initial experiment, they decided to do it again. A second cohort, again composed of 12 scholars and 12 writers, was selected in 2013. They participated in two week-long workshops. At the first meeting teams were formed, guest editors and writers offered advice, Lee and his team provided training, and the teams began their work. Six months later the teams returned for a second week-long workshop during which they worked intensively revising and refining the drafts they had prepared. They also received advice from some of the participants from the first cohort.

They learned that policy debates do not lend themselves easily to narrative treatments, that collaborative writing is difficult, and that professional writers and scholars approach the task of writing very differently and have sometimes conflicting criteria for good writing. But they persisted, and now we are proud to present three of the articles that emerged from the effort. Additional articles written by the teams can be found at http://thinkwritepublish.org/.

These young authors are trailblazers in the quest to make the public more informed and more engaged participants in science, technology, and health policy debates. They recognize narrative as a way to ground and humanize discussions that are too often conducted in abstract and erudite terms. We know that the outcomes of these debates are anything but abstract, and that it is essential that people from all corners and levels of society participate. Effective stories that inform and engage readers can be a valuable means of expanding participation in science policy development. If you want to see how, you can begin reading on the next page.

What Fish Oil Pills Are Hiding

DAVID SCHLEIFER

ALISON FAIRBROTHER

One Woman’s Quest to Save the Chesapeake Bay from the Dietary Supplement Industry

Julie Vanderslice thought fish were disgusting. She didn’t like to look at them. She didn’t like to smell them. Julie lived with her mother, Pat, on Cobb Island, a small Maryland community an hour south and a world away from Washington, D.C. Her neighbors practically lived on their boats in warm weather, fishing for stripers in the Chesapeake Bay or gizzard shad in shallow creeks shaded by sycamores. Julie had grown up on five acres of woodland in Accokeek, Maryland, across the Potomac River from Mount Vernon, George Washington’s plantation home. The Potomac River wetlands in Piscataway Park were a five-minute bike ride away, on land the federal government had kept wild to preserve the view from Washington’s estate. Her four brothers and three sisters kept chickens, guinea pigs, dogs, cats, and a tame raccoon. They went fishing in the Bay as often as they could. But Julie preferred interacting with the natural world from inside, on a comfortable couch in her living room, where she read with the windows open so she could catch the briny smell of the Bay. “No books about anything slimy or smelly, thank you!” she told her family at holidays.

So it was with some playfulness that Pat’s friend Ray showed up on Julie’s doorstep one afternoon in the summer of 2010 to present her with a book called The Most Important Fish in the Sea. Ray was an avid recreational fisherman, who lived ten miles up the coast on one of the countless tiny inlets of the Chesapeake. The Chesapeake Bay has 11,684 miles of shoreline—more than the entire west coast of the United States; the watershed comprises 64,000 square miles.

“It’s about menhaden, small forage fish that grow up in the Chesapeake and migrate along the Atlantic coast. You’ll love it,” he told her, chuckling as he handed over the book. “But seriously, maybe you’ll be moved by it,” he said, his tone changing. “It says that when John Smith came here in the seventeenth century, there were so many menhaden in the Bay that he could catch them with a frying pan.”

Julie shuddered at the image of so many slippery little fish.

“Now the menhaden are vanishing,” Ray said. “I want you to read this book. I want Delegate Murphy to read this book. And I want the two of you to do something about it.”

Julie was the district liaison for Delegate Peter Murphy, a Democrat representing Charles County in the Maryland House of Delegates. She had started working for Murphy in February 2009 as a photographic intern, tasked with documenting his speeches and meetings with constituents. In her early fifties, Julie was older than the average intern. For ten years, she had sold women’s cosmetics and men’s fragrances at a Washington, D.C. branch of Woodward & Lothrop, until the legendary southern department store chain liquidated in 1995. She had moved to Texas to take a job at another department store in Houston, but it hadn’t felt right. Julie was a Marylander. She needed to live by the Chesapeake Bay. Working in local politics reconnected her to her community, and it wasn’t long before Murphy asked her to join his staff full-time. Now, she worked in his office in La Plata, the county seat, and attended events in the delegate’s stead—like the dedication of a new volunteer firehouse on Cobb Island or the La Plata Warriors high school softball games.

Julie picked up the menhaden book one summer afternoon, pretty sure she wouldn’t make it past the first chapter. She examined the cover, which featured a photo of a small silvery fish with a wide, gaping mouth and a distinctive circular mark behind its eye. “This is the most important fish in the sea?” Julie muttered to herself. She settled back into her sofa and sighed. Her mother was out at a church event, probably chattering away with Ray. Connected to the mainland by a narrow steel-girder bridge, Cobb Island was a tiny spit of land less than a mile long where the Potomac River meets the Wicomico. The island’s population was barely over 1,100. What else was there to do? She turned to the first page and began to read.

For the next few days, The Most Important Fish in the Sea followed Julie wherever she went. She read it out on the porch while listening to the gently rolling waters of Neale Sound, which separated Cobb Island from the mainland. She read it in bed, struggling to keep her eyes open so she could fit in just one more chapter. She finished the book one afternoon just as Pat came through the screen door, arms laden with a bag full of groceries. Pat found Julie standing in the middle of the living room, angrily clutching the book. Pat was dumbfounded. “You don’t like to pick crab meat out of a crab!” she said. “You wear water-shoes at the beach! Here you are all worked up over menhaden!”

Menhaden are a critical link in the Atlantic food chain, and the Chesapeake Bay is critical to the fish’s lifecycle. Menhaden eggs hatch year round in the open ocean, and the young fish swim into the Chesapeake to grow in the warm, brackish waters. Also known colloquially as bunker, pogies, or alewives, they are the staple food for many commercially important predator fish, including striped bass, bluefish, and weakfish, which are harvested along the coast in a dozen different states, as well as for sharks, dolphins, and blue whales. Ospreys, loons, and seagulls scoop menhaden from the top of the water column, where the fish ball together in tight rust-colored schools. As schools of menhaden swim, they eat tiny plankton and algae. As a result of their diet, menhaden are full of nutrient-rich oils. They are so oily that when ravaged by a school of bluefish, for example, menhaden will leave a sheen of oil in their wake.

Wayne Levin

Imagine seeing what you think is a coral reef, only to realize that there is movement within the shape and that it is actually a massive school of fish. That is what happened to Wayne Levin as he swam in Hawaii’s Kealakekua Bay on his way to photograph dolphins. The fish he encountered were akule, the Hawaiian name for big-eyed scad. In the years that followed he developed a fascination with the beauty and synchronicity of these schools of akule, and he spent a decade capturing them in thousands of photographs.

Akule have been bountiful in Hawaii for centuries. Easy to see when gathering in the shallows, the dense schools form patterns, like unfurling scrolls, then suddenly contract into a vortex before unfurling again and moving on. In his introduction to Akule (2010, Editions Limited), a collection of Levin’s photos, Thomas Farber describes a photo session: “What transpired was a dance, dialogue, or courtship of and with the akule….Sometimes, for instance, he faced away from them, then slowly turned, and instead of moving away the school would…come towards him. Or, as he advanced, the school would open, forming a tunnel for him. Entering, he’d be engulfed in thousands of fish.”

Wayne Levin has photographed numerous aspects of the underwater world: sea life, surfers, canoe paddlers, divers, swimmers, shipwrecks, seascapes, and aquariums. After a decade of photographing fish schools, he turned from sea to sky, and flocks of birds have been his recent subject. His photographs are in the collections of the Museum of Modern Art, New York; the Museum of Photographic Arts, San Diego; The Honolulu Museum of Art; the Hawaii State Foundation on Culture and the Arts, Honolulu; and the Mariners’ Museum, Newport News, Virginia. His work has been published in Aperture, American Photographer, Camera Arts, Day in the Life of Hawaii, Photo Japan, and most recently LensWork. His books include Through a Liquid Mirror (1997, Editions Limited), and Other Oceans (2001, University of Hawaii Press). Visit his website at waynelevinimages.com.

Alana Quinn

WAYNE LEVIN, Column of Akule, 2000.

WAYNE LEVIN, Filming Akule, 2006.

For hundreds of years, people living along the Atlantic Coast caught menhaden for their oils. Some scholars say the word menhaden likely derives from an Algonquian word for fertilizer. Pre-colonial Native Americans buried whole menhaden in their cornfields to nourish their crops. They may have taught the Pilgrims to do so, too.

The colonists took things a step further. Beginning in the eighteenth century, factories along the East Coast specialized in cooking menhaden in giant vats to separate their nutrient-rich oil from their protein—the former for use as fertilizer and the latter for animal feed. Dozens of menhaden “reduction” factories once dotted the shoreline from Maine to Florida, belching a foul, fishy smell into the air.

Until the middle of the twentieth century, menhaden fishermen hauled thousands of pounds of net by hand from small boats, coordinating their movements with call-and-response songs derived from African-American spirituals. But everything changed in the 1950s with the introduction of hydraulic vacuum pumps, which enabled many millions of menhaden to be sucked out of the ocean each day—so many fish that companies had to purchase carrier ships with giant holds below deck to ferry the menhaden to shore. According to National Oceanic and Atmospheric Administration records, in the past sixty years, the reduction industry has fished 47 billion pounds of menhaden out of the Atlantic and 70 billion pounds out of the Gulf of Mexico.

Reduction factories that couldn’t keep up went out of business, eliminating the factory noises and fishy smells, much to the relief of the growing number of wealthy homeowners purchasing seaside homes. By 2006, every last company had been bought out, consolidated, or pushed out of business—except for a single conglomerate called Omega Protein, which operates a factory in Reedville, a tiny Virginia town halfway up the length of the Chesapeake Bay. A former petroleum company headquartered in Houston and once owned by the Bush family, Omega Protein continues to sell protein-rich fishmeal for aquaculture, animal feed for factory farms, menhaden oil for fertilizer, and purified menhaden oil, which is full of omega-3 fatty acids, as a nutritional supplement. For the majority of the last thirty years, the Reedville port has landed more fish than any other port in the continental United States by volume.

The company also owns two factories on the shores of the Gulf of Mexico, which grind up and process Gulf menhaden, the Atlantic menhaden’s faster-growing cousin. But Hurricane Katrina in 2005, followed by the 2010 Deepwater Horizon oil disaster in the Gulf of Mexico, forced Omega Protein to rely increasingly on Atlantic menhaden to make up for their damaged factories and shortened fishing seasons in the Gulf—much to the dismay of fishermen and residents along the Atlantic coast.

These days, on a normal morning in Reedville, Virginia, a spotter pilot climbs into his plane just after sunrise to scour the Chesapeake and Atlantic coastal waters, searching for reddish-brown splotches of menhaden. When he spots them, the pilot signals to ship captains, who surround the school with a net, draw it close, and vacuum the entire school into the ship’s hold.

Julie Vanderslice had never seen the menhaden boats or spotter planes, but she was horrified by the description of the ocean carnage documented in The Most Important Fish in the Sea. The author, H. Bruce Franklin, is an acclaimed scholar of American history and culture at Rutgers University, who has written treatises on everything from Herman Melville to the Vietnam War. But he is also a former deckhand who fishes several times a week in Raritan Bay, between New Jersey and Staten Island.

Julie was riveted by a passage in which Franklin describes going fishing one day for weakfish in his neighbor’s boat. Weakfish are long, floppy fish that feed lower in the water column than bluefish, which thrash about on top. Franklin’s neighbor angled his boat toward a chaotic flock of gulls screaming and pounding the air with their wings. The birds were diving into the water and fighting off muscular bluefish to be the first to reach a school of menhaden. The two men had a feeling that weakfish would be lurking below the school of menhaden, attempting to pick off fish from the bottom. But before Franklin and his neighbor could reach the school, one of Omega Protein’s ships sped past, set a purse seine around the menhaden, and used a vacuum pump to suck up hundreds of thousands of fish and all the bluefish and weakfish that had been feeding on them. For days afterward, Franklin observed, there were hardly any fish at all in Raritan Bay.

That moment compelled Franklin to uncover the damage Omega Protein was doing up and down the coast. The company’s annual harvest of between a quarter and a half billion pounds of menhaden had effects far beyond depleting the once-plentiful schools of little fish. Scientists and environmental advocates contended that by vacuuming up menhaden for fishmeal and fertilizer, Omega Protein was pulling the linchpin out of the Atlantic ecosystem: starving predator fish, marine mammals, and birds; suffocating sea plants on the ocean floor; and pushing an entire ocean to the brink of collapse. Despite being published by a small environmental press, The Most Important Fish in the Sea was lauded in the Washington Post, The Philadelphia Inquirer, The Baltimore Sun, and the journal Science. The New York Times discussed it on its opinion pages, citing dead zones in the Chesapeake Bay and Long Island Sound where too few menhaden were left to filter algae out of the water.

After finishing the book, Julie couldn’t get menhaden out of her head. She had to get the book into Delegate Murphy’s hands. She bought a second copy, prepared a two-page summary, and plotted her strategy.

Julie didn’t see the delegate every day because she worked in his district office rather than in Annapolis, home of the country’s oldest state capitol in continuous legislative use. But that summer, Delegate Murphy was campaigning for re-election and was often closer to home. He was scheduled to make an appearance at a local farmers market in Waldorf a few weeks after Julie had finished the book. Waldorf was at the northern edge of Murphy’s district, close enough to Washington that the weekly farmers market would be crowded with an evening rush of commuters on their way home from D.C. But in the late afternoon, the delegate’s staff, decked out in yellow Peter Murphy T-shirts, nearly outnumbered the shoppers browsing for flowers and honey.

Delegate Murphy was at ease chatting with neighbors and shaking hands with constituents. He was tall and thin, with salt-and-pepper hair and lively eyes. Julie recognized his trademark campaign uniform: a blue polo shirt tucked neatly into slacks. He had been a science teacher before entering state politics, and he had a deep, calming voice. As a grandfather to two young children, he knew how to captivate a skeptical audience with a good story. Julie recalled the day she first met him, at a sparsely attended town hall meeting at Piccowaxen Middle School. He struck her immediately as a genuine, thoughtful man on the right side of the issues she cared about. Several months later, she heard that Delegate Murphy was speaking at the Democratic Club on Cobb Island and made a point to attend. Afterward, she waited for him in the receiving line. When it was her turn to speak, Julie asked if he was hiring.

WAYNE LEVIN, Ring of Akule, 2000.

Just a few short years later, Julie felt comfortable enough with Delegate Murphy to propose a mission. Mustering her courage as a band warmed up at the other end of the market, Julie seized her moment. “Delegate Murphy, you have to read this!” she said, pushing the book into his hands. “There’s this fish called menhaden that you’ve never heard of. One company in Virginia is vacuuming millions of them out of the Chesapeake Bay, taking menhaden out of the mouths of striped bass and osprey and bluefish and dolphins and all the other fish and animals that rely on them for nutrients. This is why recreational fishermen are always complaining about how hungry the striped bass are! This is why our Bay ecosystem is so unhinged! One company is taking away all our menhaden,” she declared. “We have to stop them.”

Delegate Murphy peered at her with a trace of a smile. “I’ll read it, Julie,” he said.

For months afterward, Julie stayed late at the office, reading everything she could find about menhaden. She learned that every state along the Atlantic Coast had banned menhaden reduction fishing in state waters—except Virginia, where Omega Protein’s Reedville plant was based, and North Carolina, where a reduction factory had recently closed. (North Carolina would ban menhaden reduction fishing in 2012.) The largest slice of Omega Protein’s catch came from Virginia’s ocean waters and from the state’s portion of the Chesapeake Bay, preventing those fish from swimming north into Maryland’s section of the Bay and south into the Atlantic to populate the shores of fourteen other states along the coast.

Beyond the Chesapeake, Omega Protein’s Virginia-based fleet could pull as many menhaden as they wanted from federal waters, designated as everything between three and two hundred miles offshore, from Maine to Florida. Virginia was a voting member of the Atlantic States Marine Fisheries Commission (ASMFC), the agency that governs East Coast fisheries. But the ASMFC had never taken any steps to limit the amount of fish Omega Protein could lawfully catch along the Atlantic coast. Virginia’s legislators happened to be flush with campaign contributions from Omega Protein.

Julie clicked through articles on fishermen’s forums and coastal newspapers from every eastern state. She read testimony from citizens who described how the decimation of the menhaden population in the Chesapeake and in federal waters had affected the entire Atlantic seaboard. Bird watchers claimed that seabirds were suffering from lack of menhaden. Recreational fishermen cited scrawny bass and bluefish, and wondered whether they were lacking protein-packed menhaden meals. Biologists cut open the stomachs of gamefish and found fewer and fewer menhaden inside. Whale watchers drove their boats farther out to sea in search of blue whales, which used to breach near the shore, surfacing open-mouthed upon oily schools of menhaden. The dead zones in the Chesapeake Bay grew larger, and some environmentalists connected the dots: menhaden were no longer plentiful enough to filter the water as they had in the past. In 2010, the ASMFC estimated that the menhaden population had declined to a record low, and was nearly 90 percent smaller than it had been twenty-five years earlier.

Of course, Omega Protein had its own experts on staff, whose estimates better suited the company’s business interests. At a public hearing in Virginia about the menhaden fishery, Omega Protein spotter pilot Cecil Dameron said, “I’ve flown 42,000 miles looking at menhaden…. I’m here to tell you that the menhaden stock is in better shape than it was twenty years ago, thirty years ago. There’s more fish.”

One humid evening at the end of August, Delegate Murphy held a pre-election fundraiser and rally in his backyard, a grassy spot that sloped down toward the Potomac River. Former Senator Paul Sarbanes stopped by, and campaign staffers brought homemade noodle salads, cheeses, and a country ham. With the election less than two months away, the staff was working overtime, but they had hit their fundraising goal for the day. At the end of the event, as constituents headed to their cars, Delegate Murphy found Julie sitting at one of the collapsible tables littered with used napkins and glasses of melting ice. Julie was accustomed to standing for hours when she worked in the department store, but there was something about fundraising that made her feel like putting her feet up.

WAYNE LEVIN, Circling Akule, 2000.

WAYNE LEVIN, Rainbow Runners Hunting Akule, 2001.

“Great work tonight, Peter,” she said, wearily raising her glass to him. Julie always called him Delegate Murphy in public. But between the two of them, at the end of a long summer afternoon, it was just Peter.

He toasted and sat down beside her. Campaign staffers were clearing wilted chrysanthemums from the tables and stripping off plastic tablecloths. Peter and Julie looked across the lawn at the blue-gray Potomac as the sun began to dip in the sky.

“Listen,” he said. “I think I have an idea for a bill we could do.”

“On?”

“On menhaden.”

Julie put her drink down so quickly it sloshed onto the sticky tablecloth. She leaned forward in disbelief.

“We’ve got to try doing something about this,” Peter said.

Julie put her hand to her mouth and shook her head. “Menhaden reduction fishing has been banned in Maryland since 1931. Omega Protein is in Virginia. How could a bill in Maryland affect fishing there?”

“We don’t have any control over Virginia’s fishing industry, but we can control what’s sold in our state. I got to thinking: what if we introduced a bill that would stop the sale of products made with menhaden?”

“Do you think it would ever pass?” Julie asked.

“If we did a bill, it would first come before the Environmental Matters Committee. I think the chair of the committee would be amenable. At least we can put it out there and let people talk about it.”

Julie was overcome. He didn’t have to tell her how unusual this was. The impetus for new legislation didn’t often come from former interns—or from their fishermen neighbors.

“But I don’t know if we can win this on the environmental issues alone. What about the sport fishermen? Can we get them to come to the hearing?” Peter asked.

Julie began jotting notes on a napkin.

“Can you find out how many tourism dollars Maryland is losing because the striped bass are going hungry?”

“I’ll get in touch with the sport fishermen’s association and see if I can look up the numbers. And I’ll try to find out which companies are distributing products made from menhaden. It’s mostly fertilizer and animal feed. A little of it goes into fish oil pills, too.”

“The funny thing is, my own doctor told me to take fish oil pills a few years ago,” Peter said. He patted Julie’s shoulder and stood to wave to the last of his constituents as they disappeared down the driveway.

Doctors like Peter’s wouldn’t have recommended fish oil if a Danish doctor named Jörn Dyerberg hadn’t taken a trip across Greenland in 1970. Dyerberg and his colleagues, Hans Olaf Bang and Aase Brondum Nielsen, traveled from village to village by dogsled, poking inhabitants with syringes. They were trying to figure out why the Inuit had such a low incidence of heart disease despite eating mostly seal meat and fatty fish. Dyerberg and his team concluded that Inuit blood had a remarkably high concentration of certain types of polyunsaturated fatty acids, a finding that turned heads in the scientific community when it was published in The Lancet in 1971. The researchers argued that those polyunsaturated fatty acids originated in the fish that the Inuit ate and hypothesized that the fatty acids protected against cardiovascular disease. Those polyunsaturated fatty acids eventually came to be known as omega-3 fatty acids.

Other therapeutic properties of fish oil had been recognized long before Dyerberg’s expedition. During World War I, Edward and May Mellanby, a husband-and-wife team of nutrition scientists, found that it cured rickets, a crippling disease that had left generations of European and American children incapacitated, with soft bones, weak joints, and seizures. (The Mellanbys’ research was an improvement on the earlier work of Dr. Francis Glisson of Cambridge University, who, in 1650, advised that children with rickets should be tied up and hung from the ceiling to straighten their crooked limbs and improve their short statures.)

The Mellanbys tested their theories on animals instead of children. In their lab at King’s College for Women in London, in 1914, they raised a litter of puppies on nothing but oat porridge and watched each one come down with rickets. Several daily spoonfuls of cod liver oil reversed the rickets in a matter of weeks. Edward Mellanby was awarded a knighthood for their discovery. Although May had been an equal partner in the research, she wasn’t accorded the equivalent honor. A biochemist at the University of Wisconsin named Elmer McCollum read the Mellanbys’ research and isolated the anti-rachitic substance in the oil, which eventually came to be called vitamin D. McCollum had already isolated vitamin A in cod-liver oil, as well as vitamin B, which he later figured out was, in fact, a group of several substances. McCollum actually preferred the term “accessory food factor” rather than “vitamin.” He initially used letters instead of names because he hadn’t quite figured out the structures of the molecules he had isolated.

WAYNE LEVIN, School of Hellers Barracuda, 1999.

Soon, mothers were dosing their children daily with cod liver oil, a practice that continued for decades. Peter Murphy, who grew up in the 1950s, remembered being forced to swallow the stuff. The pale brown liquid stank like rotten fish, and he would struggle not to gag. Oil-filled capsules eventually supplanted the thick, foul liquid, and cheap menhaden replaced dwindling cod as the source of the oil. Meanwhile, following Dyerberg’s research into the Inuit diet, studies proliferated about the effects of omega-3 fatty acids—which originate in algae and travel up the food chain to forage fish like menhaden and on into the predator fish that eat them.

In 2002, the American Heart Association reviewed 119 of these studies and concluded that omega-3s could reduce the incidence of heart attack, stroke, and death in patients with heart disease. The AHA insisted omega-3s probably had no benefit for healthy people and suggested that eating fish, flax, walnuts, or other foods containing omega-3s was “preferable” to taking supplements. They warned that fish and fish oil pills could contain mercury, PCBs, dioxins, and other environmental contaminants. Nonetheless, they cautiously suggested that patients with heart disease “could consider supplements” in consultation with their doctors.

Americans did more than just “consider” supplements. In 2001, sales of fish oil pills were only $100 million. A 2009 Forbes story called fish oil “one supplement that works.” By 2011, sales topped $1.1 billion. Studies piled up suggesting that omega-3s and fish oil could do everything from reducing blood pressure and systemic inflammation to improving cognition, relieving depression, and even helping autistic children. Omega Protein was making most of its money turning menhaden into fertilizer and livestock feed for tasteless tilapia and factory-farmed chicken. But dietary supplements made for better public relations than animal feed. They put a friendlier, human face on the business, a face Peter and Julie were about to meet.

On a warm afternoon in March 2011, the twenty-four members of the Maryland House of Delegates Environmental Matters Committee filed into the legislature and took their seats. Delegate Murphy sat at the front of the room next to H. Bruce Franklin, author of The Most Important Fish in the Sea, who had traveled from New Jersey to testify at the hearing. Julie Vanderslice chose a spot in the packed gallery, with her neighbor Ray, who brought his copy of Franklin’s book in hopes of getting it signed. Julie brought her own copy, which she had bought already signed, but which she hoped Franklin would inscribe with a more personal message.

The Environmental Matters Committee was the first stop for Delegate Murphy’s legislation. The committee would either endorse the bill for review by the full House of Delegates, strike it down immediately, or send the bill limping back to Peter Murphy’s desk for further review—in which case, it might take years for menhaden to receive another audience with Maryland legislators. If the bill made it to the full House of Delegates, however, it might quickly be taken up for a vote before summer recess. If it passed the House, it was on to the Maryland Senate and, finally, to the Governor’s desk for signature before it became law. It could be voted down at any step along the way, and Julie knew there was a real chance the bill would never make it out of committee.

Julie had heard that Omega Protein’s lobbyists had been swarming the Capitol, taking dozens of meetings with delegates, and that the lobbyists had brought Omega Protein’s unionized fishermen with them. There was nothing like the threat of job loss to derail an environmental bill. Julie bit her thumb and surveyed the gallery.

To her right, rows of seats were filled with a few recreational anglers, conservationists, and scientists, whom Delegate Murphy’s legislative aide had invited to the hearing, but Julie didn’t see representatives from any of the region’s environmental organizations, like the Chesapeake Bay Foundation or the League of Conservation Voters. Delegate Murphy had called Julie on a Sunday to ask her to ask those organizations to submit letters in support of the bill. That type of outreach was not part of her job as district liaison, but she was happy to do it. While the organizations did support the bill in writing, none of them sent anyone to the hearing in person.

Instead, the seats were filled with fishermen from Omega Protein, who wore matching yellow shirts and sat quietly while the vice president of their local union, in a pinstripe suit, leaned over a row of chairs and spoke to them in a hushed voice. At the far side of the room, Candy Thomson, outdoors reporter at The Baltimore Sun, began jotting notes into her pad.

“We’re now going to move to House Bill 1142,” said Democratic Delegate Maggie McIntosh, chair of the Environmental Matters Committee.

As Delegate Murphy spoke, Julie shifted nervously in her seat. The legislators looked confused. She thought she saw one of them riffle through the stack of papers in front of him, as if to remind himself what a menhaden was. Julie wondered how many had even bothered to read the bill before the hearing. But Delegate Murphy knew the talking points backward and forward: the menhaden reduction industry had taken 47 billion pounds of menhaden out of the Atlantic Ocean since 1950. Omega Protein landed more fish, pound for pound, than any other operation in the continental United States. There had never been a limit on the amount of menhaden Omega Protein could legally fish using the pumps that vacuumed entire schools from the sea.

“This bill simply comes out and says that we as a state will no longer participate, regardless of the reason, in the decline of this fish,” he told the committee.

After Peter Murphy finished his opening statement, he and Bruce Franklin began taking questions. One of the delegates held up a letter from the Virginia State Senate. “It says that this industry goes back to the nineteenth century and that the plant this bill targets has been in operation for nearly a hundred years and that some employees are fourth-generation menhaden harvesters.” As she spoke, she paged through letters from the union that represented some of those harvesters and a list of products made from menhaden. “I don’t understand why we would interrupt an industry that has this kind of history, that will affect so many people. In this economy, I think this is the wrong time to take such a drastic approach to this issue.”

Delegate Murphy nodded. “We in Maryland, and particularly in Southern Maryland, grew tobacco for a lot longer than a hundred years,” he said, “but when we realized it was the wrong crop, and that it was killing people, we switched over to other alternatives. And we’re doing that to this day. What we’re saying with this is there are alternatives. You don’t have to fish this fish. This particular company, which happens to be in Virginia, does have alternatives to produce the same products.” He continued, “We have a company here in Maryland that produces the same omega-3 proteins and vitamins, and it uses algae. It grows and harvests algae. And that’s a sustainable resource.”

WAYNE LEVIN, Amberjacks Under a School of Akule, 2007.

WAYNE LEVIN, Great Barracuda Surrounded by Akule, 2002.

Another delegate, his hands clasped in front of him, addressed the chamber. “I’m sympathetic to saving this resource and to managing this resource appropriately,” he said. But, he explained, he had been contacted by one of his constituents, a grandmother whose grandson Austin suffered from what she called “a rare life-threatening illness.” Glancing down at his laptop, he began reading a letter from this worried grandmother. “There is a bill due to be discussed regarding the menhaden fish. These fish supply the omega oils so vital to the Omegaven product that supplies children like Austin with necessary fats through their IV lines. Many children would have died due to liver failure from traditional soy-based fats had these omega-3s in these fish not been discovered. Can you please contact someone from the powers that be in the Maryland government and tell them not to put an end to the use of these fish and their life-sustaining oils.” The delegate closed his laptop. “This is a question from one of my constituents on a life-threatening issue. Can one of the experts address that issue?”

Bruce Franklin tried to explain that there are other sources of omega-3 besides menhaden. Delegate Murphy stepped in and offered to amend the bill to exempt pharmaceutical-grade products. But it was too late. Less than an hour after it had begun, the hearing was over. Delegate Murphy withdrew the bill for “summer study” rather than see it voted down—a likely indicator that the bill would not resurface before the legislature anytime soon, if ever. Delegate McIntosh turned to the next bill on the day’s schedule, and Omega Protein’s spokesperson and lead scientist left the gallery, smiling.

Julie turned to Ray, who was sitting beside her, angrily gripping his copy of The Most Important Fish in the Sea. She wanted to console him but wasn’t sure how to begin. “Your fishermen buddies seem ready to riot in the streets,” Julie said uncertainly, gesturing at the anglers who were huddled together as they walked stiffly toward the foyer. “That story about the kid who’d die without his menhaden oil—that came out of nowhere.”

She looked again at the text of the bill. “A person may not manufacture, sell, or distribute a product or product component obtained from the reduction of an Atlantic menhaden.” It was exactly the kind of forward-thinking bill Maryland needed, and it would have sent a message to the other Atlantic states that menhaden were important enough to fight for. It had been her first real step toward making policy, but now she felt crushed by the legislature’s complete lack of will to preserve one of Maryland’s most significant natural resources. It seemed to her the delegates had acted without any attempt to understand the magnitude of the problem or the benefits of the proposed solution.

All Julie wanted to do was head back down to Cobb Island, stand on the dock, and feel the evening breeze on her face. Instead, she had to drive into the humid chaos of Washington, D.C., to spend two days sightseeing with her sister and her nephews. All weekend long, as her family traipsed from the Lincoln Memorial to the National Gallery to Ford’s Theater, she thought about what had gone wrong at the hearing. Had she and Delegate Murphy aimed too high with their bill? Did the committee members understand the complexity of the ecosystem that menhaden sustained? Even when the facts and figures are clear, sometimes a good story is too compelling. What politician could choose an oily fish over a sick child?

Barely a year after the Environmental Matters Committee hearing in Annapolis, the luster of fish oil pills began to fade. In 2010, environmental advocates Benson Chiles and Chris Manthey had tested for toxic contaminants in fish oil supplements from a variety of manufacturers and found polychlorinated biphenyls, or PCBs, in many of the pills. PCBs, a group of compounds once widely used in coolant fluids and industrial lubricants, were banned in the 1970s because they decreased human liver function and caused skin ailments as well as liver and bile duct cancers. PCBs don’t easily break down in the environment; they remain in waterways like those that empty into the Chesapeake Bay, where they get absorbed by the algae and plankton eaten by fish like menhaden.

WAYNE LEVIN, Pattern of Akule, 2002.

WAYNE LEVIN, Akule Tornado, 2000.

The test results led Chiles and Manthey to file a lawsuit, under California’s Proposition 65, that named supplement manufacturers Omega Protein and Solgar as well as retailers like CVS and GNC for failing to provide adequate warnings to consumers that the fish oil pills they were swallowing with their morning coffee contained unsafe levels of PCBs. In February 2012, Chiles and Manthey reached a settlement with some manufacturers and the trade association that represents them, called the Global Organization for EPA and DHA Omega-3s (GOED), which agreed on higher safety standards for contaminants in fish oil pills.

Meanwhile, in July 2012, The New England Journal of Medicine published a study that assessed whether fish oil pills could help prevent cardiovascular disease in people with diabetes. The 6,281 diabetics in the study who took the pills had heart attacks and strokes in the same numbers as those in the placebo group, and nearly the same number died. Were all those fish-scented burps for naught? A Forbes story asked: “Fish oil or snake oil?”

In September 2012, The Journal of the American Medical Association published even worse news. A team of Greek researchers had analyzed every previous study of omega-3 supplements and cardiovascular disease, and found that omega-3 supplementation did not save lives, prevent heart attacks, or prevent strokes. GOED, the fish oil trade association, was predictably displeased. Its executive director told a supplement industry trade journal, “Given the flawed design of this meta-analysis…, GOED disputes the findings and urges consumers to continue taking omega-3 products.” But the scientific evidence was mounting: not only were fish oil pills full of dangerous chemicals, but they probably weren’t doing much to prevent heart disease, either.

Why did these pills look so promising in 2001 and so unimpressive by 2012? The American Heart Association had always favored dietary sources of omega-3s, like fish and nuts, over pills. Jackie Bosch, a scientist at McMaster University and an author of The New England Journal of Medicine study, speculated that because people with diabetes and heart disease now take so many other medicines—statins, diuretics, ACE inhibitors, and handfuls of other pills—the effect of fish oil may be too marginal to show any measurable benefit.

Julie wasn’t surprised when she heard about the lawsuit. She knew menhaden could soak up chemical contaminants in the waterways. She read news reports about the recent studies on fish oil pills with interest and wondered whether they would give her and Delegate Murphy any ammunition for future efforts to limit the sale of menhaden products in their state. Neither had forgotten about the lowly menhaden.

Delegate Murphy had developed the habit of searching the dietary supplements aisle each time he went to the drugstore, turning the heavy bottles of fish oil capsules in his hands and reading the ingredients. None of the bottles ever listed menhaden. Despite the settlement in the California lawsuit, fish oil manufacturers were not required—and are still not required—to label the types of fish included in supplements, making it difficult for consumers to know whether the pills contained menhaden oil or not. But Delegate Murphy had made it clear he wasn’t ready to take up the menhaden issue again without a reasonable chance of success. Julie didn’t press him on his decision.

Then in December 2012, increasing public pressure about the decline of menhaden finally led to a change. The Atlantic States Marine Fisheries Commission voted to reduce the harvest of menhaden by 20 percent from previous levels, a regulation that would go into effect during the 2013 fishing season. It was the first time any restriction had been placed on the menhaden industry’s operations in the Atlantic, although the cut was far less severe than independent scientists had recommended. To safeguard the menhaden’s ability to spawn without undue pressure from the industry’s pumps and nets, scientists had advised reducing the harvest by 50 to 75 percent from current catch levels. Delegate Murphy and Julie knew 20 percent wasn’t nearly enough to bring the menhaden stocks back up to support the health of the Bay. But it was a start. They liked to think their bill had moved the conversation forward a little bit.

That Christmas, down on Cobb Island, Julie was putting stamps on envelopes for her family’s annual holiday recipe exchange. She addressed one to her brother Jerry in Arkansas. He didn’t usually come back east for the holidays, preferring to fly home in the summer when his sons could fish for croakers off the dock that ran out into the Wicomico River behind Julie’s house. Jerry worked for Tyson Foods, selling chicken to restaurant chains. Julie had asked him once if Tyson fed their chickens with menhaden meal, and Jerry had admitted he wasn’t sure. Whatever the factory-farmed chickens ate, Julie wasn’t taking any chances. After the hearing on the menhaden bill, she became a vegetarian. For Christmas, she was sending her family recipes for eggless egg salad and an easy bean soup.

When she finished sealing the last envelope, Julie pulled on a turtleneck sweater and grabbed her winter coat for the short walk up to the post office. The sky was a pale, dull gray, and it smelled of snow. She had recently read Omega Protein’s latest report to its investors, and as she trudged slowly toward Cobb Island Road, a word from the text popped into her mind. Company executives had repeatedly made the point that Omega Protein was “diversifying.” They had purchased a California-based dietary supplement supplier that sourced pills that didn’t use fish products. They had begun talking about proteins that could be extracted from dairy and turned into nutritional capsules. Could it be that Omega Protein had begun to see the writing on the wall? Maybe they were starting to realize that the menhaden supply was not unlimited—and that advocates like Julie wouldn’t let them take every last one.

As she passed the Cobb Island pier, a few seagulls were circling mesh crab traps that had been abandoned on the dock—traps that brimmed with blue crabs in the summertime. Julie pulled her coat closer around her against the chill. She thought ahead to the summer months, when the traps would be baited with menhaden and checked every few hours by local families, and the ice cream parlor would open to serve the seasonal tourists. By the end of summer, Omega Protein would be winding down its fishing season, and the company would likely have 20 percent fewer fish in its industrial-sized cookers than it had the year before. Would that be enough to help the striped bass and the osprey and the humpback whales? Julie wondered. And the thousands of fishermen whose livelihoods depended upon pulling healthy fish from the Chesapeake Bay? And the families up and down the coast who brought those fish home to eat?

Julie had done a lot of waiting in her time. She had waited her whole life to find a job like the one she had with Delegate Murphy. She had waited for the delegate to get excited about the menhaden. When their bill failed, she had waited for the ASMFC to pass regulations protecting the menhaden. Now she would have to wait a little longer to find out whether the ASMFC’s first effort at limiting the fishery would enable the menhaden population to recover. But there are two kinds of waiting, Julie thought. There’s the kind where you have no agency, and then there’s the kind where you are at the edge of your seat, ready to act at a moment’s notice. Julie felt she could act. And so could Ray, Delegate Murphy, Bruce Franklin, and the sport fishermen, who now cared even more about the oily little menhaden. For now, at least until the end of the fishing season, that had to be enough. They would just have to wait and see.

David Schleifer ([email protected]) is a senior research associate at Public Agenda, a nonpartisan, nonprofit research and engagement organization. Alison Fairbrother ([email protected]) is the executive director of the Public Trust Project.

Final Frontier vs. Fruitful Frontier: The Case for Increasing Ocean Exploration

Every year, the federal budget process begins with a White House-issued budget request, which lays out spending priorities for federal programs. From this moment forward, President Obama and his successors should use this opportunity to correct a longstanding misalignment of federal research priorities: excessive spending on space exploration and neglect of ocean studies. The nation should begin transforming the National Oceanic and Atmospheric Administration (NOAA) into a greatly reconstructed, independent, and effective federal agency. In the present fiscal climate of zero-sum budgeting, the additional funding necessary for this agency should be taken from the National Aeronautics and Space Administration (NASA).

The basic reason is that deep space—NASA’s favorite turf—is a distant, hostile, and barren place, the study of which yields few major discoveries and an abundance of overhyped claims. By contrast, the oceans are nearby, and their study is a potential source of discoveries that could prove helpful for addressing a wide range of national concerns from climate change to disease; for reducing energy, mineral, and potable water shortages; for strengthening industry, security, and defenses against natural disasters such as hurricanes and tsunamis; for increasing our knowledge about geological history; and much more. Nevertheless, the funding allocated for NASA in the Consolidated and Further Continuing Appropriations Act for FY 2013 was 3.5 times higher than that allocated for NOAA. Whatever can be said on behalf of a trip to Mars or recent aspirations to revisit the Moon, the same holds many times over for exploring the oceans; some illustrative examples follow. (I stand by my record: In The Moondoggle, published in 1964, I predicted that there was less to be gained in deep space than in near space—the sphere in which communication, navigation, weather, and reconnaissance satellites orbit—and argued for unmanned exploration vehicles and for investment on our planet instead of the Moon.)

Climate

There is wide consensus in the international scientific community that the Earth is warming; that the net effects of this warming are highly negative; and that the main cause of this warming is human actions, among which carbon dioxide emissions play a key role. Hence, curbing these CO2 emissions or mitigating their effects is a major way to avert climate change.

Space exploration advocates are quick to claim that space might solve such problems on Earth. In some ways, they are correct; NASA does make helpful contributions to climate science by way of its monitoring programs, which measure the atmospheric concentrations and emissions of greenhouse gases and a variety of other key variables on the Earth and in the atmosphere. However, there seem to be no viable solutions to climate change that involve space.

By contrast, it is already clear that the oceans offer a plethora of viable solutions to the Earth’s most pressing troubles. For example, scientists have already demonstrated that the oceans serve as a “carbon sink.” The oceans have absorbed almost one-third of anthropogenic CO2 emitted since the advent of the industrial revolution and have the potential to continue absorbing a large share of the CO2 released into the atmosphere. Researchers are exploring a variety of chemical, biological, and physical geoengineering projects to increase the ocean’s capacity to absorb carbon. Additional federal funds should be allotted to determine the feasibility and safety of these projects and then to develop and implement any that are found acceptable.

Iron fertilization or “seeding” of the oceans is perhaps the most well-known of these projects. Just as CO2 is used by plants during photosynthesis, CO2 dissolved in the oceans is absorbed and similarly used by autotrophic algae and other phytoplankton. The process “traps” the carbon in the phytoplankton; when the organism dies, it sinks to the sea floor, sequestering the carbon in the biogenic “ooze” that covers large swaths of the seafloor. However, many areas of the ocean high in the nutrients and sunlight necessary for phytoplankton to thrive lack a mineral vital to the phytoplankton’s survival: iron. Adding iron to the ocean has been shown to trigger phytoplankton blooms, and thus iron fertilization might increase the CO2 that phytoplankton will absorb. Studies note that the location and species of phytoplankton are poorly understood variables that affect the efficiency with which iron fertilization leads to the sequestration of CO2. In other words, the efficiency of iron fertilization could be improved with additional research. Proponents of exploring this option estimate that it could enable us to sequester CO2 at a cost of between $2 and $30/ton—far less than the cost of scrubbing CO2 directly from the air or from power plant smokestacks—$1,000/ton and $50-100/ton, respectively, according to one Stanford study.
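
To put those per-ton estimates side by side, here is a minimal sketch using only the cost figures quoted above; the one-million-ton quantity is a hypothetical round number chosen purely for scale, not a figure from the study.

```python
# Cost comparison for sequestering CO2, using the per-ton estimates quoted
# in the text. The one-million-ton quantity is hypothetical, for scale only.
TONS_CO2 = 1_000_000

cost_per_ton = {
    "iron fertilization (low)": 2,
    "iron fertilization (high)": 30,
    "smokestack capture (low)": 50,
    "smokestack capture (high)": 100,
    "direct air capture": 1_000,
}

for method, dollars in cost_per_ton.items():
    total_millions = dollars * TONS_CO2 / 1e6
    print(f"{method:>28}: ${total_millions:,.0f} million per million tons of CO2")
```

Even at the high end of the iron-fertilization range, the quoted figures differ from direct air capture by more than a factor of thirty, which is the gap proponents point to.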

Despite these promising findings, there are a number of challenges that prevent us from using the oceans as a major means of combating climate change. First, ocean “sinks” have already absorbed an enormous amount of CO2. It is not known how much more the oceans can actually absorb, because ocean warming seems to be altering the absorptive capacity of the oceans in unpredictable ways. It is further largely unknown how the oceans interact with the nitrogen cycle and other relevant processes.

Second, the impact of CO2 sequestration on marine ecosystems remains underexplored. The Joint Ocean Commission Initiative, which noted in a 2013 report that absorption of CO2 is “acidifying” the oceans, recommended that “the administration and Congress should take actions to measure and assess the emerging threat of ocean acidification, better understand the complex dynamics causing and exacerbating it, work to determine its impact, and develop mechanisms to address the problem.” The Department of Energy specifically calls for greater “understanding of ocean biogeochemistry” and of the likely impact of carbon injection on ocean acidification. Since the mid-18th century, the acidity of the surface of the ocean, measured by the water’s concentration of hydrogen ions, has increased by 30% on average, with negative consequences for mollusks, other calcifying organisms, and the ecosystems they support, according to the Blue Ribbon Panel on Ocean Acidification. Different ecosystems have also been found to exhibit different levels of pH variance, with certain areas such as the California coastline experiencing higher levels of pH variability than elsewhere. The cost worldwide of mollusk-production losses alone could reach $100 billion if acidification is not countered, says Monica Contestabile, an environmental economist and editor of Nature Climate Change. Much remains to be learned about whether and how carbon sequestration methods like iron fertilization could contribute to ocean acidification; it is, however, clearly a crucial subject of study given the dangers of climate change.
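
A quick way to relate the 30% figure to the more familiar pH scale is the definition pH = −log10[H+]; the conversion below is a standard chemistry identity rather than a number taken from the cited reports, so treat it as an illustrative check.

```python
import math

# pH is defined as -log10 of the hydrogen-ion concentration, so a 30% rise
# in [H+] corresponds to a pH drop of log10(1.30).
rise_in_hydrogen_ions = 1.30
delta_pH = -math.log10(rise_in_hydrogen_ions)
print(f"change in surface-ocean pH: {delta_pH:.2f}")  # about -0.11 pH units
```

That drop of roughly 0.1 pH units is the figure commonly cited for the change from pre-industrial surface waters to today.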

Food

Ocean products, particularly fish, are a major source of food for much of the world. People now eat four times as much fish, on average, as they did in 1950. The world’s catch of wild fish reached an all-time high of 86.4 million tons in 1996; although it has since declined, the world’s wild marine catch was still 78.9 million tons in 2011. Fish and mollusks provide an “important source of protein for a billion of the poorest people on Earth, and about three billion people get 15 percent or more of their annual protein from the sea,” says Matthew Huelsenbeck, a marine scientist affiliated with the ocean conservation organization Oceana. Fish can be of enormous value to malnourished people because of their high levels of micronutrients such as vitamin A, iron, zinc, and calcium, as well as healthy fats.

However, many scientists have raised concerns about the ability of wild fish stocks to survive such exploitation. The Food and Agriculture Organization of the United Nations estimated that 28% of fish stocks were overexploited worldwide and a further 3% were depleted in 2008. Other sources estimate that 30% of global fisheries are overexploited or worse. There have been at least four severe documented fishery collapses—in which an entire region’s population of a fish species is overfished to the point of being incapable of replenishing itself, leading to the species’ virtual disappearance from the area—worldwide since 1960, a report from the International Risk Governance Council found. Moreover, many present methods of fishing cause severe environmental damage; for example, the Economist reported that bottom trawling creates up to 15,400 square miles of “dead zone” daily through hypoxia caused by stirred-up phosphorus and other sediments.

There are several potential approaches to dealing with overfishing. One is aquaculture. Marine fish cultivated through aquaculture are reported to cost less than other animal proteins and do not consume limited freshwater supplies. Furthermore, aquaculture has been a stable source of food from 1970 to 2006; that is, it consistently expanded and was very rarely subject to unexpected shocks. From 1992 to 2006 alone, aquaculture expanded from 21.2 to 66.8 million tons of product.
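
For a sense of pace, the 1992 and 2006 totals cited above imply a compound annual growth rate of roughly 8 to 9 percent; the short calculation below shows the arithmetic, with only the two tonnage figures taken from the text.

```python
# Compound annual growth rate implied by the aquaculture totals cited above.
start_mt, end_mt = 21.2, 66.8      # million tons of product, 1992 and 2006
years = 2006 - 1992

cagr = (end_mt / start_mt) ** (1 / years) - 1
print(f"implied growth rate: {cagr:.1%} per year")  # roughly 8.5% per year
```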

Although aquaculture is rapidly expanding—it grew more than 60% from 2000 to 2008—and represented more than 40% of global fisheries production in 2006, a number of challenges require attention if aquaculture is to significantly improve worldwide supplies of food. First, scientists have yet to understand the impact of climate change on aquaculture and fishing. Ocean acidification is likely to damage entire ecosystems, and rising temperatures cause marine organisms to migrate away from their original territory or die off entirely. It is important to study the ways that these processes will likely play out and how their effects might be mitigated. Second, there are concerns that aquaculture may harm wild stocks of fish or the ecosystems in which farmed fish are raised through overcrowding, excess waste, or disease. This is particularly true where aquaculture is devoted to growing species alien to the region in which they are produced. Third, there are few industry standard operating practices (SOPs) for aquaculture; additional research is needed to develop these SOPs, including types and sources of feed for species cultivated through aquaculture. Finally, in order to produce a stable source of food, researchers must better understand how biodiversity plays a role in preventing the sudden collapse of fisheries and develop best practices for fishing, aquaculture, and reducing bycatch.

On the issue of food, NASA is atypically mum. It does not claim it will feed the world with whatever it finds or plans to grow on Mars, Jupiter, or any other place light years away. The oceans are likely to be of great help.

Energy

NASA and its supporters have long held that its work can help address the Earth’s energy crises. One NASA project calls for developing low-energy nuclear reactors (LENRs) that use the weak nuclear force to create energy, but even NASA admits that “we’re still many years away” from large-scale commercial production. Another project envisioned orbiting space-based solar power (SBSP) satellites that would transfer energy wirelessly to Earth. The idea was proposed in the 1960s by Peter Glaser and has since been revisited by NASA; from 1995 to 2000, the agency actively investigated the viability of SBSP. Today, the project is no longer actively funded by NASA, and SBSP remains commercially unviable due to the high cost of launching and maintaining satellites and the challenges of wirelessly transmitting energy to Earth.


Marine sources of renewable energy, by contrast, rely on technology that is generally advanced; these technologies deserve additional research to make them fully commercially viable. One possible ocean renewable energy source is wave energy conversion, which uses the up-and-down motion of waves to generate electrical energy. Potentially usable global wave power is estimated at two terawatts—the equivalent of about 200 large power stations, or about 10% of the entire world’s predicted energy demand for 2020—according to the World Ocean Review. In the United States alone, wave energy is estimated to be capable of supplying fully one-third of the country’s energy needs.

A modern wave energy conversion device, known as Salter’s Duck, was built in the 1970s; it produced electricity at a whopping cost of almost $1/kWh. Since then, wave energy conversion has become vastly more commercially viable. A 2009 report from the Department of Energy listed nine different designs in pre-commercial development or already installed as pilot projects around the world. As of 2013, as many as 180 companies were reported to be developing wave or tidal energy technologies; one device, the Anaconda, produces electricity at a cost of $0.24/kWh. The United States Department of Energy and the National Renewable Energy Laboratory jointly maintain a website that tracks the average cost per kWh of various energy sources; on average, ocean energy overall must cost about $0.23/kWh to be profitable. Some projects have been more successful; the prototype LIMPET wave energy converter currently operating on the coast of Scotland produces energy at a price of $0.07/kWh. For comparison, the average consumer in the United States paid $0.12/kWh for electricity in 2011. Additional research could further reduce these costs.

Other options in earlier stages of development include using turbines to capture the energy of ocean currents. The technology is similar to that used by wind energy; water moving through a stationary turbine turns the blades, generating electricity. However, because water is so much denser than air, “for the same surface area, water moving 12 miles per hour exerts the same amount of force as a constant 110 mph wind,” says the Bureau of Ocean Energy Management (BOEM), a division of the Department of the Interior. (Another estimate from a separate BOEM report holds that a 3.5 mph current “has the kinetic energy of winds in excess of [100 mph].”) BOEM further estimates that total worldwide power potential from currents is five terawatts—about a quarter of predicted global energy demand for 2020—and that “capturing just 1/1,000th of the available energy from the Gulf Stream …would supply Florida with 35% of its electrical needs.”
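To see roughly where such comparisons come from, here is a back-of-the-envelope sketch—an illustrative calculation under assumed densities, not BOEM’s own figures—of the kinetic power carried per square meter of moving water versus moving air:

```python
# Back-of-the-envelope comparison of the kinetic power carried by moving water
# versus moving air. Assumptions (illustrative, not BOEM's figures): seawater
# density ~1025 kg/m^3, air density ~1.2 kg/m^3, and kinetic power per unit
# swept area P/A = 0.5 * rho * v^3.

MPH_TO_MS = 0.44704  # meters per second in one mile per hour

def power_flux_w_per_m2(density_kg_m3: float, speed_mph: float) -> float:
    """Kinetic power per square meter of flow cross-section, in W/m^2."""
    v = speed_mph * MPH_TO_MS
    return 0.5 * density_kg_m3 * v ** 3

water = power_flux_w_per_m2(1025.0, 12.0)  # a 12 mph ocean current
wind = power_flux_w_per_m2(1.2, 110.0)     # a 110 mph wind

print(f"12 mph water: {water / 1000:.0f} kW per square meter")
print(f"110 mph wind: {wind / 1000:.0f} kW per square meter")
```

On these assumptions, the 12 mph current and the 110 mph wind deliver power per unit area of the same order, which is why slow-moving water is such an attractive resource.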

Although these technologies are promising, additional research is needed not only to develop them further but also to adapt them to regional differences. For instance, wave energy conversion technology is suitable only in locations where the waves resemble those for which existing devices were designed and where they carry enough energy to make the endeavor profitable. One study shows that thermohaline circulation—ocean circulation driven by variations in temperature and salinity—varies from area to area, and climate change is likely to alter it in the future in ways that could affect energy generators that rely on ocean currents. Additional research would help scientists understand how to adapt energy technologies to specific environments and how to avoid the potential environmental consequences of their use.

Renewable resources are a particularly attractive ocean energy product; they contribute far less than coal or natural gas to anthropogenic greenhouse gas emissions. However, it is worth noting that the oceans also hold vast reserves of untapped hydrocarbon fuels. Deep-sea drilling technologies remain immature; although it is possible to operate oil rigs in waters 8,000 to 9,000 feet deep, greater depths require specially designed drilling ships that still face significant challenges. Deep-water drilling—drilling at depths of more than 500 feet—is the next big frontier for oil and natural-gas production and is projected to expand offshore oil production by 18% by 2020. One should expect the development of new technologies that would enable drilling for petroleum and natural gas at even greater depths than presently possible and under layers of salt and other barriers.

In addition to developing these technologies, entirely separate lines of research are needed either to mitigate the side effects of their large-scale use or to establish that those effects are small. Although it has recently become possible to drill beneath Arctic ice, the technologies are largely untested. Environmentalists fear that ocean turbines could harm fish or marine mammals and that wave conversion devices could disturb ocean floor sediments, impede the migration of ocean animals, prevent waves from clearing debris, or otherwise harm marine life. Demand has pushed countries to develop technologies to drill for oil beneath ice or in the deep sea without much regard for the safety or environmental concerns associated with oil spills. At present, there is no proven method for cleaning up oil spills in the Arctic, a serious problem that requires additional research if Arctic drilling is to proceed on a larger scale.

More ocean potential

When large quantities of public funds are invested in a particular research and development project, particularly when the payoff is far from assured, it is common for those responsible for the project to draw attention to the additional benefits—“spinoffs”—generated by the project as a means of adding to its allure. This is particularly true if the project can be shown to improve human health. Thus, NASA has claimed that its space exploration “benefit[ted] pharmaceutical drug development” and assisted in developing a new type of sensor “that provides real-time image recognition capabilities,” that it developed an optics technology in the 1970s that now is used to screen children for vision problems, and that a type of software developed for vibration analysis on the Space Shuttle is now used to “diagnose medical issues.” Similarly, opportunities to identify the “components of the organisms that facilitate increased virulence in space” could in theory—NASA claims—be used on Earth to “pinpoint targets for anti-microbial therapeutics.”

Ocean research, as modest as it is, has already yielded several medical “spinoffs.” The discovery of one species of Japanese black sponge, which produces a substance that successfully blocks division of tumorous cells, led researchers to develop a late-stage breast cancer drug. An expedition near the Bahamas led to the discovery of a bacterium that produces substances now being synthesized as antibiotics and anticancer compounds. In addition to these cancer-fighting compounds, chemicals that combat neuropathic pain, treat asthma and inflammation, and reduce skin irritation have been isolated from marine organisms. One Arctic Sea organism alone produced three antibiotics. Although none of the three ultimately proved pharmaceutically significant, current concerns that strains of bacteria are developing resistance to the “antibiotics of last resort” are a strong reason to increase funding for bioprospecting. Additionally, the blood cells of horseshoe crabs contain a chemical—found nowhere else in nature and yet to be synthesized—that can detect bacterial contamination in pharmaceuticals and on the surfaces of surgical implants. Some research indicates that between 10 and 30 percent of horseshoe crabs that have been bled die, and that those that survive are less likely to mate; research into how these creatures can be better protected would be well worth pursuing. Up to two-thirds of all marine life remains unidentified, with 226,000 eukaryotic species already identified and more than 2,000 species discovered every year, according to Ward Appeltans, a marine biologist at the Intergovernmental Oceanographic Commission of UNESCO.

Contrast these discoveries of new species in the oceans with the frequent claims that space exploration will lead to the discovery of extraterrestrial life. For example, in 2010 NASA announced that it had made discoveries on Mars “that [would] impact the search for evidence of extraterrestrial life” but ultimately admitted that it had “no definitive detection of Martian organics.” The discovery that prompted the initial press release—a possible arsenic pathway in metabolism, suggesting that life was theoretically possible under conditions different from those on Earth—was then thoroughly rebutted by a panel of NASA-selected experts. The comparison with ocean science is especially stark when one considers that oceanographers have already discovered real organisms that rely on chemosynthesis—the process of making glucose from water and carbon dioxide by using the energy stored in the chemical bonds of inorganic compounds—living near deep-sea vents at the bottom of the oceans.

The same is true of the search for mineral resources. NASA talks about the potential for asteroid mining, but it will be far easier to find and recover minerals suspended in ocean waters or beneath the ocean floor. Indeed, resources beneath the ocean floor are already being commercially exploited, whereas there is not a near-term likelihood of commercial asteroid mining.

Another major justification cited by advocates for the pricey missions to Mars and beyond is that “we don’t know” enough about the other planets and the universe in which we live. However, the same can be said of the deep oceans. Actually, we know much more about the Moon and even about Mars than we know about the oceans. Maps of the Moon are already strikingly accurate, and even amateur hobbyists have crafted highly detailed pictures of the Moon—minus the “dark side”—as one set of documents from University College London’s archives seems to demonstrate. By 1967, maps and globes depicting the complete lunar surface were produced. By contrast, about 90% of the world’s oceans had not yet been mapped as of 2005. Furthermore, for years scientists have been fascinated by noises originating at the bottom of the ocean, known creatively as “the Bloop” and “Julia,” among others. And the world’s largest known “waterfall” can be found entirely underwater between Greenland and Iceland, where cold, dense Arctic water from the Greenland Sea drops more than 11,500 feet before reaching the seafloor of the Denmark Strait. Much remains poorly understood about these phenomena, their relevance to the surrounding ecosystem, and the ways in which climate change will affect their continued existence.

In short, there is much that humans have yet to understand about the depths of the oceans, and further research could yield important insights about Earth’s geological history and the evolution of humans and society. Addressing these questions matters more than another Mars rover or a space observatory designed to answer highly specific questions of interest mainly to a few dedicated astrophysicists, planetary scientists, and their select colleagues.

Leave the people at home

NASA has long favored human exploration, despite the fact that robots have become much more technologically advanced and that their (one-way) travel poses much lower costs and next to no risks compared to human missions. Still, the promotion of human missions continues; in December 2013, NASA announced that it would grow basil, turnips, and Arabidopsis on the Moon to “show that crop plants that ultimately will feed astronauts and moon colonists and all, are also able to grow on the moon.” However, Martin Rees, a professor of cosmology and astrophysics at Cambridge University and a former president of the Royal Society, calls human spaceflight a “waste of money,” pointing out that “the practical case [for human spaceflight] gets weaker and weaker with every advance in robotics and miniaturisation.” Another observer notes that “it is in fact a universal principle of space science—a ‘prime directive,’ as it were—that anything a human being does up there could be done by unmanned machinery for one-thousandth the cost.” The cost of sending humans to Mars is estimated at more than $150 billion. The preference for human missions persists nonetheless, primarily because NASA believes that human spaceflight is more impressive and will garner more public support and taxpayer dollars, despite the fact that most of NASA’s scientific yield to date, Rees shows, has come from the Hubble Space Telescope, the Chandra X-Ray Observatory, the Kepler space observatory, space rovers, and other missions. NASA relentlessly hypes the bravery of the astronauts and the pioneering aspirations of all humanity despite a lack of evidence that these missions engender any more than a brief high for some.

Ocean exploration faces similar temptations. There have been some calls for “aquanauts,” who would explore the ocean much as astronauts explore space, and for the prioritization of human exploration missions. However, relying largely on robots and remote-controlled submersibles seems much more economical, nearly as effective at investigating the oceans’ biodiversity, chemistry, and seafloor topography, and far safer than sending human agents. In short, it is no more reasonable to send aquanauts to explore the seafloor than it is to send astronauts to explore the surface of Mars.

Several space enthusiasts are seriously talking about creating human colonies on the Moon or, eventually, on Mars. In the 1970s, for example, NASA’s Ames Research Center spent tax dollars to design several models of space colonies meant to hold 10,000 people each. Other advocates have suggested that it might be possible to “terraform” the surface of Mars or other planets to resemble that of Earth by altering the atmospheric conditions, warming the planet, and activating a water cycle. Still others envision using space elevators to ferry large numbers of people and supplies into space in the event of a catastrophic asteroid hitting the Earth. Ocean enthusiasts, for their part, dream of underwater cities to deal with overpopulation and “natural or man-made disasters that render land-based human life impossible.” The Seasteading Institute, Crescent Hydropolis Resorts, and the League of New Worlds have developed pilot projects to explore the prospect of housing people and scientists under the surface of the ocean. However, these projects are prohibitively expensive, and “you can never sever [the surface-water connection] completely,” says Dennis Chamberland, director of one of the groups. NOAA also funded a habitat called Aquarius, built in 1986 by the Navy, although it has since abandoned the project.

If anyone wants to use their private funds for such outlier projects, they surely should be free to proceed. However, for public funds, priorities must be set. Much greater emphasis must be placed on preventing global calamities rather than on developing improbable means of housing and saving a few hundred or thousand people by sending them far into space or deep beneath the waves.

Reimagining NOAA

These select illustrative examples should suffice to demonstrate the great—and heretofore unrealized—promise of intensified ocean research. However, it is not enough simply to inject additional funding into ocean science, funding that can be taken from NASA if the total federal R&D budget cannot be increased. There must also be an agency with a mandate to envision and lead federal efforts to bolster ocean research and exploration the way that President Kennedy and NASA once led space research and “captured” the Moon.

For those who are interested in elaborate reports on the deficiencies of existing federal agencies’ attempts to coordinate this research, the Joint Ocean Commission Initiative (JOCI)—the foremost ocean policy group in the United States and the product of the Pew Oceans Commission and the United States Commission on Ocean Policy—provides excellent overviews. These studies and others reflect the tug-of-war among various interest groups and social values. Environmentalists and those concerned about global climate change, the destruction of ocean ecosystems, declines in biodiversity, overfishing, and oil spills clash with commercial groups and states more interested in extracting natural resources from the oceans, harvesting fish, and using the oceans for tourism. (One observer noted that only 1% of the 139.5 million square miles of the ocean is conserved through formal protections, whereas billions use the oceans “as a ‘supermarket and a sewer.’”) And although these reports illuminate some of the challenges that must be surmounted if the government is to institute a broad, well-funded set of ocean research goals, none of these groups has added significant funds to ocean research, nor have they taken steps to create a NASA-like agency to take the lead in federally supported ocean science.

NOAA is the obvious candidate, but it has been hampered by a lack of central authority and by the existence of many disparate programs, each of which has its own small group of congressional supporters with parochial interests. The result is that NOAA has many supporters of its distinct little segments but too few supporters of its broad mission. Furthermore, Congress micromanages NOAA’s budget, leaving too little flexibility for the agency to coordinate activities and act on its own priorities.

Pulling these pieces together—let alone consolidating the bewildering number of projects—would be difficult under the best of circumstances. Several administrators of NOAA have made significant strides in this regard and should be recognized for their work. However, Congress has saddled the agency with more than 100 ocean-related laws that require it to promote what are often narrow and competing interests. Moreover, NOAA is buried in the Department of Commerce, which is itself considered one of the weaker cabinet agencies. For this reason, some have suggested that it would be prudent to move NOAA into the Department of the Interior—which already includes the United States Geological Survey, the Bureau of Ocean Energy Management, the National Park Service, the U.S. Fish and Wildlife Service, and the Bureau of Safety and Environmental Enforcement—to give NOAA more of a backbone.

Moreover, NOAA is not the only federal agency that deals with the oceans. There are presently ocean-relevant programs in more than 20 federal agencies—including NASA. For instance, the ocean exploration program that investigates deep ocean currents by using satellite technology to measure minute differences in elevation on the surface of the ocean is currently run by NASA, and much basic ocean science research has historically been supported by the Navy, which has lost much of its interest in the subject since the end of the Cold War. (The Navy does continue to fund some ocean research, but at much lower levels than before.) Many of these programs should be consolidated into a Department of Ocean Research and Exploration that would have the authority to do what NOAA has been prevented from doing: namely, direct a well-planned and coordinated ocean research program. Although the National Ocean Council’s interagency coordinating structure is a step in the right direction, it would be much more effective to consolidate authority for managing ocean science research under a new independent agency or a reimagined and strengthened NOAA.

Setting priorities for research and exploration is always necessary, but especially so in the present age of tight budgets. It is clear that the oceans are a little-studied but very promising area for much-enhanced exploration. By contrast, NASA’s projects, especially those dedicated to further exploring deep space and to manned missions and stellar colonies, can readily be cut. More than moving a few billion dollars from the faraway planets to the nearby oceans is called for, however. The United States needs an agency that can spearhead a major drive to explore the oceans—an agency that has yet to be envisioned and created.

What’s My (Cell) Line?

What a strange and useful book this is!

It looks like much ado about not much—just three experiments conducted at zoos on cross-species cloning (in banteng, gaur, and African wildcat). Yet the much-ado is warranted, given the rapid arrival of biotech tools and techniques that may revolutionize conservation with the prospect of precisely targeted genetic rescue for endangered and even extinct species. Carrie Friese’s research was completed before “de-extinction” was declared plausible in 2013, but her analysis applies directly.

First, a note: readers of this review should be aware of two perspectives at work. Friese writes as a sociologist, so expect occasional sentences such as, “Cloned animals are not objects here…. They are ‘figures’ in [Donna] Haraway’s sense of the word, in that they embody ‘material-semiotic nodes or knots in which diverse bodies and meanings coshape one another.’” I write as a proponent of high-tech genetic rescue, being a co-founder of Revive & Restore, a small nonprofit pushing ahead with de-extinction for woolly mammoths and passenger pigeons and with genetic assistance for potentially inbred black-footed ferrets. I’m also the author of a book on ecopragmatism, called Whole Earth Discipline, that Friese quotes approvingly.

Friese is a sharp-eyed researcher. She begins by noting with interest that “in direct contradiction to public enthusiasm surrounding endangered animal cloning, many people in zoos have been rather ambivalent about such technological developments.” Dissecting ambivalence is her joy, I think, because she detects in it revealing indicators of deep debate and the hidden processes by which professions change their mind fundamentally, driven by technological innovation.

The innovation in this case concerns the ability, new in this century, of going beyond same-species cloning (such as with Dolly the sheep) to cross-species cloning. An egg from one species, such as a domestic cow, has its nucleus removed and replaced with the nucleus and nuclear DNA of an endangered species, such as the Javan banteng, a type of wild cow found in Southeast Asia. The egg is grown in vitro to an early-stage embryo and then implanted in the uterus of a cow. When all goes well (it sometimes doesn’t), the pregnancy goes to term, and a new Javan banteng is born. In the case of the banteng, its DNA was drawn from tissue cryopreserved 25 years earlier by San Diego’s Frozen Zoo, in the hope that it could help restore genetic variability to the remaining population of bantengs assumed to be suffering from progressive inbreeding. (At Revive & Restore we are doing something similar with black-footed ferret DNA from the Frozen Zoo.)

Now comes the ambivalence. The cloned “banteng” may have the nuclear DNA of a banteng, but its mitochondrial DNA (a lesser but still critical genetic component found outside of the nucleus and passed on only maternally) comes from the egg of a cow. Does that matter? It sure does to zoos, which see their task as maintaining genetically pure species. Zoos treat cloned males, which can pass along only nuclear DNA to future generations, as valuable “bridges” of pure banteng DNA to the banteng gene pool. But cloned female bantengs, with their baggage of cow mitochondrial DNA ready to be passed to their offspring, are deemed valueless hybrids.

Friese describes this view as “genetic essentialism.” It is a byproduct of the “conservation turn” that zoos took in the 1970s. In this shift, zoos replaced their old cages with immersion displays of a variety of animals looking somewhat as if they were in the wild, and they also took on a newly assumed role as repositories of wildlife gene pools to supplant or enrich, if necessary, populations that are threatened in the wild. (The conservation turn not only saved zoos; it pushed them to new levels of popularity. In the United States, 100 million people a year now visit zoos, wildlife parks, and aquariums.)

But in the 1980s some conservation biologists began moving away from focusing just on species to an expanded concern about whole ecosystems and thus about ecological function. They became somewhat relaxed about species purity. When peregrine falcons died out along the East Coast of the United States, conservationists replaced them with hybrid falcons from elsewhere, and the birds thrived. Inbred Florida panthers were saved with an infusion of DNA from Texas cougars. Coyotes, on their travels from west to east, have been picking up wolf genes, and the wolves have been hybridizing with dogs.

As the costs of DNA sequencing keep coming down, field biologists have been discovering that hybridization is rampant in nature and indeed may be one of the principal mechanisms of evolution, which is said to be speeding up in these turbulent decades. Friese notes that “as an institution, the zoo is particularly concerned with patrolling the boundaries between nature and culture.” Defending against cloned hybridization, zoos think, is defending nature from culture. But if hybridization is common in nature, then what?

Soon enough, zoos will be confronting the temptation of de-extincted woolly mammoths (and passenger pigeons, great auks, and Carolina parakeets, among others). Those thrilling animals could be huge draws, deeply educational, exemplars of new possibilities for conservation. They will also be, to a varying extent, genomic hybrids—mammoths that are partly Asian elephant, passenger pigeons that are partly band-tailed pigeon, great auks that are partly razorbill, Carolina parakeets that are partly sun parakeet. Should we applaud or turn away in dismay? I think that conservation biologists will look for one primary measure of success: Can the revived animals take up their old ecological role and manage on their own in the wild? If not, they are freaks. If they succeed, welcome back.

Friese has written a valuable chronicle of the interaction of wildlife conservation, zoos, and biotech in the first decade of this century. It is a story whose developments are likely to keep surprising us for at least the rest of this century, and she loves that. Her book ends: “Humans should learn to respond well to the surprises that cloned animals create.”

Stewart Brand ([email protected]) is the president of the Long Now Foundation in Sausalito, California.

Archives – Summer 2014

Twister

To create his self-portrait, Twister, Dan Collins, a professor of intermedia in the Herberger Institute School of Art at Arizona State University (ASU), spun on a turntable while being digitally scanned. The data were recorded in 1995, but he had to wait more than five years before he could find a computer with the ability to do what he wanted. He used a customized computer to generate a model based on the data. Collins initially produced a high-density foam prototype of the sculpture, and later created an edition of three bonded marble versions of the work, one of which is in the collection of ASU’s Art Museum.

DAN COLLINS, Twister, 3D laser-scanned figure, castable bonded marble, 84″ high, 1995–2012.

Book Review: Climate Perceptions

Reason in a Dark Time: Why the Struggle against Climate Change Failed—and What It Means for Our Future

by Dale Jamieson. Oxford University Press, New York, 260 pp.

Did climate change cause Hurricanes Katrina and Sandy? Does a cold, snowy winter disprove climate change? As Dale Jamieson says in Reason in a Dark Time, “These are bad questions and no answer can be given that is not misleading. It is like asking whether when a baseball player gets a base hit, it is caused by his .350 batting average. One cannot say ‘yes,’ but saying ‘no’ falsely suggests that there is no relationship between his batting average and the base hit.” Analogies such as this are a major strength of this book, which both distills and extends the thoughtful analysis that Jamieson has been providing for well over two decades.

I’ve been following Jamieson’s work since the early 1990s, when a group at Pacific Northwest National Laboratory began to assess the social science literature relevant to climate change. Few scholars outside the physical sciences had addressed climate change explicitly; Jamieson, a philosopher, had. His publications on ethics, moral issues, uncertainty, and public policy laid down important arguments captured in Human Choice and Climate Change, which I co-edited with Steve Rayner in 1998. And the arguments are still current and vitally important as society contemplates the failure of all first-best solutions regarding climate change: an effective global agreement to reduce greenhouse gas emissions, vigorous national policies, adequate transfers of technology and other resources from industrialized to less-industrialized countries, and economic efficiency, among others.

In Reason in a Dark Time, Jamieson works steadfastly through the issues. He lays out the larger picture with energy and clarity. He takes us back to the beginning, with the history of scientific discoveries about the greenhouse effect and its emergence as a policy concern through the 1992 Earth Summit’s spirit of high hopefulness and the gradual unraveling of those high hopes by the time of the 2009 Copenhagen Climate Change Conference. He discusses obstacles to action, from scientific ignorance to organized denial to the limitations of our perceptions and abilities in responding to “the hardest problem.” He details two prominent but inadequate approaches to both characterizing the problem of climate change and prescribing solutions: economics and ethics. And finally, he discusses doable and appropriate responses in this “dark world” that has so far failed to agree on and implement effective actions that adequately reflect the scope of the problem.

Well, you may say, we’ve seen this book before. There are lots of books (and articles, both scholarly and mainstream) that give the history, discuss obstacles, criticize the ways the world has been trying to deal with climate change, and give recommendations. And indeed, Jamieson himself draws on his own lengthy publication record.

But you should read this book for its insights. If you are already knowledgeable about the history of climate science and international negotiations, you might skim this discussion. (It’s a good history, though.) All readers will gain from examining the useful and clear distinctions that Jamieson draws regarding climate skepticism, contrarianism, and denialism. Put simply, he sees that “healthy skepticism” questions evidence and views while not denying them; contrarianism may assert outlandish views but is skeptical of all views, including its own outlandish assertions; and denialism quite simply rejects a widely believed and well-supported claim and tries to explain away the evidence for the claim on the basis of conspiracy, deceit, or some rhetorical appeal to “junk science.” And take a look at the table and related text that depict a useful typology of eight frames of science-related issues that relate to climate change: social progress, economic development and competitiveness, morality and ethics, scientific and technical uncertainty, Pandora’s box/Frankenstein’s monster/runaway science, public accountability and governance, middle way/alternative path, and conflict and strategy.

Jamieson’s discussions of the “limits of economics” and the “frontiers of ethics” are also useful. Though they tread much-traveled ground, they take a slightly different slant, starting not with the forecast but with the reality of climate change. For instance, the discount rate (how economics values costs in the future) has been the subject of endless critiques, but typically with the goal of coming up with the “right” rate. Jamieson, however, points out that this is a fruitless endeavor, as social values underlie arguments for almost any discount rate. Thus, the discount rate (and other economic tools) is simply inadequate and, moreover, a mere stand-in for the real discussion about how society should plan for the future.

Similarly, his discussion of ethics points out that “commonsense morality” cannot “provide ethical guidance with some important aspects of climate-changing behavior”—so it’s not surprising that society has failed to act on climate change. The basis for action is not a matter of choosing appropriate values from some eternal ethical and moral menu, but of evolving values that will be relevant to a climate-changed world in which we make choices about how to adapt to climate change and whether to prevent further climate change—oh, and about whether or not to dabble in planet-altering geoengineering. Ethical and moral revolutions have occurred (e.g., capitalism’s elevation of selfishness), and climate ethicists are breaking new ground in connecting and moralizing about emissions-producing activities and climate change.

Although Jamieson’s explorations do not provide an antidote to the gloom of our dark time, readers will find much to think about here.

He clearly rebuts the argument, for example, that individual actions do not matter, asserting that “What we do matters because of its effects on the world, but what we do also matters because of its effects on ourselves.” Expanding on this thought, he says: “In my view we find meaning in our lives in the context of our relationships to humans, other animals, the rest of nature, and the world generally. This involves balancing such goods as self-expression, responsibility to others, joyfulness, commitment, attunement to reality and openness to new (often revelatory) experiences. What this comes to in the conduct of daily life is the priority of process over product, the journey over the destination, and the doing over what is done.” To my mind, this sounds like the good life that includes respect for nature, temperance, mindfulness, and cooperativeness.

Ultimately, Jamieson turns to politics and policy. As the terms prevention, mitigation, adaptation, and geoengineering have become fuzzy at best, he proposes a new classification of responses to climate change: adaptation (to reduce the negative effects of climate change), abatement (to reduce greenhouse gas emissions), mitigation (to reduce concentrations of greenhouse gases in the atmosphere), and solar radiation management (to alter the Earth’s energy balance). I agree with Jamieson that we need all of the first three and also that we need to be very cautious about “the category formerly known as geoengineering.”

Most of all, we need to live in the world as is, with all its diversity of motives and potential actions, not the dream world imagined at the Earth Summit held in 1992 in Rio de Janeiro. Jamieson gives us seven practical priorities for action (yes, they’ve been said before, but not often in the real-world context that he sketches). And he offers three guiding principles (my favorite is “stop arguing about what is optimal and instead focus on doing what is good,” with “good” encompassing both practical and ethical elements).

I do have some quarrels with the book, starting with the title. In its fullest form, it is unnecessarily wordy and gloomy. And as Jamieson does not talk much of “reason” in the book (nor is there even a definition of the contested term that I could find), why is it displayed so prominently?

More substantively, the gloom that Jamieson portrays is sometimes reinforced by statements that seem almost apocalyptic, such as, “While once particular human societies had the power to upset the natural processes that made their lives and cultures possible, now people have the power to alter the fundamental global conditions that permitted human life to evolve and that continue to sustain it. There is little reason to suppose that our systems of governance are up to the tasks of managing such threats.” But people have historically faced threats (war, disease, overpopulation, the Little Ice Age, among others) that likely seemed to them just as serious, so statements such as Jamieson’s invite the backlash that asserts, well, here we still are and better off, too.

Then there is the question of the intended audience, which Jamieson specifies as “my fellow citizens and…those with whom I have discussed these topics over the years.” But the literature reviews and the heavy use of citations seem to target a narrower academic audience. I would hope that people involved in policymaking and other decisionmaking would not be put off by the academic trappings, but I have my doubts.

If the book finds a wide audience, our global conversation about climate change could become more fruitful. Those who do read it will be rewarded with much to think about in the insights, analogies, and accessible discussions of productive pathways into the climate-changed future.

Elizabeth L. Malone is a staff scientist at the Joint Climate Change Research Institute, a project sponsored by Pacific Northwest National Laboratory and the University of Maryland.

Little Cell, Big Science: The Rise (and Fall?) of Yeast Research

NIKI VERMEULEN

MOLLY BAIN

Trying to add another chapter to the long history of yeast studies, scientists at the cutting edge of knowledge confront the painful realities of science funding.

Manchester, the post-industrial heart and hub of north England, is known for football fanaticism, the pop gloom and swoon of The Smiths, a constant drizzle of dreary weather, and—to a select few—the collaborative work churning in an off-white university building in the city’s center. Given the town’s love of the pub, perhaps it’s a kind of karma that the Manchester Centre for Integrative Systems Biology (MCISB) has set up shop here: its researchers are devoted to yeast.

Wayne Aubrey, a post-doc in his early thirties, with a windswept foppish bob and round, kind brown eyes, is one of those special researchers. Early on a spring morning in 2012, Wayne moved swiftly from his bike to a small glass security gate outside the Centre. Outdoorsy and originally from Wales, he’d just returned from a snowboarding trip, trading in the deep white of powder-fresh mountains for the new, off-white five-story university outpost looming in front of him.

The building, called the Manchester Interdisciplinary Biocentre (MIB), features an open-plan design, intended to foster collaboration among biochemists, computer scientists, engineers, mathematicians, and physicists. The building also hosts the Manchester Centre—the MCISB—where Wayne and his colleagues work. Established in 2006 and funded by a competitive grant, the MCISB was intended to run for at least ten years, studying life by creating computer models that represent living organisms, such as yeast. At its height in 2008, the multidisciplinary Centre housed about twelve full-time post-docs from a wide array of scientific fields, all working together—an unusual, innovative approach to science. But in the age of budget cuts and changing priorities, the funding was already running out after only six years, leaving the Centre’s work on yeast, the jobs of Wayne’s colleagues, and Wayne’s own career path hanging in the balance.

Nevertheless, for the moment at least, there was work to be done. Wayne climbed three flights of stairs and grabbed his lab coat. Wayne’s background, like that of most of his colleagues at the Centre, is multidisciplinary: before coming to Manchester, he was involved in building “Adam,” a robot scientist that automates some experiments so human scientists don’t have to conduct and run each one. This special background in both biology and computer science landed Wayne the job in the Manchester Centre.

But on this day, his lab work was that of a classic “wet biologist.”

As he unstacked a pile of petri dishes into a neat line, Wayne reported with a quick smile, “We grow a lot of happy cells.” The lab is like a sterile outgrowth of or inspiration for The Container Store: glass and plastic jars, dishes, bottles, tubes, and flasks, all with properly sized red and blue sidekick lids, live on shelves above rows of lab counters. Tape and labeling stickers protrude somewhat clumsily from various drawers and boxes; they get a lot of use. Small plastic bins, like fish tackle boxes with near translucent tops, sit at every angle along the counters. Almost as a necessary afterthought, even in this multidisciplinary center, computer monitors are pushed in alongside of it all, cables endlessly festooned between shelves and tables.

Wayne picked up a flask of solution, beginning the routine of pouring measured amounts into the dishes’ flat disks. “The trick,” he noted, “is to prevent bubbles, so that the only thing visible in the dishes later is yeast.”

Every lab has its own romances and rhythms, its own ideals and intrigues—and its own ritualistic routines of daily prep.

As one of the organisms often used for biology, yeast has been at the lab bench for centuries. You could say it’s the little black dress of biology labs. Biologists keep it handy because, as a simple, single-cell organism, yeast has proven very functional, versatile, and useful for all sorts of parties—if, that is, by “parties,” you mean methodical and meticulous experiments, each an elaborate community effort.

How yeast became the microbiologist’s best friend has everything to do with another party favorite (and big business): alcohol. Industrialists and governments alike paid early biologists and chemists to tinker with the fermentation process to figure out why beer and wine spoiled so easily. The resulting research tells an important story, not only about the development of modern biology, but about the process of scientific advance itself.

In modern labs, yeast is most often put to work for the secrets it can unlock regarding human health. Through new experiments and computer models, systems biologists are mapping how one cell of yeast functions as a living system. Though we know a lot about the different components that make up a yeast cell—genes, proteins, enzymes, etc.—we do not yet know how these different components interact together to comprise living yeast. The researchers working at the Manchester Centre are studying this living system, trying to understand these interactions and make them visible. Their ultimate goal: to create a computer model of yeast that can show how the different elements interact. Their hope is that this model of yeast will shed light on how more complex systems—a heart, say, or a liver, or even a whole human being—function, fail, survive, and thrive.

But a model of yeast would demand and have to make sense of enormous quantities of data. Next to the petri dishes, Wayne set up pipettes, flasks, and a plastic tray, his behavior easy and measured. But this experiment, complicated and time-sensitive, couldn’t be performed solo. Wayne was waiting for Mara Nardelli, his lab partner and fellow yeast researcher. When she arrived, they would begin a quick-paced dance from dish to dish, loading each with sugar to see exactly how quickly yeast eats it. They hoped the experiment would give them clear numerical data about the rate of yeast’s absorption of sugar. If it were to match other findings, then perhaps it might be useful to Wayne and Mara’s other lab mates—fellow systems biologists who were attempting to formulate yeast and its inner workings into mathematical terms and then translate those terms into visual models.

Devising a computerized model of yeast has been the main goal of the Manchester Centre—but after six years, it seems, it’s still a ways off. Farther off, to be honest, than anyone had really expected. Yeast seemed fairly straightforward: it is a small, simple organism contained completely in one cell. And yet, it has proved surprisingly difficult to model. Turns out, life—even for as “simple” an organism as yeast—is very complex.

It seems odd, Wayne conceded, that after hundreds of years of research, we still don’t know how yeast works. “But,” he countered, “yeast remains the most well understood cell in the world. I know more about yeast than about anything else, and there are probably as many yeast biologists in the world as there are genes in yeast—which is six thousand.” The sheer number of genes helped Wayne underscore yeast’s complexity: “To fully map the interactions, you have to look at the ways in which these 6,000 genes interact, so that means (6000×6001)/2, which is over 18 million potential interactions. You cannot imagine!” And the complexity doesn’t stop there, Wayne continued: “Now you have the interactions between the genes, but you would still need to add the interactions of all the other components of the yeast cell in order to create a full model.” That is, you’d have to take into account the various metabolites and other elements that make up a yeast cell.
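For readers who want to check the arithmetic behind Wayne’s figure, a minimal sketch—assuming, as he does, roughly 6,000 genes and counting every unordered pair, a gene paired with itself included—looks like this:

```python
# Minimal sketch of the pairwise-interaction arithmetic Wayne describes.
# Assumption: ~6,000 genes, counting every unordered pair of genes
# (including a gene paired with itself), i.e. n * (n + 1) / 2 combinations.

n_genes = 6000
pairs = n_genes * (n_genes + 1) // 2
print(f"{pairs:,} potential pairwise interactions")  # prints 18,003,000
```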

Despite the travails of the yeast modelers, the field of systems biology still hopes to model even more complex forms of life to revolutionize medicine: British scientist and systems biology visionary Denis Noble has had a team working on a virtual model of the heart for more than twenty years, while a large national team of German researchers, funded by the German federal government, is developing a virtual liver model. Yet if we cannot even understand how a single-cell organism functions as a living system, one might reasonably ask whether we can safely scale up to bigger, more complex organisms. Could we really—as some scientists hope—eventually model a complete human being?

Moreover, will we be able to use these models to improve human health and care? Will systems biology bring us systems medicine? The American systems biologist Leroy Hood, who was recently awarded the prestigious National Medal of Science by President Obama, has called this idea “P4 medicine”—the p’s standing for Predictive, Preventive, Personalized, and Participatory. He imagines that heart and liver models could be “personalized,” meaning that everybody would have his or her own model, based on individual genetic and other biological information. Just as you can now order your own genetic profile (for a price), you would be able to have a model of yourself, which doctors could use to diagnose—or even “predict”—your medical problems. Based on the model, “preventive” measures could also be taken. And it’s not only medical professionals who could help to cure and prevent disease, but the patient, too, could “participate” in this process.

For instance, in the case of heart diseases, a model could more clearly show existing problems or future risks. This could help doctors run specific tests, perform surgery, or prescribe medicine; and patients themselves could also use special electronic devices or mobile phone apps to measure cholesterol, monitor their heartbeat, and adjust eating patterns and physical activity in order to reduce risks. Or a patient could use a model of his or her liver—the organ that digests drugs—to determine what drugs are most effective, what sort of dose to take, and at what time of the day. Healthcare would increasingly become more individually tailored and precise, and people could, in effect, become their own technology-assisted doctors, managing their own health and living longer, healthier lives because of it.

It sounds pretty amazing—and if the systems biologists are right about our ability to build complex models, it could be reality someday. But are they right, or are they overly optimistic? Researchers’ experience with yeast suggests that a personalized model of your liver on ten years of antidepressants and another of your heart recovering from invasive valve repair could be further off than we’d like. They might even be impossible.

It was turning out to be hard enough to model the simple single-cell organism of yeast. But doing so might be the crucial first step, and the yeast researchers at the Centre weren’t ready to give up yet. Far from it. As Wayne finished setting up the last of the petri dishes, Mara walked in. Golden-skinned, good-natured, and outgoing, she was the social center of the MCISB. For a moment, she and Wayne sat at the lab bench, reviewing the preparations for the day’s experiment in a sort of conversational checklist.

Then Mara stood, tucking away her thick Italian curls and looking over the neatly arranged high-tech Tupperware party. She sighed and turned back to Wayne. “Ready?”

People had been using yeast—spooning off its loamy, foamy scum from one bread bowl or wine vat and inserting it in another—for thousands of years before they understood what this seething substance was or what, exactly, it was doing. Hieroglyphs from ancient Egypt already suggested yeast as an essential sidekick for the baker and brewer, but they didn’t delineate its magic—that people had identified and isolated yeast to make bread rise and grape juice spirited was magic enough. As the great anatomist and evolutionary theory advocate Thomas Henry Huxley declared in an 1871 lecture, “It is highly creditable to the ingenuity of our ancestors that the peculiar property of fermented liquids, in virtue of which they ‘make glad the heart of man,’ seems to have been known in the remotest periods of which we have any record.”

All the different linguistic iterations of yeast—gäscht, gischt, gest, gist, yst, barm, beorm, bären, hefe—refer to the same descriptive action and event: to raise, to rise, to bear up with, as Huxley put it, “‘yeasty’ waves and ‘gusty’ breezes.” This predictable, if chaotic and muddy, pulpy process—fermentation—was also known to purify the original grain down to its liquid essence—its “spirit”—which, as Huxley described it, “possesses a very wonderful influence on the nervous system; so that in small doses it exhilarates, while in larger it stupefies.”

Though beer and wine were staples of everyday living for thousands and thousands of years, wine- and beer-making were tough trades—precisely because what the gift of yeast was, exactly, was not clear. Until about 150 years ago, mass spoilage of both commercial and homemade alcoholic consumables was incredibly common. Imagine your livelihood or daily gratification dependent on your own handcrafted concoctions. Now, imagine stumbling down to your cellar on a damp night to fetch a nip or a barrel for yourself, your neighbors, or the local tavern. Instead you’re assaulted by a putrid smell wafting from half of your wooden drums. You ladle into one of your casks and discover an intensely sour or sulfurous brew. In the meantime, some drink has sloshed onto your floor, and the broth’s so rancid, it’s slick with its own nasty turn. What caused this quick slippage into spoilage? This question enticed many an early scientist to the lab bench—in part because funding was at the ready.

In a 2003 article on yeast research in the journal Microbiology, James A. Barnett explains that because fermentation was so important to daily life and whole economies, scientific investigations of yeast began in the seventeenth century and were formalized in the eighteenth century, by chemists—not “natural historians” (as early biologists were called)—who were originally interested in the fermentation process as a series of chemical reactions.

In late eighteenth-century Florence, Giovanni Valentino Fabbroni was part of the first wave of yeast research. Fabbroni—a true Renaissance man who dabbled in politics and electro-chemistry, wrote tomes on farming practices, and helped Italy adapt the metric system—determined that in order for fermentation to begin, yeast must be present. But he also concluded his work by doing something remarkable: Fabbroni categorized yeast as a “vegeto-animal”—something akin to a living organism—responsible for the fermentation process.

Two years later, in 1789 and in France, Antoine Lavoisier focused on fermentation in winemaking, again regarding it as a chemical process. As Barnett explains, “he seem[ed] to be the first person to describe a chemical reaction by means of an equation, writing ‘grape must = carbonic acid + alcohol.’” Lavoisier, who was born into the aristocracy, became a lawyer while pursuing everything from botany to meteorology on the side. At twenty-six, he was elected to the Academy of Sciences, bought into a private firm that collected taxes for the state, and, while working on his own theory of combustion, eventually came to be considered France’s “father of modern chemistry.” France, then the world’s top supplier of wine (today it ranks second, after Italy), needed Lavoisier’s discoveries—and badly, too: the government had to stem the literal and figurative spoiling of its top-grossing industry. But as the revolution took hold, Lavoisier’s fame and wealth implicated him as a soldier of the old regime. Arrested for his role as a tax collector, Lavoisier was tried and convicted as a traitor and decapitated in 1794. The Italian-born mathematician and astronomer Joseph-Louis Lagrange publicly mourned: “It took them only an instant to cut off his head, and one hundred years might not suffice to reproduce its like.”

Indeed, Lagrange was onto something: the new government’s leaders were very quickly in want of scientific help for the wine and spirits industries. In 1803, the Institut de France offered up a medal of pure gold for any scientist who could specify the key agent in the fermenting process. Another thirty years passed before the scientific community had much of a clue—and its discovery tore the community apart.

By the 1830s, with the help of new microscope magnification, Friedrich Kützing and Theodor Schwann, both Germans, and Charles Cagniard-Latour, a Frenchman, independently concluded that yeast was responsible for fermenting grains. And much more than that: these yeasts, the scientists nervously hemmed, um, they seemed to be alive.

Cagniard-Latour focused on the shapes of both beer and wine yeasts, describing their cellular bulbous contours as less like chemical substances and more resembling organisms in the vegetable kingdom. Schwann pushed the categorization even further: upon persistent and continued microscopic investigations, he declared that yeast looks like, acts like, and clearly is a member of the fungi family—“without doubt a plant.” He also argued that a yeast’s cell was essentially its body—meaning that each yeast cell was a complete organism, somewhat independent of the other yeast organisms. Kützing, a pharmacist’s assistant with limited formal training, published extensive illustrations of yeast and speculated that different types of yeast fermented differently; his speculation was confirmed three decades later. From their individual lab perches, each of the three scientists concluded the same thing: yeast is not only alive, but it also eats the sugars of grains or grapes, and this digestion, which creates acid and alcohol in the process, is, in effect, fermentation.

This abrupt reframing of fermentation as a feat of biology caused a stir. Some giants of the field, such as the chemist Justus von Liebig, found it flat-out ridiculous. A preeminent chemistry teacher and theorist, von Liebig proclaimed that if yeast was alive, the growth and integrity of all science was at grave risk: “When we examine strictly the arguments by which this vitalist theory of fermentation is supported and defended, we feel ourselves carried back to the infancy of science.” Von Liebig went so far as to co-publish anonymously (with another famous and similarly offended chemist, Friedrich Wöhler) a satirical journal paper in which yeasts were depicted as little animals feasting on sugar and pissing and shitting carbonic acid and alcohol.

Though he himself did little experimental research on yeast and fermentation, von Liebig insisted that the yeasts were just the result of a chemical process. Chemical reactions could perhaps produce yeast, he allowed, but the yeasts themselves could never be alive, nor active, nor the agents of change.

Von Liebig stuck to this story even after Louis Pasteur, another famous chemist, took up yeast study and eventually became the world’s first famous microbiologist because of it.

These long-term investigations into and disciplinary disputes about the nature of yeast reordered the scientific landscape: the borders between chemistry and biology shifted, giving way to a new field, microbiology—the study of the smallest forms of life.

Back in modern Manchester, Mara and Wayne danced a familiar dance. Behind the lab bench, their arms swirled in clocklike precision as they fed the yeast cells a sugar solution in patterned and punctuated time frames, and then quickly pipetted the yeast into small conical PCR tubes.

Soon, Mara held a blue plastic tray of upside-down conical tubes, which she slowly guided into the analysis machine sitting on top of the lab counter. The machine looked like the part of a desktop computer that houses the motherboard, the processor, the hard drive, its fan, and all sorts of drives and ports. It was the width of at least two of these bulky, boxy system units, and half of it was sheathed in tinted windows. Thick cables and a hose streamed from its backside like tentacles.

Biologists have a favorite joke about yeast: like men, it’s only interested in two things—sex and sugar. Wayne explained, “This is because yeast has one membrane protein, or cell surface receptor, that binds sugar and one that binds the pheromone of the other mating type.” Sugar uptake is what Mara and Wayne have been investigating: the big machine scans, measures, and analyzes how and how quickly the yeast has its way with sugar. Results appear on the front screen, translating the experiment’s results into numerical data and graphs. The results for each set of tubes are cast into a graph, showing the pattern of the yeast cells’ sugar uptake over time. Usually, the graphs show the same types of patterns—lines slowly going up or down—with little variance from graph to graph. If great discrepancies in the patterns emerge, then the scientists usually know something went wrong in the experiment. Mara and Wayne and the Centre’s “dry biologists” (those who build mathematical models of yeast with computers) hoped that understanding how yeast regulates its sugar uptake would help them better understand how the cells grow. Yeast cells grow quickly and can be seen as a proxy for human cell generation because processes in both cells are similar. So much about yeast, Wayne explained, is directly applicable to human cells. If we know more about how yeast cells work, we’ll have a better sense of how human cells function or malfunction. Understanding the development and growth of yeast cells can be translated to growth in healthy human cells, as well as in unhealthy human cells—like malignant cancer cells.
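
The consistency check that decides whether a run can be trusted amounts to comparing replicate curves against one another. The sketch below is only a loose illustration of that logic, with made-up readings; the Centre’s actual analysis is done by the instrument’s own software and the dry biologists’ models.

# A loose illustration of a replicate-consistency check on sugar-uptake curves.
# The readings are hypothetical; they stand in for the fraction of sugar remaining
# in solution at successive time points.
replicate_runs = [
    [1.00, 0.82, 0.65, 0.51, 0.40],  # previous experiment 1
    [1.00, 0.80, 0.66, 0.50, 0.41],  # previous experiment 2
    [1.00, 0.95, 0.90, 0.88, 0.85],  # today's run
]

tolerance = 0.10  # largest spread between replicates we are willing to accept

for t, readings in enumerate(zip(*replicate_runs)):
    spread = max(readings) - min(readings)
    if spread > tolerance:
        print(f"Time point {t}: spread of {spread:.2f} exceeds tolerance; "
              "something likely went wrong in one of the runs.")

A check of roughly this kind is why the large discrepancy Mara spots later in the afternoon is enough, on its own, to send the experiment back to the bench.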

Mara had been at the lab working with yeast for a long time. In many ways, she had been with the Centre since before it even became the Centre. Fourteen years earlier, in 1998, Mara was wooed away from her beloved Italy for a post-doc in Manchester. Even though she moved here reluctantly from Puglia, a particularly sweet spot on the heel of Italy’s boot, she became a Mancunian—an inhabitant of Manchester. Her childhood sweetheart followed her to Manchester, and together they had a son, who himself wound up at the University of Manchester. Mara even wrote a blog about Manchester living for fellow Britain-bound Italians.

Mara’s research background was in biochemistry. She obtained her PhD from the University of Naples Federico II and University of Bari, and then continued in Bari as a Fellow of the Italian National Research Council (CNR), working on gene expression in human and rat tumors. The professor with whom she worked as a post-doc in Manchester, John McCarthy, was one of the key minds behind the MIB, so she’s been with the Manchester Interdisciplinary Biocentre since its start in 2006. Within the MIB, she became part of the Manchester Centre for Integrative Systems Biology (MCISB), which was set up by Professor Douglas Kell, who—thanks in part to Manchester’s tradition in yeast research and its dedication to the new biosciences—won a national funding competition to create a center to make a model of yeast in the computer. The MCISB quickly attracted new professors and researchers. The core group of scientists working with yeast, made up largely of twenty- and thirty-somethings, consisted of both “wet” scientists, who experiment with life in the lab, and “dry” scientists, who work behind computers, building and revising models based on the results from the wet scientists’ experiments. Mara supported its teaching program, mentored PhD students, and assisted post-docs like Wayne, helping them conduct experiments properly.

The core group of MCISB’s researchers shared, essentially, one large office not far from the lab space. This was intentional; the building’s architects had created a space that would foster innovation and discovery in biology. As Wayne described it, “It’s a very co-ed sort of approach to the building. You encounter and bump into people more frequently because of the layout of the building. You are more eager to go and speak to people, and ask them, you know, how do I do this, how do I use that, or who belongs to this piece of equipment.”

Over the years, instead of toiling away in separate labs under separate professors, the yeast researchers—working in one big room together on one unified project—felt the uniqueness of both their endeavor and community. Thursday afternoons, the team would often go out to Canal Street, in Manchester’s “gay village,” for a couple of beers. Friendships developed. Two of them even fell in love and got married. “We became a big family,” said one of them, and the others all agreed.

Beginning the long wait for their results, Mara and Wayne cleaned up the bench in an easy quiet, gathering up used petri dishes and pipettes, and nodding to each other as they went.

Louis Pasteur liked his lab in Arbois.

Unlike most nineteenth-century laboratories of world-class scientists, the Arbois lab was small and light and simple. Long, fine microscopes were pedestaled on clean, sturdy wooden tables. While working on his lab logs, Pasteur, whose neat beard across his broad face accentuated the stern downward pull of the corners of his mouth, could sit on a bowed-back chair and look out onto pastoral rolling hills, speckled with vineyards. The lab also had the great advantage of being near his family home.

Pasteur was born in Dole, in the east of France, in 1822, but when he was about five, his father moved the family south to Arbois to rent a tannery—a notoriously messy and smelly trade. The area was known for its yellow and straw wines and its perch on the Cuisance River. Pasteur spent his childhood there.

Arbois is where, as a child, Pasteur declared he wanted to be an artist. It’s where he moved back from a Parisian boarding school at the age of sixteen, declaring he was homesick. And it’s where he’d return to spend almost every summer of his adult life. Eventually, Pasteur would bury his mother, father, and three of his children—two of whom died of typhoid fever before they were ten—in Arbois. And it was in Arbois—not in his lab at the prestigious École normale supérieure in Paris, nor in the lab at his university post in Lille—where he bought a vineyard and set up a lab to test his initial ideas about wine and its fermentation.

Before Pasteur developed—which is to say, patented and advocated, just like a twenty-first-century entrepreneurial scientist—“pasteurization” as a way of reducing harmful bacteria in foods and beverages, and before he introduced (and campaigned for) his “germ theory of disease,” which led him to develop the rabies vaccine, Pasteur first worked as a chemist on yeast, specifically researching the fermentation process in wine and spirits.

During the Napoleonic wars at the beginning of the nineteenth century, France’s alcohol industry was dangerously imperiled: Lavoisier, the leading scientist of fermentation, had been decapitated, and Britain had cut off France’s supply of cane sugar from the West Indies. Not only were French beer-producers and winemakers, who had wheat and grapes aplenty, still struggling with spoiling yields, but now the spirits-makers had no sugar to wring into hard alcohol. So, in a serious fix, France began cultivating sugar from beets instead. This helped, but forty to fifty years later, when Lille had become France’s capital of beet production, spirits-producers and winemakers alike were still struggling with spoilage; nobody in the alcohol industry knew how to contain or control fermentation.

Lille also happened to be the place where Pasteur worked as a chemistry professor. When one of Pasteur’s students introduced his professor to his father, a spirits-man with fermentation woes, Pasteur suddenly had access and funding to get up close and personal with yeast. He began to watch and parse apart its fermentation, quickly concluding in an 1860 paper that the Berliner Theodor Schwann had been correct decades earlier: yeast was a microbe. In short: alive. He also argued that yeast was essential to fermentation: its “vital activity,” he maintained, caused fermentation to both begin and end.

Yeast has operated a bit like an oracle over the past two hundred-plus years for many a scientist. It didn’t only convert sugars to alcohol; it also converted Pasteur from chemist to biologist.

More specifically, Pasteur became a microbiologist. The resolution of the discipline-wide fight over the nature of yeast—particularly whether or not it was “vital,” that is, “living”—helped produce two new fields: microbiology and biochemistry. It awakened the scientific community to new possibilities and questions: what other kind of life happens on a small scale, and what can be said about the chemistry of life?

Though Pasteur was catching all sorts of flak over it, by the early 1860s, his fermentation work also caught the attention of an aide to the emperor. The aide was increasingly concerned about the bad rap France’s chief export was accumulating across Europe. If yeast was the key actor in fermenting all alcohol, was it at all related to what most vintners at the time thought damaged and spoiled their wines—what they called l’amer, or “wine disease”? Could it be that yeast was both creator and culprit of this disease?

With a presidential commission at his back (Napoleon III was both the last emperor and first president of France), Pasteur set out on a tour of wineries across France. Though it may have been during this sojourn that Pasteur spun the line “A bottle of wine contains more philosophy than all the books in the world,” a drunken holiday this was not. Pasteur solemnly reported back to the crown: “There may not be a single winery in France, whether rich or poor, where some portions of the wine have not suffered greater or lesser alteration.”

And with this, his initial fieldwork completed, Pasteur set up shop in his favorite winery region, Jura, home to, of course, Arbois.

With his light brown eyes, framed by an alert brow and well-earned bags, Pasteur alternately gazed out the window of his rustic laboratory and then down through one of his long white microscopes. Again and again, he watched yeast cavort with grape juice in its fermentation dance. But he knew there was at least one other big player whose influence he had yet to understand fully: air.

Excluding air from the party and allowing it in methodically, he found that exposing yeast and wine to too much air inevitably invites in airborne bacteria, which break down the alcohol into acid, resulting in vinegar. (With one eye on knowledge and one on practical application, Pasteur quickly passed this information on to the vinegar industry.) Air allowed in too much riffraff. In order to keep wine fine, the event had to remain exclusive—or you needed some kind of keen agent to kill the interlopers systematically. It didn’t take Pasteur too long to identify this discriminating friend: heat. Heating the wine, slowly, to about 120°F would kill the bacteria without destroying the taste of the wine.

Vintners at first found this idea near sacrilege. Many resisted, but after their competitors who adopted the method had bigger, better yields, they quickly followed suit. In fact, this procedure not only revolutionized winemaking and beer-brewing, saving France’s top export and industry, but also marked the beginning of the pasteurization craze and of Pasteur’s further work on microorganisms as the germs that transmit infectious disease. Science, profit-making, and improved public health turned out to be mutually reinforcing, each propelling the other forward.

Mara walked over to the analysis machine to check the progress of the experiment. Points and curves had begun to appear on the screen—each yeast cell’s inner workings translated as numbers and lines.

But Mara was not happy. Comparing these results with the results of two previous experiments, she saw that the difference was big—too big. “Something must have gone wrong,” she said.

She beckoned Wayne over, and he quickly agreed. “No, this does not look how it should.”

As Pasteur could attest, those who work in labs, wearing white coats, know that experiments often do not work out. It is certainly not the sign of a bad scientist, but it does make lab work tedious. In a speech he gave in his birthplace, Dole, Pasteur spoke of his father’s influence on his own lab-bound life: “You, my dear father, whose life was as hard as your trade, you showed me what patience can accomplish in prolonged efforts. To you I owe my tenacity in daily work.”

Getting things right—isolating a potential discovery, testing it, and retesting it—often requires endless attempts, dogged persistence, and the ability to endure a lot of cloudy progress. Wayne contextualized these results thus: “This happens all the time. There is a lot of uncertainty. [It’s as though] failed experiments do not exist: if experiments don’t work, they’re never published, so you don’t know. So you have to reinvent the wheel yourself and develop your own knowledge about what works and what does not. Molecular biology is not like the combustion engine—where billions of pounds have been spent to understand the influence of every parameter and variable. There are still many unknowns in biology, and established methods do not work in every instance.”

Wayne and Mara were in good company, though, which provided some comfort. Working at the Centre, they were not only a part of a community of like-minded scientists, but also a part of a global network of scientists, all working on the modeling of yeast. In 2007, the MCISB hosted and organized a “yeast jamboree,” a three-day all-nighter—a Woodstock for yeast researchers from around the world. The jamboree resulted in a consensus on a partial model of yeast (focused on its metabolism) and a paper summarizing the jamboree’s findings, which have been cited by fellow systems biology researchers more than three hundred times to date. The “yeast jamboree” was so productive that it inspired another jamboree conference—this one focused on the human metabolic system.

But despite the jamboree’s high-profile success, yeast’s unexpected level of complexity has been a source of frustration for researchers trying to model the whole of it precisely. Three years ago, the Centre’s yeast team started to address this challenge by revising their approach: instead of looking at all of yeast’s genes and determining which proteins each makes and what activity these proteins perform, the team began, first, to identify each activity and, then, research the mechanism behind it. But even with this revised tactic and the doubling down of efforts, a complete model of yeast was not yet done, and time and funding were running out. The promise of extending the grant for another five years was broken when the University, after the global financial crunch, reoriented its priorities. As a result, the funding was almost gone.

“Well,” Mara continued. “These results are clearly not what we are looking for. We have to do it all over again. When do we have time?”

A year later, on a warm evening in June 2013, Wayne was again working behind the same lab bench.

As much as he enjoyed his work, he could imagine doing other things than sitting in a lab until 10 p.m.: “Read a book, sit in the sun, go to the pub,” he shrugged then gave a wee grin. “You know, have a family.”

Wayne was not working on yeast at the moment, however; he was running a series of enzymology experiments on E. coli for a large European project, trying to finish the results in time for a meeting in Amsterdam. The European grant was covering his salary. That was the only reason he still had a job at MCISB. Other post-docs of the Centre were not so lucky. “There was so much expertise,” Wayne reflected. “It was a good group, and now it has become much smaller. Before, it was much more cohesive; it had much more of a team feel about it. Now the group is fragmented…. Everybody is working on different things and in different projects.”

Soon, Wayne would leave Manchester, too, for a lectureship at Aberystwyth University, back home in his seaside Wales. It was a big accomplishment, and Wayne was grateful for and thrilled by the opportunity, even as he regretted the loss of the community at MCISB.

The trouble with a burgeoning research lab that ends its work prematurely is that the institutional knowledge and expertise built up in the collaboration are hard to codify, box up, and ship elsewhere. Ideas and emerging discoveries are, in part, relationally based, dependent on the complex interactions and conversations continually rehearsed and refined in a living community—an aspect of the “scientific method” that is rarely remarked upon. These communities can be seen as “knowledge ecologies”: a community cultivates a particular set of expertise and insight that, Wayne explained, includes knowing not only what works, but what doesn’t. As in an ecosystem, a disruption of a particular component in a sprawling chain of connections can affect the health of the whole. Like a living organism, collaboration is a complex system, and in the absence of nourishment—that is, funding—it falls apart and breaks down. As a result, the human capital specific to this community and project with its knowledge of yeast—and especially the collaborative understanding built up around that endeavor—has been lost.

And yet, given all the difficulties the lab had encountered in trying to build a model of yeast, and given that funding for science is never unlimited, and that its outcomes are never predictable, how is it possible to know whether an approach should be abandoned as a dead end or whether it just needs more money and more time to bear fruit?

Wayne fiddled a final pipette into a plastic tray and traipsed toward the analysis machine.

He waited for the graphs to appear. He would repeat this trial three more times before the night was through.

The lab bench next to his sat empty. It had been Mara’s, but above it now hung a paper sign with another name scrawled on it.

Much of Mara’s yeast work had been aimed at understanding how yeast cells grow, which the researchers had hoped might offer insight into how cancer cells grow. Ironically, while Mara was working on those experiments, her own body was growing a flurry of cancer cells. In 2012, Mara discovered she had bowel cancer, which she first conquered, only to become aware of its return in April 2013. It then quickly spread beyond control.

Mara died at the end of May in 2013. Those remaining at the Centre were devastated by her loss—Mara had been such a young, vibrant, and central presence in the community, and she was gone a year after getting sick. The researchers left at the Centre and other colleagues from the university rented a bus so they could all go to her funeral together.

Wayne was dumbfounded by Mara’s absence. When asked what he learned from her and what he had been feeling with her gone, he replied that though she was “very knowledgeable” and he had “worked with her loads,” right now, he “could hardly summarize.”

Without continued and concentrated funds for the Centre, its future is a little uncertain. Not only is a complete model of yeast still out of reach, so, too, are the insights and contributions such a model might hold for cancer research, larger organ models, the improvement of healthcare, and the entire systems biology community.

Determining that yeast was a living organism took about two hundred years—but it also took more than that. While Pasteur may seem like a one-man revolution, he was also part of a collaboration, albeit one across countries and time. His work built on the works of Kützing, Schwann, and Cagniard-Latour, who worked on yeast twenty years before Pasteur and who built their own works on that of Lavoisier, whose work predated theirs by another fifty years and who likely built his work on the research of his contemporary, Fabbroni. Moreover, it took industry investment, government support, the advent of advanced microscopes, and eminent learned men rassling over its essence before yeast was eventually understood as it is now: a fundamental unit of life.

Wayne and Mara, too, are descendants of this yeast work and scientific struggle. But while Pasteur and his contemporaries’ research was directly inspired and validated by the use of yeast in brewing and baking, Wayne and Mara’s lab is not located in a vineyard. The MCISB yeast researchers work instead in a large white office building, in the middle of Manchester, where, using the tools of modern molecular biology, they probe, pull apart, and map yeast. The small-scale science of Pasteur’s time has grown big—the distance between research and application widening as science has professionalized and institutionalized over time. The vineyard has been replaced by complex configurations of university-based laboratories, specialized health research institutes, pharmaceutical companies, policymaking bodies, regulatory agencies, funding councils, etc.—within which researchers of all types and stripes try to organize, mobilize, set up shop, and get to the bench or the computer.

Although the modeling of yeast is certainly related to application, insights about life derived from yeast-modeling will likely take some time to result in anything that concretely and directly helps to cure cancer—because the translation of research from lab bench to patient’s bedside is far from straightforward. Sure, these fundamental investigations into the nature of life bring new knowledge, but what, exactly, yeast will teach us and how that will translate into applications is still a little unclear. In other words, whether the promises of the research will become reality is unknown. This uncertainty is difficult to handle—not only for the scientists performing the experiments, building the computer models, and composing the grants, but also for the pharmaceutical industry representatives and government policymakers making funding decisions.

The fundamental character and exact function of yeast were not understood for a long time; now, we’re struggling to understand yeast’s systematic operations. How long will this struggle take? And do we really understand what’s at stake? Within the dilemma of finite funding resources, how do we figure out what research will eventually translate into practice? Pasteur understood the importance of these issues about the funding of research. In 1878, he wrote, “I beseech you to take interest in these sacred domains so expressively called laboratories. Ask that there be more…for these are the temples of the future, wealth, and well-being. It is here that humanity will grow, strengthen, and improve. Here, humanity will learn to read progress and individual harmony in the works of nature.”

In many ways, this is what our modern yeast devotees are also hoping: not only that yeast may once again be that ideal lab partner, the organism key to the next frontier of science, but also that our communities, our funding bodies, and our scientific institutions will continue to invest the needed time, infrastructure, and patience in working with yeast and awaiting the next level of discovery this little organism has to offer us.

Niki Vermeulen ([email protected]) is a Wellcome Research Fellow in the Center for the History of Science, Technology and Medicine of the University of Manchester (UK). Molly Bain ([email protected]), a writer, teacher, and performer, is working on an MFA in nonfiction at the University of Pittsburgh.

Breaking the Climate Deadlock

Developing a broad and effective portfolio of technology options could provide the common ground on which conservatives and liberals agree.

The public debate over climate policy has become increasingly polarized, with both sides embracing fairly inflexible public positions. At first glance, there appears little hope of common ground, much less bipartisan accord. But policy toward climate change need not be polarizing. Here we offer a policy framework that could appeal to U.S. conservatives and progressives alike. Of particular importance to conservatives, we believe, is the idea embodied in our framework of preserving and expanding, rather than narrowing, societal and economic options in light of an uncertain future.

This article reviews the state of climate science and carbon-free technologies and outlines a practical response to climate deadlock. Although it may be difficult to envision the climate issue becoming depoliticized to the point where political leaders can find common ground, even the harshest positions at the polar extremes of the current debate need not preclude the possibility.

We believe that a close look at what is known about climate science and the economic competitiveness of low-carbon/carbon-free technologies—which include renewable energy, advanced energy efficiency technologies, nuclear energy, and carbon capture and sequestration systems (CCS) for fossil fuels—may provide a framework that could even be embraced by climate skeptics willing to invest in technology innovation as a hedge against bad climate outcomes and on behalf of future economic vitality.

Most atmospheric scientists agree that humans are contributing to climate change. Yet it is important to also recognize that there is significant uncertainty regarding the pace, severity, and consequences of the climate change attributable to human activities; plausible impacts range from the relatively benign to globally catastrophic. There is also tremendous uncertainty regarding short-term and regional impacts, because the available climate models lack the accuracy and resolution to account for the complexities of the climate system.

Although this uncertainty complicates policymaking, many other important policy decisions are made in conditions of uncertainty, such as those involving national defense, preparation for natural disasters, or threats to public health. We may lack a perfect understanding of the plans and capabilities of a future adversary or the severity and location of the next flood or the causes of a new disease epidemic, but we nevertheless invest public resources to develop constructive, prudent policies and manage the risks surrounding each.

Reducing atmospheric concentrations of greenhouse gases (GHGs) would require widespread deployment of carbon-free energy technologies and changes in land-use practices. Under extreme circumstances, addressing climate risks could also require the deployment of climate remediation technologies such as atmospheric carbon removal and solar radiation management. Unfortunately, leading carbon-free electric technologies are currently about 30 to 290% more expensive on an unsubsidized basis than conventional fossil fuel alternatives, and technologies that could remove carbon from the atmosphere or mitigate climate impacts are mostly unproven and some may have dangerous consequences. At the same time, the pace of technological change in the energy sector is slow; any significant decarbonization will unfold over the course of decades. These are fundamental hurdles.

It is also reasonably clear, particularly after taking into account the political concerns about economic costs, that widespread deployment of carbon-free technologies will not take place until diverse technologies are fully demonstrated at commercial scale and the cost premium has been reduced to a point where the public views the short-term political and economic costs as being reasonably in balance with plausible longer-term benefits.

Given these twin assessments, we propose a practical approach to move beyond climate deadlock. The large cost premium and unproven status of many technologies point to a need to focus on innovation, cost reduction, and successfully demonstrating multiple strategically important technologies at full commercial scale. At the same time, the uncertainty of long-term climate projections, together with the 1000+ year lifetime of CO2 in the atmosphere, argues for a measured and flexible response, but one that can be ramped up quickly.

This can be done by broadening and intensifying efforts to develop, fully demonstrate, and reduce the cost of a variety of carbon-free energy and climate remediation technologies, including carbon capture and sequestration and advanced nuclear, renewable, and energy efficiency technologies. In addition, atmospheric carbon removal and solar radiation management technologies should be carefully researched.

Conservatives have typically been strong supporters of fundamental government research, as well as technology development and demonstration in areas that the private sector does not support, such as national security and health. Also, even the most avowed climate skeptic will often concede that there are risks of inaction, and that it is prudent for national and global leaders to hedge against those risks, just as a prudent corporate board of directors will hedge against future risks to corporate profitability and solvency. Moreover, increasing concern about climate change abroad suggests potentially large foreign markets for innovative energy technologies, thus adding an economic competitiveness rationale for investment that does not depend on one’s assessment of climate risk.

Some renewed attention is being devoted to innovation, but funding is limited and the scope of technologies is overly constrained. Our suggested policy approach, in contrast, would involve a three- to fivefold increase in R&D and demonstration spending in both the public and private sectors, including possible new approaches that involve more than simply providing the funding through traditional channels such as the Department of Energy (DOE) and the national labs.

Investing in the development of technology options is a measured, flexible approach that could also shorten the time needed to decarbonize the economy. It would give future policymakers more opportunities to deploy proven, lower-cost technologies, without the commitment to deploy them if they turn out to be unnecessary, ineffective, or uneconomic. And with greater emphasis on innovation, it would allow technologies to be deployed more quickly, broadly, and cost-effectively, which would be particularly important if impacts are expected to be rapid and severe.

In addition to research, development, and demonstration (RD&D), new policy options to support technology deployment should be explored. Current deployment programs principally using the tax code have not, at least to date, successfully commercialized technologies in a widespread and cost-effective manner or provided strong incentives for continued innovation. New approaches are necessary.

Climate knowledge

Although new research constantly adds to the state of scientific knowledge, the basic science of climate change and the role of human-generated emissions have been reasonably well understood for at least several decades. Today, most climate scientists agree that human-caused warming is underway. Some of the major areas of agreement include the following:

About these basic points there is little debate, even from those who believe that the risks are not likely to be severe. At the same time, it is true that long-term climate projections are subject to considerable uncertainty and legitimate scientific debate. The fundamental complexity of the climate system, in particular the feedback effects of clouds and water vapor, is the most important contributor to uncertainty. Consequently, long-term projections reflect substantial uncertainty about how rapidly, and to what extent, temperatures will increase over time. It is possible that the climate will be relatively slow to warm and that the effects of warming may be relatively mild for some time. But there is also a worrisome likelihood that the climate will warm too quickly for society to adapt and prosper—with severe or perhaps even catastrophic consequences.

Unfortunately, we should not expect the range of climate projections to narrow in a meaningful way soon; policymakers may hope for the best but must prepare for the worst.

Technology readiness

Under the best of circumstances, the risks associated with climate uncertainties could be managed, at least in part, with a mix of today’s carbon-free energy and climate remediation technologies. Carbon-free energy generation, as used in this paper, includes renewable, nuclear, and carbon capture and sequestration systems for fossil fuels such as coal and natural gas. Climate remediation technologies (often grouped together under the term “geoengineering”) include methods for removing greenhouse gases from the atmosphere (such as air capture), as well as processes that might mitigate some of the worst effects of climate change (such as solar radiation management). We note that energy efficiency or the pursuit of greater energy productivity is prudent even in the absence of climate risk, so it is particularly important in the face of it. Although this discussion focuses on electric generation, any effective decarbonization policy will also need to address emissions from the transportation sector; the residential, commercial, and industrial sectors; and land use. Similar frameworks, focused on expanding sensible options and hedging against a worst-case future, could be developed for each.

To be effective, carbon-free and climate remediation technologies and processes need to be economically viable, fully demonstrated at scale (if they have not yet been), and capable of global deployment in a reasonably timely manner. They would also need to be sufficiently diverse and economical to be deployed in varied regional economies across the world, ranging from the relatively low-growth developed world to the rapidly growing developing nations, particularly those with expanding urban centers such as China and India.

The list of strategically essential climate technologies is not long, yet each of these technologies, in its current state of development, is limited in important ways. Although their status and prospects vary in different regions of the world, they are either not yet fully demonstrated, not capable of rapid widespread global deployment, or unacceptably expensive relative to conventional energy technologies. These limitations are well documented, if not widely recognized or acknowledged. The limitations of current technologies can be illustrated by quickly reviewing the status of a number of major electricity-generating technologies.

Onshore wind and some other renewable technologies such as solar photovoltaic (PV) have experienced dramatic cost reductions over the past three decades. These cost reductions, along with deployment subsidies, have clearly had an impact. Between 2009 and 2013, U.S. wind output more than doubled, and U.S. solar output increased by a factor of 10. However, because ground-level winds are typically intermittent, wind turbines cannot be relied on to generate electricity whenever there is electrical demand, and the amount of generating output cannot be directly controlled in response to moment-by-moment changes in electric demand and the availability of other generating resources. As a consequence, wind turbines do not produce electrical output of comparable economic value to the output of conventional generating resources such as natural gas–fired power plants that are, in energy industry parlance, both “firm” and “dispatchable.” Furthermore, the cost of a typical or average onshore wind project in the United States, without federal and state subsidies, although now less than that of new pulverized coal plants, is still substantially more than that of a new gas-fired combined-cycle plant, which is generally considered the lowest-cost conventional resource in most U.S. power markets. Solar PV also suffers from its intermittency and variability, and significant penetration of solar PV can test grid reliability and complicate distribution system operation, as we are now seeing in Germany. Some of these challenges can be overcome with careful planning and coordinated execution, but the scale-up potential and economics of these resources could be improved substantially by innovations in energy storage, as well as technological improvements to increase renewables’ power yield and capacity factor.

Current light-water nuclear power technology is also more expensive than conventional natural gas generation in the United States, and suffers from safety concerns, waste disposal challenges, and proliferation risks in some overseas markets. Further, given the capital intensity and large scale of today’s commercial nuclear plants (which are commonly planned as two 1,000–megawatt (MW) generating units), the total cost of a new nuclear plant exceeds the market capitalization of many U.S. electric utilities, making sole-ownership investments a “bet-the-company” financial decision for corporate management and shareholders. Yet recent improvements in costs have been demonstrated in overseas markets through standardized manufacturing processes and economies of scale; and many new innovative designs promise further cost reductions, improved safety, a smaller waste footprint, and less proliferation risk.

CCS technology is also limited. Although all major elements of the technology have been demonstrated successfully, and the process is used commercially in some industrial settings and for enhanced oil recovery (EOR), it is only now on track to being fully demonstrated at two commercial-scale electric generation facilities under construction, one in the United States and one in Canada. And deploying CCS on existing electric power plants would reduce generation efficiency and increase production costs to the point where such CCS retrofits would be uneconomic today without large government incentives or a carbon price higher than envisioned in recent policy proposals.

The cost premium of these carbon-free technologies relative to that of conventional natural gas–fired combined cycle technology in the United States is illustrated in the next chart.

As shown, the total levelized cost of new natural gas combined-cycle generation over its expected operating life is roughly $67 per megawatt-hour (MWh). In contrast, typical onshore wind projects (without federal and state subsidies and without considering the cost of backup power and other grid integration requirements) cost about $87/MWh. New gas-fired combined-cycle plants with CCS cost approximately $93/MWh and nuclear projects about $108/MWh. New coal plants with CCS, solar PV, and offshore wind projects are yet more costly. Taken together, these estimates generally point to a cost premium of $20 to $194/MWh, or 29 to 290%, for low-carbon generation.
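
For readers who want to see where this premium range comes from, the sketch below simply recomputes it from the levelized-cost figures quoted above. The costliest technologies (coal with CCS, solar PV, offshore wind) are represented only by the $194/MWh upper bound cited in the text, and small differences from the quoted 29% reflect rounding in the underlying EIA estimates.

# Recomputing the quoted cost premiums from the levelized costs cited in the text ($/MWh).
baseline = 67  # new natural gas combined-cycle generation
alternatives = {
    "onshore wind (unsubsidized)": 87,
    "gas combined cycle with CCS": 93,
    "nuclear": 108,
}

for name, cost in alternatives.items():
    premium = cost - baseline
    print(f"{name}: +${premium}/MWh, about {100 * premium / baseline:.0f}% above gas")

# Upper end of the range quoted in the text: a $194/MWh premium over the $67/MWh baseline.
print(f"upper bound: about {100 * 194 / baseline:.0f}% above gas")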

Some may argue that this cost premium is overstated because it does not reflect the cost of the carbon externality. This would be accurate from a conceptual economic perspective, but from a commercial or customer perspective, it is understated because it doesn’t account for the substantial costs of providing backup or stored power to overcome intermittency problems. The practical effect of this cost difference remains: However the cost premium might be reduced over time (whether through carbon pricing, other forms of regulation, higher fossil fuel prices, or technological innovation), the gap today is large enough to constitute a fundamental impediment to developing effective deployment policies.

This is evidenced in the United States by the wind industry’s continued dependence on federal tax incentives, the difficulty of securing federal or state funding for proposed utility-scale CCS projects, the slow pace of developing new nuclear plants, and the recent controversies in several states proposing to develop new offshore wind and coal gasification projects. The inability to pass federal climate legislation can also be seen as an indication of widespread concern about the cost of emissions reductions using existing technologies, the effectiveness of the legislation in the global long-term context, or both.

FIGURE 1

Source: EIA levelized cost of electricity (LCOE) estimates, Annual Energy Outlook 2013.

Cost considerations are even more fundamental in the developing world, where countries’ overriding economic goal is to raise their population’s standard of living. This usually requires inexpensive sources of electricity, and technologies that are only available at a large cost premium are unlikely to be rapidly or widely adopted.

Although there is little doubt that there are opportunities to reduce the cost and improve the performance of today’s technologies, history shows that technological transformation in the energy sector is typically slow, unpredictable, and incremental, because the sector relies on long-lived, capital-intensive production and infrastructure assets tied together through complex global industries—characteristics contributing to tremendous inertia. Engineering breakthroughs are rare, and new technologies typically take many decades to reach maturity at scale, sometimes requiring the development of new business models. As described by Arnulf Grübler and Nebojsa Nakicenovic, scholars at the International Institute for Applied Systems Analysis (IIASA), the world has only made two “grand” energy transitions: one from biomass to coal between 1850 and 1920, and a second from coal to oil and gas between 1920 and today. The first transition lasted roughly 70 years; the second has now lasted approximately 90 years.

A similar theme is seen in the electric generating industry. In the 130 years or so since central generating stations and the electric lightbulb were first established, only a handful of basic electric generating technologies have become commercially widespread. By far the most common of these is the thermal power station, which uses energy from either the combustion of fossil fuels (coal, oil, and gas) or a nuclear reactor to operate a steam turbine, which in turn powers an electric generator.

The conditions that made energy system transitions slow in the past still exist today. Even without political gridlock, it could well take many decades to decarbonize the global energy sector, a period of time that would produce much higher atmospheric concentrations of CO2 and ever-greater risks to society. This points to the importance of beginning the long transition to decarbonize the economy as soon as possible.

Policy implications

Given the uncertainties in climate projection, innovation, and technology deployment, developing a broad range of technology options can be a hedge against climate risk.

Technology “options” (as the term is used here) include carbon-free technologies that are relatively costly or not fully demonstrated but that, with innovation through fundamental and applied RD&D, might become sufficiently reliable, affordable, and scalable to be widely deployed if and when policymakers determine they are needed. (They are not to be confused with other technologies, such as controls for non-CO2 GHGs such as methane and niche EOR applications of fossil CCS, which have already been commercialized.)

A technology option is analogous to a financial option. The investment to create the technology is akin to the cost of buying the financial option; it gives the owner the right but not the obligation to engage in a later transaction.
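
A stylized expected-value calculation can make the analogy concrete. The numbers below are entirely hypothetical; they are meant only to show why paying for the option now can be worthwhile even if the technology is never deployed in most futures.

# Hypothetical illustration of a technology option valued like a financial option.
rdd_cost = 20             # up-front cost of developing the option (arbitrary units)
p_needed = 0.5            # probability future policymakers decide rapid decarbonization is required
p_success = 0.6           # probability the RD&D yields a proven, affordable technology
deployment_benefit = 400  # net benefit of deploying the technology if it is needed and it works

# Holding the option means paying rdd_cost now and deploying only when the technology
# is both needed and successful; there is no obligation to deploy otherwise.
expected_value_with_option = p_needed * p_success * deployment_benefit - rdd_cost
expected_value_without_option = 0  # nothing spent, but nothing available to deploy

print(expected_value_with_option, expected_value_without_option)  # 100.0 vs 0

Under these made-up numbers the option is worth holding even though, in most futures (here, 70% of them), it is never exercised.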

Examples of carbon-free generation options include small modular nuclear reactors (SMRs) or advanced Generation IV nuclear reactor technologies such as sodium or gas-cooled fast reactors; advanced CCS technologies for both coal and natural gas plants; underground coal gasification with CCS (UCG/CCS); and advanced renewable technologies. Developing options on such technologies (assuming innovation success) would reduce the cost premium of decarbonization, the time required to decarbonize the global economy, and the risks and costs of quickly scaling up technologies that are not yet fully proven.

In contrast to carbon-free generation, climate remediation options could directly remove carbon from the atmosphere or mitigate some of its worst effects. Examples include atmospheric carbon removal technologies (such as air capture and sequestration, regional or continental afforestation, and ocean iron fertilization) and solar radiation management technologies (such as stratospheric aerosol injection and cloud-whitening systems.) Because these technologies have the potential to reduce atmospheric concentrations or global average temperatures, they could (if proven) reduce, reverse, or prevent some of the worst impacts of climate change if atmospheric concentrations rise to unacceptably high levels. The challenge with this category of technologies will be to reduce the cost and increase the scale of application while avoiding unintended environmental and ecosystem harms that would offset the benefits they create.

Again, investing now in the development of such technology options would not create an obligation to deploy them, but it would yield reliable performance and cost data for future policymakers to consider in determining how to most effectively and efficiently address the climate issue. That is the essence of an iterative risk management process. Such a portfolio approach would also position the country to benefit economically from the growing overseas markets for carbon-free generation and other low-carbon technologies. It also addresses the political and economic polarization around various energy options, with some ideologies and interests focused on renewables, others on nuclear energy, and still others on CCS. A portfolio approach not only hedges against future climate uncertainties but also offers expanded opportunities for political inclusiveness and economic benefit. Over a period of time, investments in new and expanded RD&D programs would lead to new intellectual property that could help grow investments, design, manufacturing, employment, sales, and exports to serve overseas and perhaps domestic markets.

This portfolio approach would be a significant departure from current innovation and deployment policies. Although new attention is being devoted to energy innovation, including DOE’s Advanced Research Projects Agency–Energy (ARPA-E), the scope of technologies is far too constrained. For instance, despite its importance, a fully funded program to demonstrate multiple commercial-scale post-combustion CCS systems for both coal and natural gas generating technologies has yet to be established. Similarly, efforts to develop advanced nuclear reactor designs are limited, and there is almost no government support for climate remediation technologies. Renewable energy can make a large contribution, but numerous studies have demonstrated that it will probably be much more difficult and costly to decarbonize our electricity system within the next half century without CCS and nuclear power.

Our approach, in contrast, would involve a broader mix of technologies and innovation programs, including fossil, advanced nuclear, advanced renewable, and climate remediation technologies, to maximize our chances of creating proven, scalable, and economic technologies for deployment.

The specific deployment policies needed would depend in part on the choice of technologies and the status of their development, but they would probably encompass an expanded suite of programs across the RD&D-to-commercialization continuum, including fundamental and applied R&D programs, incentives, and other means to support pilot and demonstration programs, government procurement programs, and joint international technology development and transfer efforts.

The innovation processes used by the federal government also warrant assessment and possible reform. A number of important recent studies and reports have critiqued past and current policies and put forward recommendations to accelerate innovation. Of particular note are recommendations to provide greater support for demonstration projects, expand ARPA-E, create new institutions (such as a Clean Energy Deployment Administration, a Green Bank, an Energy Technology Corporation, Public-Private Partnerships, or Regional Innovation Investment Boards), and promote competition between government agencies such as DOE and the Department of Defense. All of these deserve further attention.

Of course there will never be enough money to do everything. That’s why a strategic approach is essential. The portfolio should focus on strategically important technologies with the potential to make a material difference, based on analytical criteria such as:

To illustrate this further, programs might include the following.

  1. A program to demonstrate multiple CCS technologies, including post-combustion coal, pre-combustion coal, and natural gas combined-cycle technologies at full commercial scale.
  2. A program to develop advanced nuclear reactor designs, including a federal RD&D program capable of addressing each of the fundamental concerns about nuclear power. Particular attention should be given to the potential for small modular reactors (SMRs) and advanced, non–light-water reactors. A key complement to such a program would be the review and, if necessary, reform of Nuclear Regulatory Commission expertise and capabilities to review and license advanced reactor designs.
  3. Augmentation of the Department of Defense’s capabilities to sponsor development, demonstration, and scale-up of advanced energy technology projects that contribute to the military’s national security mission, such as energy security for permanent bases and energy independence for forward bases in war zones.
  4. Continued expansion of international technology innovation programs and transfer of insights from overseas manufacturing processes that have resulted in large capital cost reductions for the United States. In recent years, a number of government-to-government and business–to–nongovernmental organization partnerships have been established to facilitate such technology innovation and transfer efforts.
  5. Consideration of the use of a competitive procurement model, in which government provides funding opportunities for private-sector partners to demonstrate and deploy selective technologies that lack a current market rationale to be commercialized.

Note that this is not intended to be an exhaustive list of the efforts that could be considered; in particular, new models of public-private cooperation in technology development deserve consideration.

The technology options approach outlined in this paper, with its emphasis on research, development, demonstration, and innovation, serves a different albeit overlapping purpose from deployment programs such as technology portfolio standards, carbon-pricing policies, and feed-in tariffs. The options approach focuses primarily on developing improved and new technologies, whereas deployment programs focus primarily on commercializing proven technologies.

RD&D and deployment policies are generally recognized as being complementary; both would be needed to fully decarbonize the economy unless carbon mitigation was in some way highly valued in the marketplace. In practice, at least to date, technology deployment programs have not successfully commercialized carbon-free technologies in a widespread, cost-effective manner, or offered incentives to continue to innovate and improve the technology. New approaches including the use of market-based pricing mechanisms such as reverse auctions and other competitive procurement methods are likely to be more flexible, economically efficient, and programmatically effective.

Yet deploying new carbon-free technologies on a widespread basis over an extended period of time will be a policy challenge until the cost premium has been reduced to a level at which the tradeoffs between short-term certain costs and long-term uncertain benefits are acceptable to the public. Until then, new deployment programs will be difficult to establish, and if they are established, they are likely to have little material impact (because efforts to constrain program costs would lead these programs to have very limited scopes) or be quickly terminated (due to high program costs), as we have seen with, for example, the U.S. Synthetic Fuels Corporation. Therefore, substantially reducing the cost premium for carbon-free energy must be a priority for both innovation and deployment programs. It is likely to be the fastest and most practical path to create a realistic opportunity to rapidly decarbonize the economy.

Although we are not proposing a specific or complete set of programs in this paper, it is fair to say that our policy approach would involve a substantial increase in energy RD&D spending—an effort that could cost between $15 billion and $25 billion per year, a three- to fivefold increase over recent energy RD&D spending levels.

This is a significant increase over historic levels but modest compared to current funding for medical research (approximately $30 billion per year) and military research (approximately $80 billion per year), in line with previous R&D initiatives over the years (such as the War on Terror, the NIH buildup in the early 2000s, and the Apollo space program), and similar to other recent energy innovation proposals.

The increase in funding would need to be paid for, requiring redirection of existing subsidies, funding a clean energy trust from federal revenues accruing from expanded oil and gas production, a modest “wires charge” on electricity rate payers, or reallocations as part of a larger tax reform effort. We are not suggesting that this would necessarily be easy, only that such investments are necessary and are not out of line with other innovation investment strategies that the nation has adopted, usually with bipartisan support. In this light, we emphasize again the political virtues of a portfolio approach that keeps technological options open and offers additional possible benefits from the potential for enhanced economic competitiveness.

In light of the uncertain but clear risk of severe climate impacts, prudence calls for undertaking some form of risk management. The minimum 50-year time period that will be required to decarbonize the global economy and the effectively irreversible nature of any climate impacts argue for undertaking that effort as soon as reasonably possible. Yet pragmatism requires us to recognize that most of the technologies needed to manage this risk are either substantially more expensive than conventional alternatives or are as yet unproven.

These uncertainties and challenges need not be confounding obstacles to action. Instead, they can be addressed in a sensible way by adopting the broad “portfolio of technology options” approach outlined in this paper; that is, by developing a diverse array of technologies (including carbon capture, advanced nuclear, advanced renewable, atmospheric carbon removal, and solar radiation management) to the point where they are proven, and deploying the most successful ones if and when policymakers determine they are needed.

This approach would provide policymakers with greater flexibility to establish policies deploying proven, scalable, and economical technologies. And by placing greater emphasis on reducing the cost of scalable carbon-free technologies, it would allow these technologies to be deployed more quickly, broadly, and cost-effectively than would otherwise be possible. At the same time, it would not be a commitment to deploy them if they turn out to be unnecessary, ineffective, or uneconomical.

We believe that this pragmatic portfolio approach should appeal to thoughtful people across the political spectrum, but most notably to conservatives who have been skeptical of an “all-in” approach to climate that fails to acknowledge the uncertainties of both policymaking and climate change. It is at least worth testing whether such an approach might be able to break our current counterproductive deadlock.

David Garman, a principal and managing partner at Decker Garman Sullivan LLC, served as undersecretary in the Department of Energy in the George W. Bush administration. Kerry Emanuel is the Cecil and Ida Green Professor of atmospheric science at the Massachusetts Institute of Technology and codirector of MIT’s Lorenz Center, a climate think tank devoted to basic curiosity-driven climate research. Bruce Phillips is a director of The NorthBridge Group, an economic and strategic consulting firm.

How Hurricane Sandy Tamed the Bureaucracy

Remember Hurricane Irene? It pushed across New England in August 2011, leaving a trail of at least 45 deaths and $7 billion in damages. But just over a year later, even before the last rural bridge had been rebuilt, Hurricane Sandy plowed into the New Jersey–New York coast, grabbing the national spotlight with its even greater toll of death and destruction. And once again, the region—and the nation—swung into rebuild mode.

Certainly, some rebuilding after such storms will always be necessary. However, this one-two punch underscored a pervasive and corrosive aspect of our society: We have rarely taken the time to reflect on how best to rebuild developed areas before the next crisis occurs, instead committing to a disaster-by-disaster approach to rebuilding.

Yet Sandy seems to have been enough of a shock to stimulate some creative thinking at both the federal and regional levels about how to break the cycle of response and recovery that developed communities have adopted as their default survival strategy. I have witnessed this firsthand as part of a team that designed a decision tool called the Sea Level Rise Tool for Sandy Recovery, to support not just recovery from Sandy but preparedness for future events. The story that has emerged from this experience may contain some useful lessons about how science and research can best support important social decisions about our built environment. Such lessons are likely to grow in importance as climate change makes extreme weather events all but inevitable.

A story of cooperation

In the wake of Sandy, pressure mounted at all levels, from local to federal, to address one question: How would we rebuild? This question obviously has many dimensions, but one policy context cuts across them all. The National Flood Insurance Program provides information on flood risk that developers, property owners, and city and state governments are required to use in determining how to build and rebuild. Run by the Federal Emergency Management Agency (FEMA), the program provides information on the height of floodwaters, known as flood elevations, that can be used to delineate on a map where it is more or less risky to build. Flood elevations are calculated based on analysis of how water moves over land during storms of varying intensity, essentially comparing the expected elevation of the water surface to that of dry land. FEMA then uses this information to create flood insurance rate maps, and insurers use the maps to determine the cost of insurance in flood-prone areas. The cost of insurance and the risk of flooding are major factors for individuals and communities in determining how high to build structures and where to locate them to avoid serious damage during floods.

But here’s the challenge that our team faced after Sandy. The flood insurance program provided information on flood risk based only on conditions in past events, and not on conditions that may occur tomorrow. Yet coastlines are dynamic. Beaches, wetlands, and barrier islands all change in response to waves and tides. These natural features shift, even as the seawalls and levees that society builds to keep communities safe are designed to stay in place. In fact, seawalls and levees add to the complexity of the coastal environment and lead to new and different changes in coastal features. The U.S. Army Corps of Engineers implements major capital works, including flood protection and beach nourishment, to manage these dynamic features. The National Oceanic and Atmospheric Administration (NOAA) helps communities manage the coastal zone to preserve the amenities we have come to value on the coast: commerce, transportation, recreation, and healthy ecosystems, among others. And both agencies have long been doing research on another major factor of change for coastlines around the world: sea-level rise.

Any amount of sea-level rise, even an inch or two, increases the elevation of floodwaters for a given storm. Estimates of future sea-level rise are therefore a critical area of research. As Sandy approached, experts from NOAA and the Army Corps, other federal agencies, and several universities were completing a report synthesizing the state of the science on historic and future sea-level rise. The report, produced as part of a periodic updating of the National Climate Assessment, identified scenarios (plausible estimates) of global sea-level rise by the end of this century. Coupled with the best available flood elevations, the sea-level rise scenarios could help those responsible for planning and developing in coastal communities factor future risks into their decisions. This scenario-planning approach underscores a very practical element of risk management: If there’s a strong possibility of additional risk in the future, factor that into decisions today.
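
To make the scenario-planning arithmetic concrete, here is a minimal sketch in Python. The elevations and scenario values are illustrative numbers of my own choosing, not figures from FEMA maps or from the eventual Sandy recovery tool, and real analyses involve far more than this simple addition.

```python
# Illustrative only: the elevations and scenario values below are hypothetical,
# not figures from FEMA or the Sea Level Rise Tool for Sandy Recovery.

# Base flood elevation for a hypothetical coastal parcel, in feet above a
# fixed vertical datum.
base_flood_elevation_ft = 11.0

# Hypothetical sea-level rise scenarios for a chosen planning horizon, in feet.
slr_scenarios_ft = {"low": 0.7, "intermediate": 2.0, "high": 4.0}

# A first-order estimate simply shifts the flood elevation upward by the
# scenario amount; real analyses also account for local land movement,
# storm dynamics, and required freeboard.
for name, rise_ft in slr_scenarios_ft.items():
    future_flood_elevation_ft = base_flood_elevation_ft + rise_ft
    print(f"{name:>12} scenario: plan for flooding up to "
          f"{future_flood_elevation_ft:.1f} ft")
```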

Few people would argue with taking steps to avoid future risk. But making this happen is not as easy as it sounds. FEMA has to gradually incorporate future flood risk information into the regulatory program even as the agency modernizes existing flood elevations and maps. The program dates back to 1968, and much of the information on flood elevations is well over 10 years old. We now have newer information on past events, more precise measurements on the elevation of land surfaces, and better understanding of how to model and map the behavior of floodwaters. We also have new technologies for providing the information via the Internet in a more visually compelling and user-specific manner. Flood elevations and flood insurance rate maps have to be updated for thousands of communities across the nation. When events like Sandy happen, FEMA issues “advisory” flood elevations to provide updated and improved information to the affected areas even if the regulatory maps are not finalized. However, neither the updated maps nor the advisory elevations have traditionally incorporated sea-level rise.

Only in 2012 did Congress pass legislation—the Biggert-Waters Flood Insurance Reform Act—authorizing FEMA to factor sea-level rise into flood elevations provided by the flood insurance program, so the agency has had little opportunity to accomplish this for most of the nation. Right now, people could be rebuilding structures with substantially more near-term risk of coastal flooding because they are using flood elevations that do not account for sea-level rise.

Of course, reacting to any additional flood risk resulting from higher sea levels might entail the immediate costs of building higher, stronger, or in a different location altogether. But such short-term costs are counterbalanced by the long-term benefits of health and safety and a smaller investment in maintenance, repair, and rebuilding in the wake of a disaster. So how does the federal government provide legitimate science—science that decisionmakers regard as reliable and credible—regarding future flood risk to affected communities? And how might it create incentives, financial and otherwise, for factoring additional risks into decisions that may mean up-front costs in return for major long-term gains?

After Sandy, leaders of government locally and nationally were quick to recognize these challenges. President Barack Obama established a Hurricane Sandy Rebuilding Task Force. Governor Andrew Cuomo of New York established several expert committees to help develop statewide plans for recovery and rebuilding. Governor Chris Christie of New Jersey was quick to encourage higher minimum standards for rebuilding by adding 1 foot to FEMA’s advisory flood elevations. And New York City Mayor Michael Bloomberg created the Special Initiative on Risk and Resilience, connected directly to the city’s long-term planning efforts and to an expert panel on climate change, to build the scientific foundation for local recovery strategies.

The leadership and composition of the groups established by the president and the mayor were particularly notable and distinct from conventional efforts. They brought expertise and an emphasis that focused as strongly on preparedness for a future likely to look different from the present as on responding to the disaster itself. For example, the president’s choice of Shaun Donovan, secretary of the Department of Housing and Urban Development (HUD), to chair the federal task force implicitly signaled a new focus on ensuring that urban systems will be resilient in the face of future risks.

New York City’s efforts have been exemplary in this regard. The organizational details are complex, but there is one especially crucial part of the story that I want to tell. When Mayor Bloomberg created the initiative on risk and resilience, he also reconvened the New York City Panel on Climate Change (known locally as the NPCC), which had been established in 2008 to support the formulation of a long-term comprehensive development and sustainability plan, called PlaNYC. All of these efforts, which were connected directly to the Mayor’s Office of Long-Term Planning and Sustainability, were meant to be forward-looking and to integrate contributions from experts in planning, science, management, and response.

Tying the response to Sandy to the city’s varied efforts signaled a new approach to post-disaster development that embraced long-term resilience: the capacity to be prepared for an uncertain future. In particular, the NPCC’s role was to ensure that the evolving vulnerabilities presented by climate change would play an integral part in thinking about New York in the post-Sandy era. To this end, in September 2012, the City Council of New York codified the operations of the NPCC into the city’s charter, calling for periodic updates of the climate science information. Of course, science-based groups such as the climate panel should be valuable for communities and decisionmakers thinking about resilience and preparedness, but often they are ignored. Thus, another essential aspect of New York’s approach was that the climate panel was not just a bunch of experts speaking from a pulpit of scientific authority, but it also had members representing local and state government working as full partners.

Within NOAA, there are programs designed to improve decisions on how to build resilience into society, given the complex and uncertain interactions of a changing society and a changing environment. These programs routinely encourage engagement among different scales and sectors of government and resource management. For example, NOAA’s Regional Integrated Sciences and Assessments (RISA) program provides funding for experts to participate in New York’s climate panel to develop risk information that informs both the response to Sandy and the conceptual framework for adaptively managing long-term risk within PlaNYC. Through its Coastal Services Center, NOAA also provides scientific tools and planning support for coastal communities facing real-time challenges. When Sandy occurred, the center offered staff support to FEMA’s field offices, which served as the local hubs for emergency management and disaster relief. Such collaboration among the RISA experts, the center staff, and the FEMA field offices built the working relationships that made it possible to coordinate development of the Sea Level Rise Tool for Sandy Recovery.

In still other efforts, representatives of the president’s Hurricane Sandy Rebuilding Task Force and the Council on Environmental Quality were working with state and local leaders, including staff from New York City’s risk and resilience initiative. The leaders of the New York initiative were working with representatives of NOAA’s RISA program, as well as with experts on the NPCC who had participated in producing the latest sea-level rise scenarios for the National Climate Assessment. The Army Corps participated in the president’s Task Force and also contributed to the sea-level rise scenarios report. This complex organizational ecology also helped create a social network among professionals in science, policy, and management charged with building a tool that can identify the best available science on sea-level rise and coastal flooding to support recovery for the region.

Before moving on to the sea-level rise tool itself, I want to point out important dimensions of this social network and the context that facilitated such complex organizational coordination. Sandy presented a problem that motivated people in various communities of practice to work with each other. We all knew each other, wanted to help recovery efforts, and understood the limitations of the flood insurance program. In the absence of events such as Sandy, it is difficult to find such motivating factors; everyone is busy with his or her day-to-day responsibilities. Disaster drew people out of their daily routines with a common and urgent purpose. Moreover, programs such as RISA have been doing research not just to provide information on current and future risks associated with climate, but also to understand and improve the processes by which scientific research can generate knowledge that is both useful and actually used. Research on integrated problems and management across institutions and sectors is undervalued; how best to organize and manage such research is poorly understood in the federal government. Those working on this problem themselves constitute a growing community of practice.

Communities need to be able to develop long-term planning initiatives, such as New York’s PlaNYC, that are supported by bodies such as the city’s climate change panel. In order to do so, they have to establish networks of experts with whom they can develop, discuss, and jointly produce knowledge that draws on relevant and usable scientific information. But not all communities have the resources of New York City or the political capacity to embrace climate hazards. If the federal government wishes to support other communities in better preparing people for future disasters, it will have to support the appropriate organizational arrangements—especially those that can bridge boundaries between science, planning, and management.

Rising to the challenges

For more than two decades, the scientific evidence has been strong enough to enable estimates of sea-level rise to be factored into planning and management decisions. For example, NOAA maintains water-level stations (often referred to as tide gauges) that document sea-level change, and over the past 30 years, 88% of the 128 stations in operation have recorded a rise in sea level. Based on such information, the National Research Council published a report in 1987 estimating that sea level would rise between 0.5 and 1.5 meters by 2100. More recent estimates suggest it could be even higher.

Of course, many coastal communities have long been acutely aware of the gradual encroachment of the sea on beaches and estuaries, and the ways in which hurricanes and tropical storms can remake the coastal landscape. So, why is it so hard to decide on a scientific basis for incorporating future flood risk into coastal management and development?

For one thing, sea-level rise is different from coastal flooding, and the science pertaining to each is evolving somewhat independently. Researchers worldwide are analyzing the different processes that contribute to sea-level rise. They are thinking about, among other things, how the oceans will expand as they absorb heat from the atmosphere; about how quickly ice sheets will melt and disintegrate in response to increasing global temperature, thereby adding volume to the oceans; and about regional and local processes that cause changes in the elevation of the land surface independent of changes in ocean volume. Scientists are experimenting, and they cannot always experiment together. They have to isolate questions about the different components of the Earth system to be able to test different assumptions, and it is not an easy task to put the information back together again. This task of synthesizing knowledge from various disciplines and even within closely related disciplines requires interdisciplinary assessments.
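
As a schematic illustration of what “putting the information back together” involves, the sketch below sums hypothetical contributions to local relative sea-level rise. All of the numbers are invented for illustration, and the real synthesis must grapple with interactions and uncertainties that a simple sum ignores.

```python
# Schematic only: contributions to local relative sea-level rise, in inches,
# over some planning horizon. All values are invented for illustration.
contributions_in = {
    "thermal expansion of warming oceans": 4.0,
    "meltwater from glaciers and ice sheets": 5.0,
    "regional ocean circulation changes": 1.0,
    "vertical land motion (subsidence)": 2.0,  # sinking land adds to relative rise
}

local_relative_rise_in = sum(contributions_in.values())
for source, inches in contributions_in.items():
    print(f"{source:<42} {inches:+.1f} in")
print(f"{'total local relative sea-level rise':<42} {local_relative_rise_in:+.1f} in")
```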

The sea-level rise scenarios that our team used in designing the Sandy tool derived from the National Climate Assessment, which is prepared for Congress every four years to synthesize and summarize the state of the climate and its impacts on society, and they spanned a wide range. The scenarios were based on expert judgments of the scientific literature by a diverse team drawn from the fields of climate science, oceanography, geology, engineering, political science, and coastal management, and representing six federal agencies, four universities, and one local resource management organization. The scenarios report provided a range of 8 inches to 6.6 feet of global sea-level rise by the end of the century. (One main reason for such different projections is the currently inadequate understanding of the rate at which the ice sheets in Greenland and Antarctica are melting and disintegrating in response to increasing air temperature.) The scenarios were aimed at two audiences: regional and local experts who are charged with addressing variations in sea-level change at specific locations, and national policymakers who are considering potential impacts beyond any individual community, city, or even state.

But didn’t the experts’ choice to present such a broad range of sea-level rise estimates simply add to policymakers’ uncertainty about the future? The authors addressed this possible concern by associating risk tolerance—the amount of risk one would be willing to accept for a particular decision—with each scenario. For example, they said that anyone choosing to use the lowest scenario is accepting a lot of risk, because there is a wealth of evidence and agreement among experts that sea-level rise will exceed this estimate by the end of the century unless (and possibly even if) aggressive global emissions reduction measures are taken immediately. On the other hand, they said that anyone choosing to use the highest scenario is using great caution, because there is currently less evidence to support sea-level rise of this magnitude by the end of the century (although it may rise to such levels in the more distant future).

Thus, urban planners may want to consider higher scenarios of sea-level rise, even if they are less likely, because this approach will enable them to analyze and prepare for risks in an uncertain future. High sea-level rise scenarios may even provide additional factors of safety, particularly where the consequences of coastal flood events threaten human health, human safety, or critical infrastructure—or perhaps all three. The most likely answer might not always be the best answer for minimizing, preparing for, or avoiding risk. Framing the scenarios in this fashion helps avoid any misperception that risk is being exaggerated. More importantly, it supports deliberation among planners and policymakers about the basis for setting standards and for designing new projects in the coastal zone. The emphasis shifts to choices about how much or how little risk to accept.

In contrast to the scenarios developed for the National Climate Assessment, the estimates made by the New York City climate panel addressed regional and local variations in sea-level rise and were customized to support design and rebuilding decisions in the city that respond to risks over the next 25 to 45 years. They were developed after Sandy by integrating scientific findings published just the previous year—after the national scenarios report was released. The estimates were created using a combination of 24 state-of-the-art global climate models, observed local data, and expert judgment. Each climate model can be thought of as an experiment that includes different assumptions about global-scale processes in the Earth system (such as changes in the atmosphere). As with the national scenarios report, then, the collection of models provides a range of estimates of sea-level rise that together convey a sense of the uncertainties. The New York City climate panel held numerous meetings throughout the spring of 2013 to discuss the model projections and to frame its own statements about the implications of the results for future risks to the city arising from sea-level rise (e.g., changes in the frequency of coastal flooding due to sea-level rise). These meetings were attended not only by physical and social scientists but also by decisionmakers facing choices at all stages of the Sandy rebuilding process, from planning to design to engineering and construction.
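
As a rough illustration of how a collection of model results can be boiled down to a range, the sketch below computes low, middle, and high percentiles across an ensemble. The values are randomly generated stand-ins, and the NPCC’s actual methodology, model weighting, and choice of percentiles are more involved and are not reproduced here.

```python
# Hypothetical sketch: summarizing an ensemble of model-based sea-level rise
# projections into a range. The values are random stand-ins, not output from
# the 24 models used by the New York City Panel on Climate Change.
import random
import statistics

random.seed(0)

# Pretend each of 24 models yields one mid-century projection, in inches.
ensemble_inches = [random.uniform(8.0, 30.0) for _ in range(24)]

# Cut points at the 10th, 20th, ..., 90th percentiles of the ensemble.
deciles = statistics.quantiles(ensemble_inches, n=10)
low, middle, high = deciles[0], deciles[4], deciles[8]

print(f"10th percentile: {low:.1f} in")
print(f"median:          {middle:.1f} in")
print(f"90th percentile: {high:.1f} in")
```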

As our team developed the sea-level rise tool, we found minimal difference between the models used by the New York climate panel and the nationally produced scenarios. At most, the extreme national scenarios and the high-end New York projections were separated by 3 inches, and the intermediate scenarios and the mean model values were separated by 2 inches. These discrepancies are well within the limits of accuracy of current knowledge about future sea-level rise. But even small discrepancies can make a big difference in planning and policymaking.

New York State regulators evaluating projects proposed by organizations that manage critical infrastructure, such as power plants and wastewater treatment facilities, look to science vetted by the federal government as a basis for approving new or rebuilt infrastructure. Might the discrepancies between the scenarios produced for the National Climate Assessment and the projections made by the NPCC, however small, cause regulators to question the scientific and engineering basis for including future sea-level rise in their project evaluations? Concerned about this prospect, the New York City Mayor’s Office wanted the tool to use only the projections of its own climate panel.

The complications didn’t stop there. In April 2013, HUD Secretary Donovan announced a Federal Flood Risk Reduction Standard, developed by the Hurricane Sandy Rebuilding Task Force, for federal agencies to use in their rebuilding and recovery efforts in the regions affected by Sandy. The standard added 1 foot to the advisory flood elevations provided by the flood insurance program. Up to that point, our development team had been working in fairly confidential settings, but now we had to consider additional questions. Would the tool be used to address regulatory requirements of the flood insurance program? Why use the tool instead of the advisory elevations or the Federal Flood Risk Reduction Standard? How should decisionmakers deal with any differences between the 1-foot advisory elevation and the information conveyed by the tool? We spent the next two months addressing these questions and potential confusion over different sets of information about current and future flood risk.

Our team—drawn from NOAA, the Army Corps, FEMA, and the U.S. Global Change Research Program—released the tool in June 2013. It provides both interactive maps depicting flood-prone areas and calculators for estimating future flood elevations, all under different scenarios of sea-level rise. Between the time of Secretary Donovan’s announcement and the release of the tool, the team worked extensively with representatives from FEMA field offices, the New York City climate panel, the New York City Mayor’s Office, and the New York and New Jersey governors’ offices to ensure that the choices about the underlying scientific information were well understood and clearly communicated. The social connections were again critical in convening the right people from the various levels of government and the scientific and practitioner communities.

During this period, the team made key changes in how the tool presented information. For example, the Hurricane Sandy Rebuilding Task Force approved the integration of sea-level rise estimates from the New York climate panel into the tool, providing a federal seal of approval that could give state regulators confidence in the science. This decision also helped address the minimal discrepancies between the long-term scenarios of sea-level rise made for the National Climate Assessment and the shorter-term estimates made by the New York climate panel. The President’s Office of Science and Technology Policy also approved expanding access to the tool via a page on the Global Change Research Program’s website. This access point helped distinguish the tool as an interagency product separate from the National Flood Insurance Program, thus making clear that its use was advisory, not mandated by regulation. Supporting materials on the website (including frequently asked questions, metadata, planning context, and disclaimers) provided background detail for various user communities and also helped to make clear that the New York climate panel sea-level rise estimates were developed through a legitimate and transparent scientific process.

The process of making the tool useful for decisionmakers involved diverse players in the Sandy recovery story discussing different ideas about how people and organizations were considering risk in their rebuilding decisions. For example, our development team briefed a diverse set of decisionmakers in the New York and New Jersey governments to facilitate deliberations about current and future risk. Our decision to use the New York City climate panel estimates in the tool helped to change the recovery and rebuilding process from past- to future-oriented, not only because the science was of good quality but because integration of the panel’s numbers into the tool brought federal, state, and city experts and decisionmakers together, while alleviating the concerns of state regulators about small discrepancies between different sea-level rise estimates.

In 2013, New York City testified in a rate case (the process by which public utilities set rates for consumers) and called for Con Edison (the city’s electric utility) and the Public Service Commission to ensure that near-term investments are made to fortify utility infrastructure assets. Con Edison has planned for $1 billion in resiliency investments that address future risk posed by climate change. As part of this effort, the utility has adopted a design criterion that starts from the 100-year flood elevations in FEMA’s flood insurance rate maps and adds 3 feet to account for a high-end estimate of sea-level rise by mid-century. This marked the first time in the country that a rate case explicitly incorporated consideration of climate change.

New York City also passed 16 local laws in 2013 to improve building codes in the floodplain, to protect against future risk of flooding, high winds, and prolonged power outages. For example, Local Law 96/2013 adopted FEMA’s updated flood insurance rate maps with additional safety standards for some single-family homes, based on sea-level rise as projected by the NPCC.

Our development team would never have known about New York City’s need to support a rate case with federally vetted information on future risk if we had not worked with officials from the city’s planning department. Engaging city and state government officials was useful not just for improving the clarity and purpose of the information in the tool, but also for choosing what information to include so that the tool could support a comprehensive and implementable strategy.

Different scales of government—local, state, and federal—have to be able to lead processes for bringing appropriate knowledge and standards into planning, design, and engineering. Conversely, all scales of government need to validate the standards revealed by these processes, because they all play a role in implementation.

Building resilience capacity

This complex story has a particularly important yet unfamiliar lesson: Planning departments are key partners in helping break the cycle of recovery and response, and in helping people put lessons learned from science into practice. Planners at different levels of government convene different communities of practice and disciplinary expertise around shared challenges. Similarly, scientific organizations that cross the boundaries between these different communities—such as the New York City climate panel and the team that developed the sea-level rise tool—can also encourage those interactions. As I’ve tried to illustrate, planning departments convene scientists and decisionmakers alike to work across organizational boundaries that under normal circumstances help to define their identities. These are important ingredients for preparing for future natural disasters and increasing our resilience to them over the long term, and yet this type of science capacity is barely supported by the federal government. How might the lessons from the Sea Level Rise Tool for Sandy Recovery and from Hurricane Sandy itself be more broadly adopted to help the nation move away from disaster-by-disaster policy and planning? Here are two ideas to consider in the context of coastal resilience.

First, re-envision the development of resilient flood standards as planning processes, not just numbers or codes.

Planning is a comprehensive and iterative function in government and community development. Planners are connected to or leading the development of everything from capital public works projects to regional plans for ecosystem restoration. City waterfronts, wildlife refuges and restored areas, and transportation networks all draw the attention of planning departments.

In their efforts, planners seek to keep development goals rooted in public values, and they are trained, formally and informally, in the process of civic engagement, in which citizens have a voice in shaping the development of their community. Development choices include how much risk to accept and whether or how the federal government regulates those choices. For this reason, planners maintain practical connections to existing regulations and laws and to the management of existing resources. Their position in the process of community development and resource management requires planners to also be trained in applying the results of research (such as sea-level rise scenarios) to design and engineering. Over the past decade, many city and state governments have either explicitly created sustainability planner positions at high levels (such as mayors’ or governors’ offices) or reframed their planning departments to emphasize sustainability, as in the case of New York City. The planners in these positions are enormously important for building resilience into urban environments, not because they can see the future, but because they provide a nucleus for convening the diverse constituencies from which visions of, and pathways to, the future are imagined and implemented.

If society is to be more resilient, planners must be critical actors in government. We cannot expect policymakers and the public to simply trust or comprehend or even find useful what we learn from science. We have to reconcile what we learn from science with the practical realities we face in an increasingly populated and stressed environment. And yet, despite their critical role in achieving resilience, many local planning departments across the country have been eliminated during the economic downturn.

Second, configure part of our research and service networks to be flexible in response to emergent risk.

The federal government likes to build new programs, sometimes at the expense of working through existing ones, because new initiatives can be political instruments for demonstrating responsiveness to public needs. But recovery from disasters and preparation to better respond to future disasters can be supported through existing networks. Across the country, FEMA has regional offices that work with emergency managers, and NOAA supports a nationwide network of university-based Sea Grant programs that engage communities in science-based discussions on issues related to coastal management. Digital Coast, a partnership between NOAA and six national, regional, and state planning and management organizations, provides timely information on coastal hazards and communities. These organizations work together to develop knowledge and solutions for planners and managers in coastal zones, in part by funding university-based science-and-assessment teams. The interdisciplinary expertise and localized focus of such teams help scientists situate climate and weather information in the context of ongoing risks such as sea-level rise and coastal flooding. All of these efforts contributed directly and indirectly to the Sea Level Rise Tool before, during, and after Hurricane Sandy.

The foundational efforts of these programs exemplify how science networks can leverage their relationships and expertise to get timely and credible scientific information into the hands of people who can benefit from it. Rather than creating new networks or programs, the nation could support efforts explicitly designed to connect and leverage existing networks for risk response and preparation. The story I’ve told here illustrates how existing relationships within and between vibrant communities of practice are an important part of the process of productively bringing science and decisionmaking together. New programs are much less effective in capitalizing on those relationships.

One way to support capacities that already exist would be to anticipate the need to distribute relief funds to existing networks. This idea could be loosely based on the Rapid Response Research Grants administered by the National Science Foundation, with a couple of important variations from its usual focus on supporting basic research. Agencies could come together to identify a range of planning processes supported by experts who work across communities of practice to ensure a direct connection to preparedness for future natural disasters of the same kind. These priority-setting exercises might build on the interagency discussions that occur as part of the federal Global Change Research Program. Also, since any such effort would require engagement between decisionmakers and scientists, recipients of this funding would be asked to report on the nature of additional, future engagement. What further engagement is required? Who are the critical actors, and are they adequately supported to play a role in resilience efforts? How are those networks increasing resilience over time? Gathering information about questions such as these is critical for the federal government to make science policy decisions that support a sustainable society.

Working toward a collective vision

The shift from reaction and response to preparedness seems like common sense, but as this story illustrates, it is complicated to achieve. One reaction to this story might be to replicate the technology in the sea-level rise tool or to apply the same or similar information sets elsewhere. The federal government has already begun such efforts, and this approach will supply people with better information.

Yet across the country, there are probably hundreds of similar decision tools developed by universities, nongovernmental organizations, and businesses that depict coastal flooding resulting from sea-level rise. The key difference in the development of the Sandy recovery tool was the intensive and protracted social process of discussing what information went into it and how it could be used. By connecting those discussions to existing planning processes, we reached different scales of government with different responsibilities and authority for reaching the overarching goal of developing more sustainable urban and coastal communities.

This story suggests that the role of science in helping society to better manage persistent environmental problems such as sea-level rise is not going to emerge from research programs isolated from the complex social and institutional settings of decisionmaking. Science policies aimed at achieving a more sustainable future must increasingly emphasize the complex and time-consuming social aspects of bringing scientific advance and decisionmaking into closer alignment.

Conservatism and Climate Science

It is not news to say that climate change has become the most protracted science and policy controversy of all time. If one dates the emergence of climate change as a top-tier public issue from the congressional hearings and media attention of the summer of 1988, shortly after which the UN Framework Convention on Climate Change was set in motion with virtually unanimous international participation, it is hard to think of another policy issue that has gone on for a generation with the arguments—and the policy strategy—essentially unchanged, as if stuck in a Groundhog Day loop, and with so little progress made relative to the stated goals and the scale of the problem. Even other areas of persistent scientific and policy controversy—such as chemical risk and genetically modified organisms—generally show some movement toward consensus or a policy equilibrium out of which progress is made.

There has always been ideological and interest group division about environmental issues, but the issue of climate change has become a matter of straight partisan division, with Republicans now almost unanimously hostile to the climate science community and opposed to all proposed greenhouse gas emissions regulation. Beyond climate, Republicans have become almost wholly disengaged from the entire domain of environmental issues.

This represents a new situation. Even amid contentious arguments in the past, major environmental legislation such as the Clean Air Act Amendments of 1990 passed with ample bipartisan majorities. Not only did the first Bush administration engage the issue of climate change in a serious way, but as recently as a decade ago leading Republicans, including two who became presidential nominees, were proposing active climate policies of various kinds (John McCain in the Senate and Gov. Mitt Romney in Massachusetts).

It is tempting to view this divide as another casualty of the deepening partisanship occurring almost across the board in recent years, which has seen formerly routine compromises over passing budgets become fights to the death. This kind of partisan polarization is fatal to policy change in almost every area, as the protracted fight over the Affordable Care Act shows.

Yet the increasing partisan divide about nearly everything should prompt more skepticism about a popular narrative said to explain conservative resistance to engaging climate change: that conservatives—or at least the Republican political class—have become “anti-science.” As a popular book title has it, there is a Republican “war” on science, but science has little to do with the partisan divisions over issues such as health care reform, education policy, labor rules, or tax rates. And if one wants to make the politicization of science primarily a matter of partisan calculation, a full balance sheet shows numerous instances of liberals—and Democratic administrations—disregarding solid scientific findings that contradict their policy preferences, or cutting funding for certain kinds of scientific research. Examples include the blanket opposition of many prominent liberals to genetically modified organisms and some childhood vaccines, or, to pick a narrower case, the way the U.S. Fish and Wildlife Service has ignored recommendations of its own science advisory board in endangered species controversies. A closer look at what drives liberal attitudes in some of these controversies reveals reasons similar or identical to those that make conservatives critical of policy-relevant science in climate and other domains—neither side is much persuaded by science that contradicts strongly held views about how politics and policies ought to be carried out. In other words, the ideological argument over science today merely replicates many of the other arguments between left and right, arguments grounded in long-standing philosophical premises and principles.

Drawing back to a longer time horizon, one discovers the counter-narrative reality that government funding for science research often grew faster under Republican than Democratic administrations. Ronald Reagan, for example, supported the large appropriation for the Superconducting Super Collider; Bill Clinton cancelled the project for fiscal reasons. George W. Bush committed the U.S. to joining the international ITER consortium to pursue fusion energy, but the new Democratic Congress of 2007-2008 refused to appropriate the U.S. pledge.

President Obama lent some credence to the popular narrative with the brief line in his first inaugural address that “We will restore science to its rightful place.” Rather than write off this comment as a partisan shot at the outgoing Bush administration, we should take up the implicit challenge of thinking anew about what is the “rightful” place of science in a democracy. So let me step back from climate for a moment to consider some of the serious reservations or criticisms conservatives have about science generally, and especially science combined with political power. My aim here is both to help provide a fresh understanding of the sources of the current impasse, and to suggest how the outline of a conservative climate policy might come into view—albeit a policy framework that would be unacceptably weak to the environmental establishment.

Modern science and its discontents

The conservative ambivalence or hostility toward the intersection of science and policy can be broken down into three interconnected parts: theoretical, practical, and political. I begin by taking a brief tour through these three dimensions, for they help explain why appeals to scientific authority or “consensus” are all but guaranteed to alienate conservatives and spur their opposition to most climate initiatives. At the root of many controversies today, going far beyond climate change, are starkly different perspectives between left and right about the nature and meaning of reason and the place of science.

From the earliest days of the scientific revolution dating back to the Enlightenment, conservatives (and many liberals, too) were skeptical of the claims of science to superior authority based on cracking the code of complete objectivity. Keep in mind that prior to the modern scientific revolution, “science” comprised both material and immaterial aspects of reality, which is why “natural philosophy” and “moral science” were regarded as equivalent branches of human knowledge. The special, or as we might nowadays say “privileged,” dignity of the physical or natural sciences, the view that only scientific knowledge is real knowledge, was unknown. Today science is the most powerful idea in modern life, and it does not easily accommodate or respect “nonscientific” perspectives. This collective confidence can be observed most starkly in the benign condescension with which the “hard” sciences regard social science and the humanities in most universities (and the almost pathetic fervor with which some social science fields seek to show that they really are as quantitative and thus inaccessible to non-expert understanding as physics).

Even if the once grand ambition of working out a theory of complete causation for everything is no longer seriously maintained by most scientists, the original claim of scientific pre-eminence, best expressed in Francis Bacon’s famous phrase about the use of science “for the relief of man’s estate”—that is, for the exercise of control over nature—remains firmly planted. And even if we doubt that scientific completeness can ever be achieved in the real world, the residual confidence in the scientific command and control of the behavior of matter nonetheless implies that the command and control of human behavior is the legitimate domain of science.

The scientific problem deepened with the rise of social science in the 19th century, and especially the idea that what is real in the world can be cleanly separated from our beliefs about how the world should be—the infamous fact-value distinction. The conservative objection to the fact-value distinction was based not merely on the depreciation of moral argument, but more on the implied insistence that the freedom of the human mind was a primitive idea to be overcome by science. B.F. Skinner’s crude behaviorism of 50 years ago has seen the beginnings of a revival in the current interest in neuroscience (and behavioral economics), which may also portend a much more sophisticated updating of the Skinnerian vision of therapeutic government. If we really do succeed in unlocking as never before the secrets of how brain activity influences behavior, moral sentiments, and even cognition itself, will the call for active modification against “anti-social” behavior be far behind?

But even well short of that old prospect, one of the most basic problems of social science, from a conservative point of view (though many liberals will acknowledge this point), is that despite its claims to scientific objectivity, it cannot escape a priori “value judgments” about which questions and desired outcomes are the most salient. This turns out to be the Achilles’ heel of all social science, which tries to conduct itself with the same confidence and sophistication as the physical sciences, but which in the end cannot escape the fact that its enterprise is indeed “social.” We can see this social dimension clearly at work in the “climate enterprise”—my shorthand term for the two sides, science and policy, of the climate change problem. The climate enterprise is the largest crossroads of physical and social science ever contemplated.

The social science side of climate policy vividly displays the problem of fundamental disagreement over “normative” questions. Although we can apply rigorous economic analysis to energy forecasts and emission control pathways, the arguments over proper discount rates and the relative weight of the tradeoff between economic growth and emissions constraint cannot be resolved objectively, that is to say, scientifically. Climate action advocates are right to press the issue of intergenerational equity, but, as with “sustainability,” a working definition or meaningful framework for guiding policy is nearly impossible to settle. The ferocious conflicts over assessment of proposed climate policy should serve as a healthy reminder that while the traditional physical sciences can tell us what is, they cannot tell us what to do.

This is only one of the reasons why the descent from the theoretical to the practical level leads conservatives to have doubts about the reach and ambition of supposedly science-grounded policies in just about every area, let alone climate change. In environmental science and policy, environmentalists like to emphasize the interconnectedness of everything, the crude popular version of which is the “butterfly effect,” where a butterfly beating its wings in Asia results in a hurricane in the Gulf of Mexico. Conservatives don’t disagree with the interconnectedness of things. Quite the opposite; the interconnectedness of phenomena is in many ways a core conservative insight, as any reader of Edmund Burke will perceive. But drawing from Burke, conservatives doubt that we can ever understand all the relevant linkages correctly or fully, and they are especially doubtful of policy responses that rest on combining centralized knowledge with centralized power. In its highest and most serious form, this skepticism flows not from the style of monkey-trial ignorance or superstition associated with Inherit the Wind, but from the cognitive or epistemological limits of human knowledge and action explored by philosophers such as Friedrich Hayek and Karl Popper (among others), a tradition that tells us knowledge is always partial, contingent, and subject to correction, all the more so as we move from the particular and local to the general and global.

Thus, the basic practical defect of scientific administration is the “synoptic fallacy” that we can command enough information and make decisions about resources and social phenomena effectively enough to achieve our initial goals. Conservative skepticism is less about science per se than about its claims to usefulness in the policy realm. This skepticism combines with the older liberal view—that is, the view that values individual freedom above all else—that the concentration of discretionary political power required for nearly all schemes of comprehensive social or economic management is a priori suspect. Today that older liberal view is the core of political conservatism. Put more simply or directly, the conservative distrust of authority based on claims of superior scientific knowledge reflects a distrust of the motives of those who make such claims, and thus a mistrust of the validity of the claims themselves.

This practical policy difficulty might be overcome or compromised, as has happened occasionally in the past, if it weren’t for how the politics of science currently fall out today. In a sentence, the American scientific community—or at least certain vocal parts of it—is susceptible to the charge that it has become an ideological faction. Put more directly, it seems many scientists have chosen partisan sides. Some scientists are quite open about their leftward orientation. In 2004, Harvard geneticist Richard Lewontin made a shocking admission in the New York Review of Books: “Most scientists are, at a minimum, liberals, although it is by no means obvious why this should be so. Despite the fact that all of the molecular biologists of my acquaintance are shareholders in or advisers to biotechnology firms, the chief political controversy in the scientific community seems to be whether it is wise to vote for Ralph Nader this time.” (With political judgment this bad, is it any wonder there might be doubts about the policy prescriptions of scientists?) MIT’s Kerry Emanuel, a Republican but as mainstream as they come in climate science (Al Gore referenced his work, and in one of his books Emanuel refers to Sen. James Inhofe as a “scientific illiterate” and climate skeptics as les refusards), offers this warning to his field: “Scientists are most effective when they provide sound, impartial advice, but their reputation for impartiality is severely compromised by the shocking lack of political diversity among American academics, who suffer from the kind of group-think that develops in cloistered cultures. Until this profound and well-documented intellectual homogeneity changes, scientists will be suspected of constituting a leftist think tank.”

This partisan tilt—real or exaggerated—among the scientific establishment aggravates a general problem that afflicts nearly all domains of policy these days, namely, the way in which policy is distorted by special interests and advocacy groups in the political process. Hence we end up with energy policies favoring politically connected insiders (such as federal loan guarantees for the now-bankrupt Solyndra solar technology company) or subsidizing technologies (currently wind, solar, and ethanol) that are radically defective or incommensurate with the scale of the climate problem they are intended to remedy. The loopholes, exceptions, and massive sector subsidies (especially to coal) of the Waxman-Markey cap-and-trade bill of 2009 rendered the bill a farce even on its own modest terms and should have appalled liberals and environmentalists as much as conservatives.

Here the political naiveté of scientists does their cause a disservice with everyone: the energy policy of both political parties since the first energy shocks of the 1970s has been essentially an exercise in special-interest favoritism and wishful thinking, with little coherence and even less long-term commitment to the kind of genuine energy innovation necessary to address prospective climate change at the extreme end of the long-run projections.

Is “conservative climate policy” an oxymoron?

To be sure, few if any Republican officeholders are able to articulate this outlook with deep intellectual coherence, but then neither are most liberals capable of expressing their zealous egalitarian sentiments with the rigor of, say, John Rawls’s A Theory of Justice. Nor should this excuse the near-complete Republican negligence on the whole range of environmental issues. But even if social psychologist Jonathan Haidt is correct (and I think he is) that liberals and conservatives emotionally perceive and respond to issues from deep-seated instincts rather than carefully reasoned dialectics, the divisions among us are susceptible to some rational understanding. Can the fundamental differences be harmonized or compromised?

The first point to grasp is that conservatives—or at least the currently dominant libertarian strain of the right—ironically have a more open-ended outlook toward the future than contemporary liberals. The point here is not to sneak in climate skepticism, but policy skepticism: the future is certain to unfold in unforeseen ways, with seemingly spontaneous and disruptive changes occurring outside the view or prior command of our political class. One current example is the fracking revolution in natural gas, which is significantly responsible for U.S. per capita carbon dioxide emissions falling to their lowest level in nearly 20 years. No one, including the gas industry itself, saw this coming even as recently as a decade ago. (And if the political class in Washington had seen it coming, it likely would have tried to stop it; many environmentalists remain deeply ambivalent about fracking.) A key point is that the fracking revolution occurred overwhelmingly in the absence of any national policy prescription. The bad news, from a conventional environmental point of view, is that the fracking revolution, now extending to oil, is just beginning. It has decades to run, in more and more places around the world. This means the age of oil and gas is a long way from being over, and that will be true even under a prospective regime of rising carbon taxes. (The story is likely to be much the same for coal.)

More broadly, however, it is not necessary to be any kind of climate skeptic to be highly critical of the narrow, dreamlike quality the entire issue took on from its earliest moments. Future historians are likely to regard as a great myopic mistake the collective decision to treat climate change as more or less a large version of traditional air pollution, to be attacked with the typical emissions control policies—sort of a global version of the Clean Air Act. Likewise, the diplomatic framework, a cross between arms control, trade liberalization, and the successful Montreal Protocol, was poorly suited to climate change and destined the Kyoto Protocol model to certain failure from the outset. If one were of a paranoid or conspiratorial state of mind, one might almost wonder whether the first Bush administration committed the U.S. to this framework precisely as a way of ensuring it would be self-defeating. (I doubt they were that clever or devious.) There have been a few lonely voices that have recognized these defects while still arguing in favor of action, such as Gwyn Prins of the London School of Economics and Steve Rayner of Oxford University. Two years before the failure of the 2009 Copenhagen talks, Prins and Rayner argued in Nature magazine that we should ditch the “top-down universalism” of the Kyoto approach in favor of a decentralized approach that resembles American federalism.

If ever there was an issue that required patient and fresh thinking, it was climate change 25 years ago. The modern world, especially the billions of people still striving to escape energy poverty, demands abundant amounts of cheap energy, and no amount of wishful thinking (or government subsidies or mandates) will change this. The right conceptual understanding of the problem is that we need large-scale low- and non-carbon energy sources that are cheaper than hydrocarbon energy. Unfortunately, no one knows how to do this. No one seems to know how to solve immigration, poor results from public education, or the problem of generating faster economic growth either, but we haven’t locked ourselves into a single policy framework that one must either be for or against in the way we have done for climate policy. Environmentalists and policymakers alike crave certainty about the policy results ahead of us, and an emphasis on innovation, even when stripped of the technological fetishes and wishful thinking that have plagued much of our energy R&D investments, cannot provide any degree of certainty about paths and rates of progress. But it was a fatally poor choice to emphasize, almost to the exclusion of any alternative, a policy framework based on making conventional hydrocarbon energy, upon which the world depends utterly for its well-being, more expensive and artificially scarce. This approach might make some emissions headway in rich industrial nations, although it hasn’t in most of them, but it won’t get far in the poorer nations of the world. Subsidizing expensive renewable energy is a self-defeating mug’s game, as many European nations are currently recognizing.

While we stumble along trying to find breakthrough energy technologies with a low likelihood of success in the near and intermediate term, a more primary conservative orientation comes into view. The best framework for addressing large-scale disruptions from any cause or combination of causes is building adaptive resiliency. Too often this concept gets reduced to the defeatist notion of building seawalls, moving north, and installing more air conditioners. But humankind faces disasters and chronic calamities of many kinds and causes; think of droughts, which through history have been a scourge of civilizations. Perhaps it is grandiose or simplistic to say that the whole human story is one of gradually increasing adaptive resiliency. On the other hand, what was the European exploration and settlement of North America but an exercise in adaptive resiliency? This opens into one of the chief conservative concerns over climate change and many other problems: the pessimism that becomes a self-fulfilling prophecy. As the British historian Thomas Macaulay wrote in 1830, “On what principle is it that, when we see nothing but improvement behind us, we are to expect nothing but deterioration before us?” The 20th century saw global civilization overcome two near-apocalyptic wars and numerous murderous regimes (not all entirely overcome today), and endure 40 years of brinksmanship that threatened nuclear holocaust on 30 minutes’ notice. To suggest that human beings cannot cope with slow-moving climate change is astonishingly pessimistic, and the relentless soundings of the apocalypse have done more to undermine public interest in the issue than the efforts of the skeptical community.

One caveat here is the specter of a sudden “tipping point” leading to a rapid shift in climate conditions, perhaps over a period of mere decades. To be sure, our capacity to respond to sudden tipping points is doubtful; consider the problematic reaction to the tipping point of September 11, 2001, or the geopolitical paroxysms induced by the tipping point reached in July 1945 in Alamogordo, New Mexico. The climate community would be correct to object that the open-ended and uncertain orientation I have sketched here would likely not be adequate for preparing for such a sudden change—but then again, neither was the Kyoto Protocol approach that they so avidly supported.

Right now the fallback position for a tipping point scenario is geoengineering, or solar radiation management. There might ironically be surprising agreement between environmentalists and conservatives over geoengineering, albeit for opposite reasons that illustrate the central division outlined above. Liberal environmentalists tend to dislike geoengineering proposals partly for plausible philosophical reasons—humans shouldn’t be experimenting with the globe’s atmospheric system any more than we already are—and partly because of their abiding dislike of the hydrocarbon energy that geoengineering would further enable. Environmentalists have compared geoengineering to providing methadone to a heroin addict, though the “oil addiction” metaphor, popular with both political parties (former oil man George W. Bush used it), is truly risible. We are also addicted to food, and to having a roof over our heads. But conservatives tend to be skeptical of or opposed to geoengineering for the epistemological reasons alluded to above: the uncertainties involved in a global-scale intervention are unlikely to be known well enough to assure a positive outcome. Geoengineering may yet emerge as a climate adaptation tool out of emergency necessity, but it will be over the strong misgivings of left and right alike. This shared hesitation might ironically make it possible for research on geoengineering to proceed with a lower level of distrust.

President Obama’s recent call for a new billion-dollar climate change fund aimed at research on adaptation and resiliency appears in general terms close to what I hint at here. Whether a billion dollars is a suitable amount (it seems, rather, the opening bid for any spending initiative in Washington these days), or whether the fund would be spent sensibly rather than politically, is an important but second-order question.

The final difference between liberals and conservatives over climate change that is essential to grasp is wholly political, in both the high and low senses of the term. Some prominent environmentalists, and fellow travelers like New York Times columnist Thomas Friedman, periodically express open admiration for authoritarian power to resolve climate change and other problems to which democratic governments are proving resistant precisely because of their responsiveness to public opinion—what used to be understood and celebrated as “consent of the governed.” A few environmental advocates have gone as far as to say that democracy itself should be sacrificed to the urgency of solving the climate crisis, apparently oblivious to the fact that appeals to necessity in the face of external threats have been the tyrant’s primary self-justification since the beginning of conscious human politics, and seldom end well for tyrant and people alike. For example, Mayer Hillman, a senior fellow at Britain’s Policy Studies Institute and author of How We Can Save the Planet, told a reporter some time back that “When the chips are down I think democracy is a less important goal than is the protection of the planet from the death of life, the end of life on it. This [resource rationing] has got to be imposed on people whether they like it or not.” Similar sentiments are found in the book The Climate Change Challenge and the Failure of Democracy by Australians David Shearman and Joseph Wayne Smith. One of the authors (Shearman) argued that “Liberal democracy is sweet and addictive and indeed in the most extreme case, the USA, unbridled individual liberty overwhelms many of the collective needs of the citizens… There must be open minds to look critically at liberal democracy. Reform must involve the adoption of structures to act quickly regardless of some perceived liberties.”

I can think of no species of argument more certain to provoke enthusiasm for Second Amendment rights than this. The unfortunate drift toward anti-democratic authoritarianism flows partly from frustration but also from the success the environmental community has enjoyed through litigation and a regulatory process that often skirts democratic accountability—sometimes with decent reason, sometimes not. But this kind of aggrandized hallucination of the virtues of power will prove debilitating as the scope and scale of an environmental problem like climate change enlarges. I can appreciate that many climate action advocates will find much of what I’ve said here to be inadequate, but above all, liberals and environmentalists would do well to take on board the categorical imperative of climate policy from a conservative point of view: whatever policies are developed, they must be compatible with individual liberty and democratic institutions, and cannot rely on coercive or unaccountable bureaucratic administration.

A Survival Plan for the Wild Cyborg

In order to stay human in the current intimate technological revolution, we must become high-tech people with quirky characters. Here are seven theses to nail to the door of our technological church.

Today, the most exciting discoveries and technological developments have to do with us humans. Technology settles itself rapidly around and within us; collects more and more data about us; and increasingly is able to simulate human appearance and behavior. As our relationship with technology becomes more and more intimate, we are becoming techno-people, or cyborgs. On the one hand, intimate technology offers opportunities for personal development and more control over our lives. On the other hand, governments, businesses, and other citizens may also deploy intimate technologies in order to influence or even coerce us. To put this development on the public and political agenda, the Rathenau Instituut in the Netherlands has coined the term “intimate-technological revolution,” a revolution driven in part by smartphones, social media, sensor networks, robotics, virtual worlds, and big data analysis. We describe this revolution in our report Intimate Technology: The Battle for Our Body and Behavior.

The fact that our selves are becoming increasingly intertwined with technology is illustrated by the ever-shrinking computer: from desktop to laptop to tablet to mobile phone, and soon to e-glasses, and possibly in the long term to contact lenses. This shift from table to lap, from hand to nose and even eye, shows how technology creeps into us. For the time being, the demarcation line typically remains just outside the body, but a variety of implantable devices—for example, cochlear implants for the deaf and deep brain stimulation electrodes for treating Parkinson’s disease and severely depressed patients—are already positioned inside it.

Through our smartphone, smart shoes, sports watches, and life-logging cameras, we constantly inform the outside world about ourselves: obviously where we are through global positioning systems, but also what we are thinking and doing through social media. Once considered entirely private, that information is now accessible for literally the whole world to know. To some extent we now maintain our most intimate relationships by digital means. Social media are enabling new forms of relationships, from long-term and stable to short and volatile. And then there are phone apps that help us try to achieve our good intentions, such as exercising more or eating fewer sweets. They behave like compassionate but strict coaches, monitoring our metabolism and massaging our psyche.

The convergence of nanotechnology, biotechnology, information technology, and cognitive science increasingly turns biology into technology, and technology into biology. The convergence takes on three concrete forms. First, we are more and more like machines, and can thus be taken apart for maintenance and repair work and can perhaps even be upgraded or otherwise improved. Second, our interactions with one another are changing, precisely because machines are increasingly nestling into our private and social lives. And third, machines are becoming more and more humanlike, or at least engineers do their best to build in human traits, so that these machines seem to be social and emotional, and perhaps even moral and loving.

This development raises several fundamental questions: How close and intimate can technology become? At what point is technology still nicely intimate, and when does it become intimidating? Where do we have to set boundaries?

Mechanistic views of nature and mind have existed for centuries, but only recently has technology actually gained much control over our bodies and minds. Our hips and knees are now replaceable parts. Deafness, balance disorders, depression, anxiety, trauma, heart irregularities, and innumerable other maladies have become, through the use of implants and pills, machine maintenance and performance problems. And now we begin to move to never-before-seen performance levels, such as the eyeborg, the implant used by the colorblind artist Neil Harbisson to transform every hue into audible sound, with the result that he now hears colors, even the infrared and ultraviolet normally invisible to us.

The idea of intimacy used to pertain to matters of our body and mind that we would share only with people who were close to us: our immediate family members and true friends. We shared our personal intimacies by talking face to face, later remotely by writing letters, and then by telephone. The increasing role of technology in broadcasting information destabilizes this traditional and simple definition of what is “intimate.” On the social network Lulu, female students share their experiences about their ex-boyfriends; the geo-social network Foursquare allows users to announce their exact location in real time to all of their friends.

Consumer apps are coming that will recognize faces, analyze emotions, and link these data to our LinkedIn and Facebook profiles. When wearing Google Glass, we ourselves will be as transparent as glass, for other computer eyeglass wearers will be able to see who we are, what we do, who our friends are, and how we feel. Information about other people will be omnipresent, and so will information about other items in our environment. Within a certain radius of Starbucks outlets, computer eyeglass wearers will receive alerts about the specialty coffee of the week, or tea if that’s their preference.

In other intimate interactions, technology has a growing intensity. Equipment has been produced that can enable parents at home to “tele-hug” a premature newborn in a hospital incubator. One in 10 love relationships now starts through online dating, and for casual sexual encounters, there are Web sites such as Second Love. On the aggressive side of the intimate interaction spectrum are the military drones used by U.S. forces that, having identified and tracked you, can now kill you.

Applications and devices fulfill an increasing number of roles that had traditionally been reserved for human beings. E-coaches encourage us to do more exercise, to conserve energy, or not to be too aggressive while e-mailing. Marketing psychologists no longer directly observe how people respond to advertising, but rather use emotion-recognition software, which is less expensive and more accurate. My son plays soccer not only against his friends, but also against digital heroes like Messi and Van Persie. Digital characters can appear very humanlike. When you kill someone in the latest-generation first-person shooter games (where you view the onscreen action as if through the eyes of a person with a gun), the suffering of the avatar that you’ve just offed is so palpable that you can feel genuine remorse. Meanwhile, the international children’s aid organization Terre des Hommes put Webcam child sex tourism on the public and political agenda in many Western countries by using an online avatar named Sweetie, a virtual 10-year-old girl, to ensnare more than 1,000 online pedophiles in 65 countries. And then there’s Roxxxy, the female-shaped sex robot with a throbbing heart and five adjustable behavioral styles. And for those who find that too impersonal, there is remote coitus with one’s own beloved, using synthetic genitals that are connected online.

Our devices are also gaining more autonomy. Perhaps they will soon demand it? If we neatly time and plan our meetings in our calendar, Google Now automatically searches available travel routes and alerts us when the departure time is approaching. We are still waiting for the digital assistant who dutifully worries and asks us whether we are not too tired for such a late appointment, but an Outlook calendar on an iPhone can already warn us that we have a very busy day ahead. Real driverless cars have already traveled thousands of kilometers on public roads in California and in Berlin’s city center. Eventually the U.S. military wants to build drones that can independently make the decision to kill.

We are the new resource

These changes mark a new step in the information revolution, in which information technology is emerging as intimate technology. Whereas the raw materials of the Industrial Revolution were cotton, coal, and iron ore, the raw material of the intimate technological revolution is us. Our bodies, thoughts, feelings, preferences, conversations, and whereabouts are the inputs for intimate technology.

When the Industrial Revolution steamrolled over England and then throughout Europe and the United States, it created enormous havoc with two factors of production: labor and land. The result was social and political paroxysms accompanied by enormous cost and suffering. Those of us who now enjoy the prosperity and material comforts made possible through industrial transformation might judge the pain a reasonable sacrifice for the benefits, but we would be wise to remember how much pain there was and how we might be affected by a similarly convulsive transformation. Now that the Information Revolution thunders even faster over us, keep in mind that we and our children are the most important production factor, that our intimate body and mind are the raw material for new enterprises and capital. And as with the old Industrial Revolution, this one can destabilize the institutions and social arrangements that hold our world together. At stake here are the core attributes of our intimate world, on which our social, political, and economic worlds are built: our individual freedom, our trust in one another, our capacity for good judgment, our ability to choose what we want to focus our attention on. Unless we want to discover what a world without those intimate attributes is going to be like, it is vital that we develop the moral principles to steer the new intimate technological revolution, to lead it in humane ways and divert it from dehumanizing abuses. That is our moral responsibility.

The insight that a technological revolution is turning our intimate lives inside out in ways that demand a moral response is not yet common. But unless such awareness grows quickly, there will be no debate and no policy. And without debate and policy, we the people are at the mercy of the whims and visions not only of the technology creators, the profit makers, and government and security services, but also of the emergent logic of the technological systems themselves, which may have little to do with what their creators intend.

To start this very necessary and overdue debate, I suggest this proposition: Let us accept that we are becoming cyborgs and welcome cyborgian developments that can give us more control over our own lives. But acceptance of a cyborg future does not equal blind embrace. Thoughtlessly embracing all current developments will turn us into good-natured high-tech puppets, apparently happy as we pursue our perfect selves but gradually losing our autonomy. This is the path to a world made not for the difficult strivings of democracy and civil society, but for the perfectly efficient functioning of the marketplace and the security state.

Seven ways to become wild cyborgs

Children and adults need to retain a healthy degree of wildness, cockiness, playfulness, and sometimes annoying idiosyncrasy. We should aspire to be wild cyborgs. The challenge will be to apply intimate technology in such a way that we become human cyborgs. I propose that we adhere to the following seven theses as a guide to our interactions with technology.

1 Without privacy we are nothing. Our data should therefore belong to us. Without privacy we cannot be free, because we cannot choose to act without our choices and our actions being known and thus subject to unseen influence and reaction. Data about our actions and decisions are continuously captured and funneled by commercial companies, state authorities, and fellow citizens. The large data owners say this is harmless, and many users just parrot those words because they “have nothing to hide.” But if that is true, why do these same people lock their front doors and decline to talk publicly about their credit card security codes? Too much privacy has been lost in recent years. Many people have unwittingly donated their social data to big companies in return for social media services. It is high time that we swap our childlike acceptance of our loss of privacy and autonomy for a strong adult resistance. That implies consciously dealing with the ownership of our personal data, because they are of great economic, personal, and public value.

Over the coming years, the way we deal with our biological data will provide the litmus test for whether we can keep the concept of privacy alive and ensure that our physical and mental integrity are safeguarded. The first signs are discouraging. Millions of people have already started to hand out their biological data for free to all kinds of companies. Via sensors built into consumer products such as smartphones and exercise monitors, massive amounts of biological data can be collected: fingerprints (for example, to unlock your iPhone 5s), heart rate, emotions, sleep patterns, and sexual activity. The Advanced Telecom Research Institute (ATR) has shown that wristbands with accelerometers can be used to track more than a hundred specific actions performed by nurses, for example, such as washing hands or giving an injection. Data on how we walk, collected by smart shoes, can be used to identify us, track our health, and even reveal early signs of dementia. We should rapidly become aware of the richness and potential sensitivity of our biological data, particularly in combination with social data. State security services are very interested in that type of information, and so are companies that want to market their products to you or to make decisions about your eligibility for credit, employment, or insurance. The way we handle the privacy of ourselves and others over the next five years will be decisive for how much privacy future generations will have. We should realize that in giving up our privacy we give up our freedom.

2 We must be aware of who is presenting information to us and why. Freedom of choice has always been a central value of both the market economy and democracy. Personalization of the supply of information is putting our online freedom of choice—and we are always online—under pressure. With every click or search, we donate to the Internet service providers information about who we are and what we do. That type of information is used to build up individual user profiles, which in turn allow the providers to continually improve their ability to persuade us to do what is in their commercial or political interest and to tailor such persuasive power for each individual. What makes such propaganda and advertising different from what we have faced up to now is that it is ubiquitous and often invisible. It can also be covertly prescriptive, pushing us to make certain choices, for example with devices that are getting better and better at mimicking human speech, faces, and behaviors to seduce and fool us. For example, psychology experiments suggest that we are particularly open to persuasion by people who look like us. Digital images of one’s own face can now be mixed, or “morphed,” with a second face from an online advertisement in ways that are not consciously discernible but still increase one’s susceptibility to persuasion. So to protect our freedom of choice we have to be aware of the interests at stake and who benefits when we make the choices we are encouraged to make. We should therefore demand that the organizations behind the devices be transparent about the way our information supply is programmed and how the software and interfaces are being used to influence us. Precedent for how to organize this might be found in health care regulations, which require that medicines be accompanied by information on side effects and that doctors base their actions on the informed consent of their patients. Maybe every app should also have an online information sheet that addresses questions such as: How is this piece of software trying to influence its users? Which algorithms are used, and how are they supposed to work?

3 We must be alert to the right of every person to freely make choices about their lives and ambitions. Individualism forms the foundation of our liberal democratic societies. So to a large extent, it should be up to individuals to choose how to employ intimate technologies in pursuing their aspirations. This position is strongly advocated by groups such as transhumanists, bio-hackers, and adherents of the “quantified self” movement, who promote self-emancipation through technology. But there is no such thing as self-realization untouched by mass media, the market, public opinion, and science and technology. What image of our self are we trying to become, and where does that image come from? Many markets thrive on a popular culture that challenges normal people to become perfect, whatever that means.

We are losing the ability to just be ourselves. As more technical means become available to enhance our outward appearance and physical and mental performance—our wrinkleless skin, our rippling abs, our flamboyant sex life, our laserlike concentration—firms will pursue more effective ways to seduce us to strive for a perfection that they define. Having agreed to let marketers tell us how to dress and wear our hair, are we content to have them define how we ought to shape our bodies and minds? Are we really realizing ourselves if we strive to become “perfect” in the image created by marketers? We need to protect the right to be simultaneously very special and very common, without which we may lose the capacity to accept ourselves and others for what we are. We should cherish our human ambition to strive for our own version of perfection, and also nourish ways to accept our human imperfections.

4 The acts of loving, parenting, caring for, and killing must remain the strict monopoly of real people. The history of industrial advance has also been a history of machine labor replacing human labor. This history has often been to our benefit, as drudgery and danger have been shifted from humans to machines. But as machines acquire more and more human characteristics, both physical (such as realistic avatars and robots) and mental (such as social and emotional skills), we must collectively start addressing the question of whether all the kinds of human activities that could be outsourced to machines should be outsourced. I believe we should not outsource to machines certain essential human actions, such as killing, marriage, love, and care for children and the sick. Doing so might provide wonderful examples of human ingenuity, but also the perfect formula for our dehumanization, and thus for a future of loneliness. Autonomous drone killing might be possible someday. But because a machine can never be held accountable, we should ensure that decisions on life and death are always made by a human being. As humans we are shaped in our intimate relationships with other humans. For example, caring for others helps us to grow by teaching us empathy for those who need care and the value of sacrifice in our lives. If we start to outsource caring to technology on a large scale, we run the danger of losing a large part of the best of our humanity.

5 We need to keep our social and emotional skills at a high level. Use it or lose it. We all know that if we don’t exercise our physical body we will lose strength and stamina. This is also true for our social and emotional skills, which are developed and maintained through interaction with other people. We are now entering a stage in which technology is taking on a more active role in the way we interact, measuring our emotions and giving us advice about how to communicate with others. In her book Alone Together: Why We Expect More from Technology and Less from Each Other, Sherry Turkle, who for decades has been studying the relationships between people and technologies, argues that the frequent use of information technology by young people is already lessening their social skills. Her fear is that our expectations of other people will gradually decrease, as will our need for true friendship and physical encounters with fellow humans.

There are plenty of signs that we should at least take her warnings seriously. What is at stake is our ability to trust our fellow humans. Our belief that someone is reliable, good, and capable is at the core of the most rewarding relationships we have with another human being. Technology can easily undermine our trust in people; think about emotion meters that check the “true” feelings of your partner, or life-logging technology to check whether what someone is telling you is really true. To stay human we have to keep our social and emotional skills, including our ability to have trust in people, at a high level. If we don’t do that, we run the risk that face-to-face communication may become too intimate an adventure and that our trust in other people will be defined and determined by technology.

6 We have the right not to be measured, analyzed, and coached. There is great value in learning things the hard way, by trial and error. In order to gain new perspectives on life, people need the opportunity to make their own, sometimes stupid and painful, mistakes. New information technologies, from smart toothbrushes and Facebook to digital child dossiers and location-tracking apps, provide ample opportunities for parents to track the behavior and whereabouts of their children. But by doing so, they deprive their children of the freedom that helps them to develop into independent adults, with all the ups and downs that go with it. Can a child develop in a healthy moral and psychological way if she knows she is continuously spied upon? Does the digital storage of all our “failures” endanger the right we must have to make mistakes? The ability to wipe the slate clean, to forgive ourselves and to be forgiven, to learn and move on, is an important condition for our emotional, intellectual, and moral development. The digital age forces us to ask ourselves how to ensure that we preserve the capacity to forget and to be forgiven.

The more general question is whether it will remain possible to stay out of the cybernetic loop of being continuously measured, analyzed, evaluated, and confronted with feedback. Driven by technology and legitimated by fear of terrorism, the reach of the surveillance state has expanded tremendously over the past decade. At the same time, a big-data business culture has developed in which industry takes for granted, in the name of efficiency and customer convenience, that people can be treated as data resources. This culture flourishes in the virtual world, where Internet service providers and game developers have grown accustomed to following every user’s real-time Web behavior. And as on the Internet, shopkeepers can monitor the behavior of customers in their physical shops through Wi-Fi tracking. Samsung is monitoring our viewing habits via its smart televisions. And if we start to use computer glasses, Samsung and Google may even monitor whom and what we glance at.

The state surveils its citizens, companies surveil their customers, citizens surveil each other, and parents and schools use every means available to surveil children. Such a surveillance society is built on fear and mistrust and treats people as objects that can and must be controlled. To safeguard our autonomy and freedom of choice, we should strive for the right not to be measured, analyzed, or coached.

7 We must nurture our most precious possession, our focus of attention. Economics tells us that as human attention becomes an increasingly scarce commodity, the commercial battle for our attention will continue to intensify. Today, “real time is the new prime time,” as we incessantly check our email and texts and equally incessantly send out data for others to check. Many new communications media divert our attention away from everyday reality and toward a commercial environment in which each content provider attempts to optimally monopolize our focus. On the Internet, we all have become familiar with commercial ads that are tailored to our preferences. In the near future, smartphones, watches, eyewear, businesses, and a growing circle of digital contacts will each demand more and more of our attention during everyday activities such as shopping, cooking, or running on the beach. And since attention is a scarce resource, paying attention to one thing will come at the expense of our attention to other things. Descartes articulated our essence, “I think therefore I am,” and the digital age forces us to protect our freedom from continual intrusion and interruption, to guard our own unpolluted thoughts, our capacity to reflect on things in our own way, because that is what we really are. We must cherish what is perhaps our most precious possession, the determinant of our individual identities: our ability to decide what to think about or just to daydream.

The intimate technological revolution will remake us by using as raw material data on our metabolism, our communications, our whereabouts, and our preferences. It will provide many wonderful opportunities for personal and social development. Think of serious games for overcoming the fear of flying, treating schizophrenia, or reducing our energy consumption. But the hybridization of ourselves and our technologies, and the political and economic struggle around this process, threaten to destabilize some qualities of our intimate lives that are also among the core foundations of our civil and moral society: freedom, trust, empathy, forgiveness, forgetting, attention. Perhaps there will be a future world where these qualities are not so important, but it will be unlike our world, and from the perspective of our world it is hard to see what might be left of our humanity. I offer the above seven propositions as a good starting point to further discuss and develop the wisdom that we will need to stay human by becoming wild cyborgs in the 21st century.


Rinie van Est () is coordinator of technology assessment at the Rathenau Instituut in the Netherlands.

The Politics behind China’s Quest for Nobel Prizes

JUNBO YU

China is applying its strategy for winning Olympic gold to science policy. It may be surprised by the outcomes—but overall, the world will benefit.

Skeptics about the capacity of China to join the ranks of the industrialized nations should be challenged by the recent rise of Chinese high-tech businesses, including the high-speed train industry, telecommunications equipment providers such as Huawei, IT providers such as Lenovo, market-leading solar energy equipment suppliers such as Suntech, and the competitive success and admiration, even fear, that these businesses have spurred across the world. Yet skepticism is not entirely unwarranted. Some inconvenient truths about science and technology development in China stand in the way of its ambitions. Most prominent among these, as noted by Xuesen Qian, the “Father of Chinese Rocketry” (who was educated at MIT and Caltech and returned to China in 1955), is the failure of Chinese universities and research institutes to cultivate world-class creativity and innovation among their scientists. To Chinese leaders, an increasingly aggravating illustration of this truth is that no homegrown scientist from the mainland has claimed a Nobel Prize in Physics, Chemistry, or Medicine.

The Chinese Communist Party (CCP) that today governs the world’s second largest economy and commands its second largest R&D budget is determined to correct this failure. In October 2013, the CCP Organizational Department identified six scientists as China’s “outstanding talents,” the top tier of the “Ten Thousand Talents Program” and the most likely candidates for a Nobel Prize. Scientists who achieve this rarified level of recognition will benefit from greater autonomy in setting their research agendas, secure research funding to be used at their discretion, and administration and assessment under terms negotiated directly between the government and the scientists. These seemingly ideal privileges are part of the larger effort to promote China’s overall innovation capacity. But only one goal of this program stands out as explicit and measurable: the Nobel Prize.

The state-driven charge toward the Nobel Prize is unprecedented and unparalleled in science policy. Today’s fierce competition among countries for technological advantage, reflected in a diversity of national science, technology, and innovation (STI) policies, has become a bit imprudent and extravagant—national governments are overconfident in their bets on tomorrow’s revolutionary technologies, while the cost/benefit effects of their tremendous inputs are of less concern. But no other country uses the Nobel Prize to anchor the success of a national innovation strategy. Such a strategy appears unbalanced, short-sighted, and utterly antithetical to the principle that creativity and innovation in scientific research must be driven by the curiosity of the scientist. In short, this narrow, nationalistic idea appears to say more about CCP politics than about STI policy. What then are the politics?

The continuing quest for legitimacy

Rulers of authoritarian countries have to justify their legitimacy, and the CCP is no exception. Its legitimacy has emerged from several historic sources: first, from achieving peace, unity, and freedom from exploitation by Western colonialists in 1949; then from Mao’s personal charisma amid the Great Famine and the Cultural Revolution; and most recently from delivering rapid GDP growth since the “Reform and Opening Up” of the economy initiated in the late 1970s.

But the Party has pursued other, less apparent or understood strategies for strengthening its legitimacy. One of these is to close the technological gap between China and the West.

Since China’s defeat in the First Opium War in 1840, every Chinese regime has suffered military disadvantage due to inferior technological capacity. Constantly bullied and intimidated by various foreign forces, both the elites and the general public have been obliged to pursue effective measures to catch up technologically and thus to improve national security. The “Self-Strengthening Movement” undertaken by the Qing Empire from 1861 to 1895 was the first state effort at technological catch-up, with measures ranging from the financing of advanced public education to the establishment of modern arsenals. The eruption of the Sino-Japanese War (1894–1895) terminated this attempt; indeed, the government’s inability to deliver effective technological catch-up partly explains its failure in the war, and drastically intensified the legitimacy crisis of the empire.

The problem of legitimacy was further amplified by the painful war to resist the Japanese invasion between 1931 and 1945. Among Chinese domestically and overseas, a deep desire for national security and technological superiority was growing. The CCP cleverly and effectively took advantage of the combination of widespread feelings of inferiority and the nation’s Cold War frontier position to justify the state’s priority of developing the defense industry at all costs. The atomic bomb, the hydrogen bomb, the intercontinental ballistic missile, the nuclear submarine, the human-made satellite: All of these achievements were tied to tensions between China and the United States or the former Soviet Union, or promoted to mitigate the catastrophe of miscalculated domestic policies, such as the Great Leap Forward or the Cultural Revolution. In retrospect, although the outcomes of the CCP’s major domestic policies between 1949 and 1978 were devastating, even to its own legitimacy, the conspicuous development of China’s defense industry played a crucial yet overlooked role in preserving the CCP’s legitimacy by tapping into the public desire for independence, national security, and technology catch-up.

The end of the Cold War and the diplomatic and economic opening-up begun by Deng Xiaoping, however, have increasingly obliged the CCP to find a new way to play the catch-up card. This is first because China’s national security situation has clearly improved with the collapse of the Soviet Union and the establishment of broad mutual economic interests through trade with most of its neighbors and with the United States in particular. Today, despite years of nationalist propaganda about the threat of Taiwanese and Tibetan separatism, territorial disputes with East Asian neighbors, and the U.S. strategy of containment, very few Chinese would consider their national sovereignty to be in danger. Second, as the new “World’s Factory,” China’s relatively low-skilled production model and its dependence on imported technology across various industries have stimulated severe public criticism of the government’s underinvestment in nondefense R&D as well as in science and engineering education. The legitimacy conferred by technological catch-up today resides more in the success of economic competitiveness than in growing military strength.

Proving itself to be pragmatic and adaptable, in the early 1980s the CCP launched a series of concerted and escalating policies aimed at convincing the public of its commitment to the pursuit of cutting-edge technological and innovation capabilities. At the input end, the impact of these policies has been extraordinary: A recent study shows that China’s R&D expenditures as a percentage of GDP have increased from below 0.6% in 1982 to nearly 2% in 2013. Assuming a continuation of China’s 18% annual growth rate in R&D spending since 2000, it is now on track to overtake the United States in total (public and private) R&D spending by 2022. Government R&D funds, in particular, have been growing at a pace exceeding the expectations of even the most enthusiastic scientists, and China now not only holds the largest pool of science and engineering researchers in the world, but has triggered a reverse brain drain by luring back talent who studied and worked abroad with the promise of government research largesse.

The landscape at the output end, however, remains genuinely disappointing: Journal publications by Chinese researchers included in the Science Citation Index (SCI) grew from a negligible share in 2001 to 9.5% in 2011, second in the world to the United States, but with few highly cited articles, indicating trivial creative value and low impact on the R&D enterprise. Chinese firms still depend on foreign sources for their core technologies, paying tens of billions of dollars each year to purchase overseas intellectual property. Massive research spending has not led to transformative innovations and products that can be truly commercialized; there is little potential for fostering a Chinese equivalent to Apple. This contrast between inputs and outputs not only makes it clear to the CCP that developing technological and innovation capacity requires more than the mere accumulation of research capital and labor, but also pressures the Party to seek more effective approaches to technological catch-up in its ongoing effort to defend its legitimacy.

Thus, the state is now seeking to replicate what it has learned in another domain of catch-up: Olympic sports. Here, the CCP has impressed the public and bolstered its legitimacy through policies that have moved Chinese athletes into the leading rank of Olympic gold medalists. But have the right lessons been learned? In the sports case, the Party succeeded at the Olympic level despite its refusal to relinquish centralized control over the organization of athletic activities, a failing that has inhibited the development of a truly popular and professional athleticism. And as with sports, the conditions necessary to cultivate scientific and technological creativity and innovation have some inherent conflicts with authority and centralized power that the CCP’s legacy can hardly accommodate. Nonetheless, the successful pursuit of Olympic gold has had a significant impact on the Party’s public image and legitimacy that the CCP believes will apply in the science realm as well. Given the success of what is often referred to as the CCP’s “Olympic strategy,” why not develop a comparable “Nobel Prize strategy”?

Ambivalence about scientists

The fundamental logic of using Nobel Prizes to measure national scientific ability and catalyze innovative creativity is untenable, because it inappropriately equates the exceptional abilities of a very few individuals with the nation’s scientific capacity, while confusing scientific discovery with technological innovation. This strategy will ultimately run into trouble, regardless of whether Chinese scientists selected by the state can claim a couple of Nobels. Yet even if this bold charge toward Nobel Prizes fails to achieve its desired impacts, it may well lead to unintended consequences that will have profound ramifications at the national and international levels.

Unexpected changes are likely to show up first in the relationship between leading scientists and the Party. Historically, the CCP has maintained firm control over Chinese scientists, not only because they are indispensable to improving legitimacy through efforts such as technology catch-up, but also because scientists have a professional inclination toward intellectual freedom, which presents a perennial threat to authoritarian rulers. In light of this tension, the talents of the Chinese science community have best been mobilized when scientists are concentrated in “bunkers,” or megaprojects such as the space program, as endorsed by the Party-state. But scientists are ruthlessly crushed as soon as they engage in political opposition, as occurred throughout Mao’s regime and again after the Tiananmen Square protest of 1989.

Although the CCP has become more tolerant of Western concepts such as transparency, accountability, and public participation in politics, as well as the push for social-political reform from scientists, the Party retains strict control over the careers and resources of scientists by handpicking their leaders and manipulating the allocation of research funds. The result of this state control over the science community unfolds in typical fashion: Instead of focusing on research and building strong links to other scientists and researchers for collaboration, Chinese scientists have developed a strategic culture that focuses on developing vertical networks with powerful bureaucrats. It is an open secret in China that to obtain major grants, doing good research is much less important than holding a chief administrative position or schmoozing with experts enlisted as funding committee members. The discretion of CCP officials outweighs peer review in assessing the significance of research outcomes, so making alliances with bureaucrats is necessary for scientists. This culture of strategic network-building is so pervasive that even returnees from abroad are seldom exempted and quickly become part of it.

As this research culture and the logic of state control that underpins it act to increasingly stifle creativity, corrupt scientific ethics, and squander public money, the Party has begun to realize that it is endangering part of its own legitimacy by compromising China’s potential for continued technological catch-up, including its desire for Nobel Prizes. But allowing self-governance within the scientific community could undermine the Party’s monopoly of power, and so must be denied. Thus the Party harbors deep ambivalence toward independent scientific research, experiencing both desire and fear. The result has been a compromise of principles implemented through the “Ten Thousand Talents Program.” “Outstanding talents” will be given secure funding and expanded autonomy to develop research designs, operations, and evaluations, but the list of these “outstanding talents” is handpicked by the Party’s Organizational Department. This is not to say that Party officials will be the only ones to assess scientific talent and promise; nonetheless, the political reliability of a scientist will be an additional prerequisite for the reward, and the achievements of the scientist thereafter will entail a debt to the Party. Apparently, the CCP believes it has found a way to resolve its ambivalence: speed up the national drive toward Nobel Prizes while reserving the Party’s claim to these prizes, and excite elite scientists with extra freedom while retaining de facto control, given their small numbers and manageable monitoring costs.

Appointing national champions and making them heroes and role models for the public are strategies often used by authoritarian states to mobilize political campaigns. However, on many occasions, such as the Cold War defections of Soviet and Eastern Bloc celebrities, including artists, athletes, and scientists, they end up destabilizing the regime by creating and escalating tensions between the state’s will and the liberal reasoning of heroic individuals. Indeed, this is one lesson from the experience of sports, as the new generation of Chinese stars, such as Li Na (tennis), Yao Ming (basketball), and Liu Xiang (track and field), no longer shy away from expressing their reservations about the Party’s obsession with gold medals and openly criticize how state intervention has distorted genuine athletics. The push for gold at the Olympic Games, and the drive for Nobel Prizes, may well work against the authoritarian state in the end. Superiority on level playing fields among nations cannot be sustained without openness, and once openness begins, however cautiously at first, it threatens the authoritarian state’s monopoly on information and may eventually become irreversible.

In reality, the Chinese scientists selected as “outstanding talents” will have to be given additional autonomy so that their research can fully conform to international research norms and can be recognized in the competition for Nobel Prizes. As their integration into the international science community increases, with or without Nobel Prizes, these scientists and their domestic collaborators, assistants, and students will find themselves pushing back against and likely rejecting the norms of the Chinese research culture. Through their actions and activities if not their voices, these scientists will become a force for reform. Although the Party’s Nobel strategy for chasing legitimacy and national pride may thus serve its purpose in the short term, in the long run it must inevitably threaten the Party’s ability to maintain control over public discourse and thus over civil society.

Who would benefit?

China’s political commitment to science and innovation serves a political purpose in America, too. The threat of Chinese competition on the advanced technological front, which I highlighted at the beginning of this essay, is often invoked by U.S. policy, scientific, and business leaders as a reason to increase government spending on research and innovation, to bolster immigration policies that encourage foreign-born scientists to stay in the United States, and so on. In a 2011 Wall Street Journal op-ed, Microsoft’s former chief operating officer went so far as to suggest that the United States was more like a developing country than China, if judged by the latter’s robust investments in its emerging scientific and innovation prowess.

Here my point is different. U.S. policymakers should welcome and support the Party’s dash for Nobel Prizes. In betting on Nobels, the Party faces a critical tradeoff between legitimacy and authority. If the Party hopes to enhance its legitimacy through the reputational benefits of future Nobel laureates, it must first unlock the potential of Chinese scientists by allowing more autonomy and reducing interventions, which in turn will inevitably diminish the Party’s authority. The CCP’s temporary solution seems to be autonomy preconditioned on professed loyalty and confined to a limited number of scientists. However, the more these leading Chinese scientists engage with the international science community, the more needs, resources, and strength they can mobilize to overcome the institutional weaknesses of China’s research culture and push for domestic political reform. Such processes will oblige the state to become more accountable to its public, thus empowering and stabilizing a transition away from authoritarianism. This is not to say that the CCP won’t use the outcome of its Nobel campaign to boost its image or nationalist pride. But to the extent that U.S. politicians seek to exploit the threat of Chinese scientific competition to advance their own political ends, they will actually lend credence to the pursuit of similar ends by the CCP.

I am arguing that China’s bet on the Nobel Prizes will subject its research system to an open, liberal, and rules-based competition that will erode the Party’s authority in the long run. But I am also suggesting that U.S. fears about Chinese scientific and technological competitiveness remain overstated in the first place. As long as the CCP is reluctant to pursue enhanced legitimacy through genuine political reform, the Chinese research culture I described earlier will tend to prevail among most Chinese scientists and researchers, due to the powerful incentives of political patronage. This culture dampens the prospects for real scientific breakthroughs. Nor is it unrelated to China’s poor performance in commercializing and industrializing inventions, the result of an “industrial strategic culture” that privileges firms with political patrons and effectively promotes a destructive entrepreneurship that prefers short-term profits, cronyism, and excessive diversification. So even if, by privileging a few of its top scientists, the Party does manage to garner some Nobel Prizes, the entrenched culture of science and innovation in China may still help industrialized countries with more efficient innovation systems to benefit from China’s Nobel science more than would China itself.

Nobel Prizes are awarded on the basis of creative merit, as well as the breadth of impact on scientific research and human welfare. Indeed, a scientific or technological advance will usually be exposed to the world for a lengthy period of scrutiny, often including complex processes of commercialization and industrialization, before its worthiness for a Nobel Prize can be fully assessed and endorsed. This complex context within which Nobel-worthy research results are in effect subject to a test of scientific and societal significance rules out the possibility of protectionism of scientific discoveries on China’s part, as that would undermine the competitive potential of the discoveries. Unlike gold medals in the Olympic Games, which have limited value other than fame and national pride, the research that is rewarded by Nobel Prizes in science has vast spillover effects for human welfare that cannot be exclusively exploited by the winners. The Chinese nationality of a Nobel Prize winner imposes no extra constraints on the ability of the rest of the world to benefit from the scientist’s breakthrough. Meanwhile, the state-organized infringement of intellectual property and rampant industrial espionage that have in the past antagonized developed countries in their dealings with Chinese partners also become less tempting because they undermine the broader effort to establish a national reputation for scientific originality.

In contrast to its enthusiasm for Nobel Prizes in science, the Party sought to suppress public discussion of the Nobel Peace Prize awarded to the incarcerated political activist Liu Xiaobo. It also toned down the official reaction to Mo Yan’s Nobel Prize for Literature. These orchestrated reactions mirror the Party’s longstanding aversion to public discourse on universal values, and its preference for value-neutral products (such as athletic prowess and, supposedly, scientific breakthroughs) as proof of its competencies and authority. However, scientists as human beings cannot be value-free or value-neutral. The fundamental paradox in China’s bet on Nobel Prizes is that the Party must free its scientists and researchers from their patron/client professional culture before their creativity and potential can be released. But if the Party were to remove restrictions on its scientists in order to strengthen its legitimacy, a more internationalized and independent science community would result, with a consequently increasing influence on domestic civil society that in turn would raise more diversified challenges to the Party’s legitimacy. China’s Nobel Prize strategy is thus a classic win-win for the United States and the West, whose policymakers should do what they can to encourage it. The strategy cannot damage Western countries’ competitive edge in science and technology in the short run. In the long run, it will foster China’s independent science community, substantiate its civil society, gradually liberalize its politics, and in the process might even lead to scientific advances from which all of humanity can benefit. These are outcomes that advance the long-term interests of both America and China.

Recommended reading

George J. Gilboy, “The Myth behind China’s Miracle,” Foreign Affairs 83, no. 4 (2004): 33–48.

David M. Lampton, “How China Is Ruled,” Foreign Affairs 93, no. 1 (February 1, 2014): 74–84.

Yigong Shi and Yi Rao, “China’s Research Culture,” Science 329, no. 5996 (September 3, 2010): 1128.

Ning Wang, “The Making of an Intellectual Hero: Chinese Narratives of Qian Xuesen,” The China Quarterly 206 (2011): 352–371.

Jingjie Yang, “Science Talent Program Eyes Nobel Prize,” Global Times, October 31, 2013.

David Zweig and Huiyao Wang, “Can China Bring Back the Best? The Communist Party Organizes China’s Search for Talent,” The China Quarterly 215 (2013): 590–615.


Junbo Yu () is an associate professor in public policy at the School of Administration, Jilin University, in Changchun, China, and a research fellow at the Peking University-Fudan University-Jilin University Co-Innovation Center for State Governance.

The New Visible Hand: Understanding Today’s R&D Management

Recent decades have seen dramatic if not revolutionary changes in the organization and management of knowledge creation and technology development in U.S. universities. Market demands and public values conjointly influence and in many cases supersede the disciplinary interests of academic researchers in guiding scientific and technological inquiry toward social and economic ends. The nation is developing new institutions to convene diverse sets of actors, including scientists and engineers from different disciplines, institutions, and economic sectors, to focus attention and resources on scientific and technological innovation (STI). These new institutions have materialized in a number of organizational forms, including but not limited to national technology initiatives, science parks, technology incubators, cooperative research centers, proof-of-concept centers, innovation networks, and any number of what the innovation ecosystems literature refers to generically (and in most cases secondarily) as “bridging institutions.”

The proliferation of bridging institutions on U.S. campuses has been met with a somewhat bifurcated response. Critics worry that this new purpose will detract from the educational mission of universities; advocates see an opportunity for universities to make an additional contribution to the nation’s well-being. The evidence so far indicates that bridging institutions on U.S. campuses have diminished neither the educational nor the knowledge-creation activities of universities. They complement rather than substitute for traditional university missions and over time may prove critical pivot points in the U.S. innovation ecosystem.

The growth of bridging institutions is a manifestation of two larger societal trends. The first is that the source of U.S. global competitive advantage in STI is moving away from a simple superiority in certain types of R&D to a need to effectively and strategically manage the output of R&D and integrate it more rapidly into the economy through bridging institutions. The second is the need to move beyond the perennial research policy question of whether or not the STI process is linear, to tackle the more complex problem of how to manage the interweaving of all aspects of STI.

The visible hand

This article’s title harkens back to Alfred Chandler’s landmark book The Visible Hand: The Managerial Revolution in U.S. Business. In that book, Chandler makes the case that the proliferation of the modern multiunit business enterprise was an institutional response to the rapid pace of technological innovation that came with industrialization and increased consumer demand. For Chandler, what was revolutionary was the emergence of management as a key factor of production for U.S. businesses.

Similarly, the proliferation of bridging institutions on U.S. campuses has been an institutional response to the increasing complexity of STI and also to public demand for problem-focused R&D with tangible returns on public research investments. As a result, U.S. departments and agencies supporting intramural and extramural R&D are now very much focused on establishing bridging institutions—and in the case of proof-of-concept centers, bridging institutions for bridging institutions—involving experts from numerous scientific and engineering disciplines from academia, business, and government.

To name just a few, the National Science Foundation (NSF) has created multiple cooperative research center programs and recently added the I-Corps program for establishing regional networks for STI. The Department of Energy (DOE) has its Energy Frontier Research Centers and Energy Innovation Hubs. The National Institutes of Health (NIH) have Translational Research Centers and also what they refer to as “team science.” The Obama administration has its Institutes for Manufacturing Innovation. But this is only a tiny sample. The Research Centers Directory counts more than 8,000 organized research units for STI in the United States and Canada, and over 16,000 worldwide. This total includes many traditional departmental labs, where management is not as critical a factor, but a very large number are bridging institutions created to address management concerns.

The analogy between Chandler’s observations about U.S. business practices and the proliferation of bridging institutions on U.S. campuses is not perfect. Whereas Chandler’s emphasis on management in business had more to do with the efficient production and distribution of routine and standard consumer goods and services, the proliferation of bridging institutions on U.S. campuses has had more to do with effective and commercially viable (versus efficient) knowledge creation and technology development, which cannot be routinized by way of management in the same way as can, say, automobile manufacturing.

Nevertheless, management—albeit a less formal kind of management than that Chandler examines—is now undeniably a key factor of production for STI on U.S. campuses. Many nations are catching up with the United States in the percentage of their gross domestic product devoted to R&D, so that R&D alone will not be sufficient to sustain U.S. leadership. The promotion of organizational cultures enabling bridging institutions to strategically manage social network ties among diverse sets of scientists and engineers toward coordinated problem-solving is what will help the United States maintain global competitive advantage in STI.

Historically, U.S. research policy has focused on two things with regard to universities to help ensure the U.S. status as the global STI hegemon. First, it has made sure that U.S. universities have had all the usual “factors of production” for STI, e.g., funding, technology, critical materials, infrastructure, and the best and the brightest in terms of human capital. Second, U.S. research policy has encouraged university R&D in applied fields by, for example, allowing universities to obtain intellectual property rights emerging from publicly funded R&D. In the past, then, an underlying assumption of U.S. research policy was that universities are capable of and willing to conduct problem-focused R&D and to bring the fruits of that research to market if given the funds and capital to do the R&D, as well as ownership of any commercial outputs.

But U.S. research policy regarding universities has been imitated abroad, and for this reason, among others, many countries have closed the STI gap with the United States, at least in particular technology areas. One need read only one or both of the National Academies’ Gathering Storm volumes to learn that the U.S. is now on a more level playing field with China, Japan, South Korea, and the European Union in terms of R&D spending in universities, academic publications and publication quality, academic patents and patent quality, doctorate production, and market share in particular technology areas. Quibbles with the evidentiary bases of the Gathering Storm volumes notwithstanding, there is little arguing that the United States faces increased competition in STI from abroad.

Although the usual factors of production for STI and property rights should remain components of U.S. research policy, these are no longer adequate to sustain U.S. competitive advantage. Current and future U.S. research policy for universities must emphasize factors of production for STI that are less easily imitated, namely organizational cultures in bridging institutions that are conducive to coordinated problem-solving. An underlying assumption of U.S. research policy should be that universities for the most part cannot or will not go it alone commercially even if given the funds, capital, and property rights to do so (there are exceptions, of course), but rather that they are more likely to navigate the “valley of death” in conjunction with businesses, government, and other universities.

Encouraging cross-sector, inter-institutional R&D in the national interest must become a major component of U.S. research policy for universities, and bridging institutions must play a central role. Anecdotal reports suggest that bridging institutions differ widely in their effectiveness, but one of the challenges facing the nation is to better understand the role that management plays in the success of bridging institutions. Calling something a bridging institution does not guarantee that it will make a significant contribution to meeting STI goals.

The edge of the future

The difference between the historic factors of production for STI discussed above and organizational cultures in bridging institutions is that the former are static, simple, and easy to imitate, whereas the latter are dynamic, complex, and difficult to observe, much less copy. This is no original insight. The business literature made this case originally in the 1980s and 1990s. A firm’s intangible assets, its organizational culture and the tacit norms and expectations for organizational behavior that this entails, can be and oftentimes are a source of competitive advantage because they are difficult to measure and thus hard for competing firms to emulate.

University leaders and scholars have recognized that bridging institutions on U.S. campuses can be challenging to organize and manage and that the ingredients for an effective organizational culture are still a mystery. There is probably as much literature on the management challenges of bridging institutions as there is on their performance. Whereas the management of university faculty in traditional academic departments is commonly referred to as “herding cats,” coordinating faculty from different disciplines and universities, over whom bridging institutions have no line authority, to work together and also to cooperate with industry and government is akin to herding feral cats.

But beyond this we know next to nothing about the organizational cultures of bridging institutions. The cooperative research centers and other types of bridging institutions established by the NSF, DOE, NIH, and other agencies are most often evaluated for their knowledge and technology outcomes and, increasingly, for their social and economic impact, but seldom have research and evaluation focused on what’s inside the black box. All we know for certain is that some bridging institutions on U.S. campuses are wildly successful and others are not, with little systematic explanation as to why.

Developing an understanding of organizational cultures in bridging institutions is important not just because these can be relatively tacit and difficult to imitate, but additionally because other, more formal aspects of the management of bridging institutions are less manipulable. Unlike Chandler’s emphasis on formal structures and authorities in U.S. businesses, bridging institutions do not have many layers of hierarchy, nor do they have centralized decisionmaking. As organizations focused on new knowledge creation and technology development, bridging institutions typically are flat and decentralized, and therefore vary much more culturally and informally than structurally.

There are frameworks for deducing the organizational cultures of bridging institutions. One is the competing values framework developed by Kim Cameron and Robert Quinn. Another is organizational economics’ emphasis on informal mechanisms such as resource interdependencies and goal congruence. A third framework is the organizational capital approach from strategic human resources management. These frameworks have been applied in the business literature to explore the differences between Silicon Valley and Route 128 microcomputer companies, and they can be adapted for use in comparing the less formal structures of bridging institutions.

What’s more, U.S. research policy must take into account how organizational cultures in bridging institutions interact with “best practices.” We know that in some instances, specific formalized practices are associated with successful STI in bridging institutions, but in many other cases, these same practices are followed in unsuccessful institutions. For bridging institutions, best practices may be best only in combination with particular types of organizational culture.

Inside the black box

The overarching question that research policy scholars and practitioners should address is what organizational cultures lead to different types of STI in different types of bridging institutions. Most research on bridging institutions emphasizes management challenges and best practices, and the literature on organizational culture is limited. We need to address in a systematic fashion how organizational cultures operate to bring diverse sets of scientists and engineers together for coordinated problem-solving.

Specifically, research policy scholars and practitioners should address variation across the “clan” type of organizational culture in bridging institutions. To the general organizational scholar, all bridging institutions have the same culture: decentralized and nonhierarchical. But to research policy scholars and practitioners, there are important differences in the organization and management of what essentially amounts to collectives of highly educated volunteers. How is it that some bridging institutions elicit tremendous contributions from academic faculty and industry researchers, whereas others do not? What aspects of bridging institutions enable academic researchers to work with private companies, spin off their own companies, or patent their inventions?

These questions point to related questions about different types of bridging institutions. There are research centers emphasizing university/industry interactions for new and existing industries, university technology incubators and proof-of-concept centers focused on business model development and venture capital, regional network nodes for STI, and university science parks co-locating startups and university faculty. Which of these bridging institutions are most appropriate for which sorts of STI? When should bridging institutions be interdisciplinary, cross-sectoral, or both? Are the different types of bridging institutions complements or substitutes for navigating the “valley of death”?

Research policy scholars and practitioners have their work cut out for them. There are no general data tracking cultural heterogeneity across bridging institutions. What data do exist, such as the Research Centers Directory compiled by Gale Research/Cengage Learning, track only the most basic organizational features. Other approaches such as the science of team science hold more promise, though much of this work emphasizes best practices and does not address organizational culture systematically. Research policy scholars and practitioners must develop new data sets that track the intangible cultural aspects of bridging institutions and connect these data to publicly available outcomes data for new knowledge creation, technology development, and workforce development.

Developing systematic understanding of bridging institutions is fundamental to U.S. competitiveness in STI. It is fundamental because bridging institutions are where the rubber hits the road in the U.S. innovation ecosystem. Bridging institutions provide forums for our nation’s top research universities, firms, and government agencies to exchange ideas, engage in coordinated problem solving, and in turn create new knowledge and develop new technologies addressing social and economic problems.

Developing systematic understanding of bridging institutions will be challenging because they are similar on the surface but different in important ways that are difficult to detect. During the 1980s, scholars identified striking differences in the organizational cultures of Silicon Valley and Route 128 microcomputing companies. Today, most bridging institutions follow a similar decentralized model for decisionmaking, with few formalized structures and authorities, yet they can differ widely in performance.

The most important variation across bridging institutions is to be found in the intangible, difficult-to-imitate qualities that allow for (or preclude) the coordination of diverse sets of scientists and engineers from across disciplines, institutions, and sectors. But this does not mean that scholars and practitioners should ignore the structural aspects of bridging institutions. In some cases, bridging institutions may exercise line authority over academic faculty (such as faculty with joint appointments), and these organizations may (or may not) outperform similar bridging institutions that do not exercise line authority.


Craig Boardman () is associate director of the Battelle Center for Science and Technology Policy in the John Glenn School of Public Affairs at Ohio State University.

Is U.S. Science in Decline?

The nation’s position relative to other countries is changing, but this need not be reason for alarm.

“Who are the most important U.S. scientists today?” Our host posed the question to his guests at a dinner that I attended in 2003. Americans like to talk about politicians, entertainers, athletes, writers, and entrepreneurs, but rarely, if ever, scientists. Among a group of six academics from elite U.S. universities at the dinner, no one could name a single outstanding contemporary U.S. scientist.

This was not always so. For much of the 20th century, Albert Einstein was a household-name celebrity in the United States, and every academic was familiar with names such as James Watson, Enrico Fermi, and Edwin Hubble. Today, however, Americans’ interest in pure science, unlike their interest in new “apps,” seems to have waned. Have the nation’s scientific achievements and strengths also lessened? Indeed, scholars and politicians alike have begun to worry that U.S. science may be in decline.

If the United States loses its dominance in science, historians of science would be the last group to be surprised. Historically, the world center of science has shifted several times, from Renaissance Italy to England in the 17th century, to France in the 18th century, and to Germany in the 19th century, before crossing the Atlantic in the early 20th century to the United States. After examining the cyclical patterns of science centers in the world with historical data, Japanese historian of science Mitsutomo Yuasa boldly predicted in 1962 that “the scientific prosperity of [the] U.S.A., begun in 1920, will end in 2000.”

Needless to say, Yuasa’s prediction was wrong. By all measures, including funding, total scientific output, highly influential scientific papers, and Nobel Prize winners, U.S. leadership in science remains unparalleled today. Containing only 5% of the world’s total population, the United States can consistently claim responsibility for one- to two-thirds of the world’s scientific activities and accomplishments. Present-day U.S. science is not a simple continuation of science as it was practiced earlier in Europe. Rather, it has several distinctive new characteristics: It employs a very large labor force; it requires a great deal of funding from both government and industry; and it resembles other professions such as medicine and law in requiring systematic training for entry and compensating for services with financial, as well as nonfinancial, rewards. All of these characteristics of modern science are the result of dramatic and integral developments in science, technology, industry, and education in the United States over the course of the 20th century. In the 21st century, however, a debate has emerged concerning U.S. ability to maintain its world leadership in the future.

The debate involves two opposing views. The first view is that U.S. science, having fallen victim to a new, highly competitive, globalized world order, particularly to the rise of China, India, and other Asian countries, is now declining. Proponents of this alarmist view call for significantly more government investment in science, as stated in two reports issued by the National Academy of Sciences (NAS), the National Academy of Engineering, and the Institute of Medicine: Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future in 2007, and Rising Above the Gathering Storm: Rapidly Approaching Category 5 in 2010.

The second view is that if U.S. science is in trouble, this is because there are too many scientists, not too few. Newly trained scientists have glutted the scientific labor market and contribute low-cost labor to organized science but are unable to become independent and, thus, highly innovative. Proponents of the second view, mostly economists, are quick to point out that claims concerning a shortage of scientific personnel are often made by interest groups—universities, senior scientists, funding agencies, and industries that employ scientifically trained workers—that would benefit from an increased supply of scientists. This view is well articulated in two reports issued by the RAND Corporation in 2007 and 2008 in response to the first NAS report, as well as in economist Paula Stephan’s recent book, How Economics Shapes Science.

What do data reveal?

Which view is correct? In a 2012 book I coauthored with Alexandra Killewald, Is American Science in Decline?, we addressed this question empirically, drawing on as much available data as we could find covering the past six decades. After analyzing 18 large, nationally representative data sets, in addition to a wealth of published and Web-based materials, we concluded that neither view is wholly correct, though both have some merit.

Between the 1960s and the present, U.S. science has fared reasonably well on most indicators that we can construct. The following is a summary of the main findings reported in our book.

First, the U.S. scientific labor force, even excluding many occupations such as medicine that require scientific training, has grown faster than the general labor force. Census data show that the scientific labor force has increased steadily since the 1960s. In 1960, science and engineering constituted 1.3% of the total labor force of about 66 million. By 2007, it was 3.3% of a much larger labor force of about 146 million. Of course, between 1960 and 2007, the share of immigrants among scientists increased, at a time when all Americans were becoming better educated. As a result, the percentage of scientists among native-born Americans with at least a college degree has declined over time. However, diversity has increased as women and non-Asian minorities have increased their representation among U.S. scientists.
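A quick back-of-the-envelope check, using only the percentages and labor-force totals cited above, makes the contrast concrete. The short Python sketch below is illustrative only and rounds to approximate figures.

# Rough check of the labor-force figures cited above (approximate, illustrative only).
sci_1960 = 0.013 * 66_000_000    # 1.3% of a total labor force of about 66 million in 1960
sci_2007 = 0.033 * 146_000_000   # 3.3% of a total labor force of about 146 million in 2007

print(f"Scientific labor force, 1960: ~{sci_1960 / 1e6:.1f} million")
print(f"Scientific labor force, 2007: ~{sci_2007 / 1e6:.1f} million")
print(f"Growth of scientific labor force: ~{sci_2007 / sci_1960:.1f}x")
print(f"Growth of total labor force:      ~{146 / 66:.1f}x")

On these figures, the scientific labor force grew from roughly 0.9 million to roughly 4.8 million, a more than fivefold increase, while the total labor force only a bit more than doubled.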

Second, despite perennial concerns about the performance of today’s students in mathematics and science, today’s U.S. schoolchildren are performing in these areas as well as or better than students in the 1970s. At the postsecondary level, there is no evidence of a decline in the share of graduates receiving degrees in scientific fields. U.S. universities continue to graduate large numbers of young adults well trained in science, and the majority of science graduates do find science-related employment. At the graduate level, the share of foreign students among recipients of science degrees has increased over time. More native-born women now receive science degrees than before, although native-born men have made no gains. Taken together, education data suggest that Americans are doing well, or at least no worse than in the past, at obtaining quality science education and completing science degrees.

Finally, we used a large number of indicators to track changes in society’s general attitudes toward science, including confidence in science, support for funding basic science, the prestige of scientists, and freshman interest in science research careers. Those indicators all show that the U.S. public has remained overwhelmingly positive toward scientists and science in general. About 80% of Americans endorse federal funding for scientific research, even if it has no immediate benefits, and about 70% believe that the benefits of science outweigh the costs. These numbers have stayed largely unchanged over recent decades. Americans routinely express greater confidence in the leadership of the scientific community than in that of Congress, organized religion, or the press.

Is it possible that Americans support science even though they themselves have no interest in it? To measure Americans’ interest in science, we also analyzed all cover articles published in Newsweek magazine and all books on the New York Times Best Sellers List from 1950 to 2007. From these data, we again observe an overall upward trend in Americans’ interest in science.

Sources of anxiety

What, then, are the sources of anxiety about U.S. science? In Is American Science in Decline?, we identify three of them, two historical and one comparative. First, our analysis of earnings using data from the U.S. decennial censuses revealed that scientists’ earnings have grown very slowly, falling further behind those of other high-status professionals such as doctors and lawyers. This unfavorable trend is particularly pronounced for scientists at the doctoral level.

Second, scientists who seek academic appointments now face greater challenges. Tenure-track positions are in short supply relative to the number of new scientists with doctoral training seeking such positions. As a result, more and more young scientists are now forced to take temporary postdoctoral appointments before finding permanent jobs. Job prospects are particularly poor in biomedical science, which has been well supported by federal funding through the National Institutes of Health. The problem is that the increased spending is mainly in the form of research grants that enhance research labs’ ability to hire temporary research staff, whereas universities are reluctant to expand permanent faculty positions. Some new Ph.D.s in biomedical fields need to take on two or more postdoctoral or temporary positions before having a chance to find a permanent position. It is the poor job outlook for these new Ph.D.s and their relatively low earnings that have led some economists to argue that there is a glut of scientists in the United States.

Third, of course, the greatest source of anxiety concerning U.S. science has been the globalization of science, resulting in greater competition from other countries. Annual news releases reveal the mediocre performance of U.S. schoolchildren on international tests of math and science. The growth of U.S. production of scientific articles has slowed down considerably over the past several decades as compared with growth in other parts of the world, particularly East Asia. As a result, the share of world science contributed by the United States is dwindling.

But in some ways, the globalization of science is a result of U.S. science’s success. Science is a public good, and a global one at that. Once discovered, science knowledge is codified and then can be taught and consumed anywhere in the world. The huge success of U.S. science in the 20th century meant that scientists in many less developed countries, such as China and India, could easily build on the existing science foundation largely built by U.S. scientists and make new scientific discoveries. Internet communication and cheap air transportation have also minimized the importance of location, enabling scientists in less developed countries to have access to knowledge, equipment, materials, and collaborators in more developed countries such as the United States.

The globalization of science has also made its presence felt within U.S. borders. More than 25% of practicing U.S. scientists are immigrants, up from 7% in 1960. Almost half of students receiving doctoral degrees in science from U.S. universities are temporary residents. The rising share of immigrants among practicing scientists and engineers indicates that U.S. dependence on foreign-born and foreign-trained scientists has dramatically increased. Although most foreign recipients of science degrees from U.S. universities today prefer to stay in the United States, for both economic and scientific reasons, there is no guarantee that this will last. If the flow of foreign students to U.S. science programs should stop or dramatically decline, or if most foreign students who graduate with U.S. degrees in science should return to their home countries, this could create a shortage of U.S. scientists, potentially affecting the U.S. economy or even national security.

What’s happening in China?

Although discussions of science policy don’t usually single out a specific country when they invoke international competition, today’s discourse does tend to refer, albeit implicitly, to a single country: China. In 2009, national headlines revealed that students in Shanghai outscored their peers around the world in math, science, and reading on the Program for International Student Assessment (PISA), a test administered to 15-year-olds in 65 countries. In contrast, the scores of U.S. students were mediocre. Although U.S. students had performed similarly on these comparative tests for a long time, the 2009 PISA results had an unusual effect in sparking a national discussion of the proposition that the United States may soon fall behind China and other countries in science and technology. Secretary of Education Arne Duncan referred to the results as “a wake-up call.”

China is the world’s most populous country, with 1.3 billion people, and its economy grew at an annualized rate of 7.7% between 1978 and 2010. Other indicators also suggest that China has been developing its science and technology with the intention of narrowing the gap between itself and the United States. Activities in China indicate its inevitable rise as a powerhouse in science and technology, and it is important to understand what this means for U.S. science.

The Chinese government has spent large sums of money trying to upgrade Chinese science education and improve China’s scientific capability. It more than doubled the number of higher education institutions from 1,022 in 1998 to 2,263 in 2008 and upgraded about 100 elite universities with generous government funding. China’s R&D expenditure has been growing at 20% per year, benefitting both from the increase in gross domestic product (GDP) and the increase in the share of GDP spent on R&D. In addition, the government has devised various attractive programs, such as the Changjiang Scholars Program and the Thousand Talent Program, to lure expatriate Chinese-born scientists, particularly those working in the United States, back to work in China on a permanent or temporary basis.

The government’s efforts to improve science education seem to have paid off. China is now by far the world’s leader in bachelor’s degrees in science and engineering, with 1.1 million in 2010, more than four times the U.S. number. This large disparity reflects not only China’s dramatic expansion in higher education since 1999 but also the fact that a much higher percentage of Chinese students major in science and engineering, around 44% in 2010, compared to 16% in the United States. Of course, China’s population is much larger. Adjusting for population size differences, the two countries have similar proportions of young people with science and engineering bachelor’s degrees. China’s growth in the production of science and engineering doctoral degrees has been comparably dramatic, from only 10% of the U.S. total in 1993 to a level exceeding that in the United States by 18% in 2010. Of course, questions have been raised both in China and abroad about whether the quality of a Chinese doctoral degree is equivalent to that of a U.S. degree.

The impact of China’s heavy investment in scientific research is also unmistakable. Data from Thomson Reuters’ InCites and Essential Science Indicators databases indicate that China’s production of scientific articles grew at an annual rate of 15.4% between 1990 and 2011. In terms of total output, China overtook the United Kingdom in 2004, and Japan and Germany in 2005, and has since remained second only to the United States. The data also reveal that the quality of papers produced by Chinese scientists, measured by citations, has increased rapidly. China’s production of highly cited articles achieved parity with Germany and the United Kingdom around 2009 and reached a level of 31% of the U.S. rate in 2011.
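The compounding implied by that growth rate is worth pausing over. The illustrative Python sketch below uses only the 15.4% annual rate and the 1990–2011 window cited above; the arithmetic is ours, not the database’s.

# Cumulative growth implied by a 15.4% annual rate sustained from 1990 to 2011 (illustrative only).
annual_rate = 0.154
years = 2011 - 1990
factor = (1 + annual_rate) ** years
print(f"Implied increase in annual article output, 1990-2011: ~{factor:.0f}x")

At that rate, China’s annual output of scientific articles in 2011 would be roughly 20 times its 1990 level, which is consistent with the country’s climb to second place in total output.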

Four factors favor China’s rise in science: a large population and human capital base, a large diaspora of Chinese-origin scientists, a culture of academic meritocracy, and a centralized government willing to invest in science. However, China’s rise in science also faces two major challenges: a rigid, top-down administration system known for misallocating resources, and rising allegations of scientific misconduct in a system where major decisions about funding and rewards are made by bureaucrats rather than peer scientists. Given these features, Chinese science is likely to do well in research areas where research output depends on material and human resources; i.e., extensions of proven research lines rather than truly innovative advances into uncharted territories. Given China’s heavy emphasis on its economic development, priority is also placed on applied rather than basic research. These characteristics of Chinese science mean that U.S. scientists could benefit from collaborating with Chinese scientists in complementary and mutually beneficial ways. For example, U.S. scientists could design studies to be tested in well-equipped and well-staffed laboratories in China.

Science in a new world order

Science is now entering a new world order and may have changed forever. In this new world order, U.S. science will remain a leader but not in the unchallenged position of dominance it has held in the past. In the future, there will no longer be one major world center of science but multiple centers. As more scientists in countries such as China and India actively participate in research, the world of science is becoming globalized as a single world community.

A more competitive environment on the international scene today does not necessarily mean that U.S. science is in decline. Just because science is getting better in other countries, this does not mean that it’s getting worse in the United States. One can imagine U.S. science as a racecar driver, leading the pack and for the most part maintaining speed, but anxiously checking the rearview mirror as other cars gain in the background, terrified of being overtaken. Science, however, is not an auto race with a clear finish line, nor does it have only one winner. On the contrary, science has a long history as the collective enterprise of the entire human race. In most areas, scientists around the world have learned from U.S. scientists and vice versa. In some ways, U.S. science may have been too successful for its own good, as its advancements have improved the lives of people in other nations, some of which have become competitors for scientific dominance.

Hence, globalization is not necessarily a threat to the wellbeing of the United States or its scientists. As more individuals and countries participate in science, the scale of scientific work increases, leading to possibilities for accelerated advancements. World science may also benefit from fruitful collaborations of scientists in different environments and with different perspectives and areas of expertise. In today’s ever more competitive globalized science, the United States enjoys the particular advantage of having a social environment that encourages innovation, values contributions to the public good, and lives up to the ideal of equal opportunity for all. This is where the true U.S. advantage lies in the long run. This is also the reason why we should remain optimistic about U.S. science in the future.

Recommended reading

J. M. Diamond, Guns, Germs and Steel: The Fates of Human Societies (New York: W.W. Norton & Company, 1999).

Thomas Friedman, The World Is Flat: A Brief History of the Twenty-First Century (New York: Farrar, Straus, and Giroux, 2005).

Titus Galama and James R. Hosek, eds., Perspectives on U.S. Competitiveness in Science and Technology (Santa Monica, CA: RAND, 2007).

Titus Galama and James R. Hosek, eds., U.S. Competitiveness in Science and Technology (Santa Monica, CA: RAND, 2008).

Claudia Dale Goldin and Lawrence F. Katz, The Race between Education and Technology (Cambridge, MA: Belknap Press of Harvard University Press, 2008).

Alexandra Killewald and Yu Xie, “American Science Education in its Global and Historical Contexts,” Bridge (Spring 2013): 15-23.

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future (Washington, DC: National Academies Press, 2007).

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Rapidly Approaching Category 5 (Washington, DC: National Academies Press, 2010).

Organisation for Economic Co-operation and Development, PISA 2009 Results: Executive Summary (2010); available online at www.oecd.org/pisa/pisaproducts/46619703.pdf.

Paula Stephan, How Economics Shapes Science (Cambridge, MA: Harvard University Press, 2012).

Yu Xie and Alexandra A. Killewald, Is American Science in Decline? (Cambridge, MA: Harvard University Press, 2012).


Yu Xie is Otis Dudley Duncan Distinguished University Professor of Sociology, Statistics, and Public Policy at the University of Michigan. This article is adapted from the 2013 Henry and Bryna David Lecture, which he presented at the National Academy of Sciences.

Reconstructing the View

The landscape has been a source of artistic exploration and contemplation since the earliest cave drawings. Represented in paintings and photographs, in film, and in the tourist’s snapshot, it has been viewed from a variety of perspectives that together have built within our collective imagination a sense of the places we inhabit and visit, potentially sparking our awe and imagination. Add to that the information gathered by the observations of geologists, cartographers, seismologists, and others trained in scientific observation, and we have a multifaceted and layered understanding of the land. An informed artist can remind us of how our perceptions are constructed and thus cast new light on the debates that arise over the meaning and value of particular landscapes and the importance of protecting them.

Since 1995, the collaborative team of photographers Mark Klett and Byron Wolfe has explored questions of constructed perception, time, and change. As early as 1997, they focused their visual inquiry on the Grand Canyon and surrounding areas. They analyzed the work of early creative practitioners who have documented the region for various purposes and identified the exact locations portrayed in these historic photographs and drawings. For example, they discovered that the 1882 lithograph by draftsman William Henry Holmes of the view of the Marble Canon Platform was so precise that it allowed them to create and insert new images into the original, matching the forms. The circular images that Klett and Wolfe chose to insert in this particular piece were taken through a military spotting scope, suggesting another perspective in viewing the land. From the exact same geographic point used by Holmes, they created a new photograph that incorporates the original view. A digital version of the historic image was inserted within the contemporary photograph, asking the viewer to consider the changes that have happened over time, not only in the land but in our perception of it.

This artistic exploration resulted in a body of work published in the book Reconstructing the View: The Grand Canyon Photographs of Mark Klett and Byron Wolfe. Wolfe is the Program Director for Photography at the Tyler School of Art Center for the Arts at Temple University in Philadelphia, Pennsylvania, and a former student of Klett’s. Klett, Regents’ Professor of Art at Arizona State University in Tempe, Arizona, worked as a geologist before pursuing photography. According to Klett, what draws him toward being an artist is that it enables him “to move into territories that would normally be seen as somewhat outside of the limits of any traditional practice, or at best at the outskirts of a discipline’s interests. Artists can often fill in the voids between disciplines and provide the glue to stick them together in unconventional ways.” Reconstructing the View reveals the combined invention of these two artists, offering provocative ways to think about the land, its history, and our role in “seeing” it. Collectively and individually Klett and Wolfe have collaborated with, been inspired by, and/or consulted with geologists, paleontologists, archeologists, botanists, ethnobotanists, writers, poets, sculptors, and historians. By bringing together their collective backgrounds, enriched by the insights of others, the artists push one another to create work that is more radical and more subversive than they might have created individually. “Collaboration is the amplification of ideas,” according to Wolfe.


JD Talasek, Director, Cultural Programs of the National Academy of Sciences

The View from Nowhere

Laura Kurgan, Close Up at a Distance

The global positioning system (GPS) technology incorporated into the vehicles, computers, smart phones, and other devices we use every day provides a convenience that would have been almost unimaginable two decades ago. The guesswork of map reading, the frisson of coming across something unexpected, the anxiety of being lost: for people embracing the perpetual orientation offered by GPS, these are concerns of the past. But what do you lose when placing yourself within a digital landscape, handing control and information over to the devices and organizations that govern this space? How does your relationship with the real, actual world change when it becomes a collection of indicators for orienting yourself within a virtual landscape, rather than vice versa? To what extent do these mapping technologies represent the interests and politics of their origins in warfare, surveillance, and military intelligence? What ideologies do they conceal beneath a veneer of certainty and precision?

These questions animate Laura Kurgan’s work in Close Up at a Distance. In exploring the ways these technologies alter our experience of the world and the consequences of using them—often unthinkingly—to navigate our environment, Kurgan makes use of mapping technologies in a series of nine projects. Ranging across such diverse subjects as war crimes in the Balkans and spatially concentrated zones of incarceration in Brooklyn, the projects seek to discover the politics embedded in the use of these technologies and to, according to Kurgan, “use these new technologies for good ends, rather than the militaristic ones for which they were invented.” The unexplored assumptions behind this blunt dichotomy hint at some of the weaknesses in this attractively assembled, often fascinating, ultimately frustrating collection of work.

Ways of seeing

Many of the technologies that are woven deeply into the fabric of modern society were initially developed to project and enhance military power. Global positioning and remote sensing satellites, and the software algorithms that make sense of their data, are no exception.

The Department of Defense’s GPS program, known as NAVSTAR, began in 1973 and was operational in time to direct missile strikes and troop movements during the first Gulf War in 1991. Today the system, a network of two dozen satellites and five ground stations, is operated and maintained by the U.S. government as a “free worldwide utility.” It is capable of pinpointing a device’s position to within 5 meters, and when augmented by other sources (the iPhone, for instance, makes use of nearby cell phone towers or WiFi networks) GPS can be accurate down to less than a meter. It enables a person to know precisely where a smartphone, vehicle, missile, or drone is located on the earth’s surface.

That information, however, would be difficult to understand without a map. Remote-sensing satellites, which take pictures of the earth’s surface from orbit, have greatly improved the ease and accuracy of modern cartography. In the 1960s, the Central Intelligence Agency’s Corona program inaugurated the use of reconnaissance satellites for intelligence purposes, and was quickly coopted for mapping missions. Like many clandestine Cold War initiatives, Corona bordered on cartoonish in its complexity: exposed film was dropped from orbit in a “bucket” with parachutes, meant to be caught in midair over the Pacific Ocean by a passing airplane. A salt plug would dissolve and sink the bucket within a couple of days, should the plane miss and Navy ships not find the film bobbing on the surface.

From this outlandish origin, remote sensing has followed a trajectory similar to that of other technologies—from a purely military and intelligence technology (spy satellites), to a resource supported by public funds (Landsat), to a commercial venture (QuickBird and GeoEye, for example). After 1992, the U.S. government allowed private companies to operate remote-sensing satellites and sell very high-resolution images. GeoEye’s second-largest customer for its images is Google (the largest is the U.S. government). This is a profound change in the creation and use of maps. Representations of territory have been a way for states, from the earliest city-states to contemporary nations, to lay claim to, or at least make legible, the space being charted. Private entities like Apple and Google now decide what, how, and to whom cartographic information is presented. While noting this, Kurgan does not explore the transformations entailed in this commercialization of remote-sensing technologies.

The latest generations of GPS and sensing satellites transmit data digitally. (No more film buckets dropped from orbit, unfortunately.) These data require interpretation, so geographic information systems (GIS) software is necessary for interpreting and displaying spatial information in a useful way. Connecting data to geographic location can be enormously useful in determining, for instance, a user’s position on a map, troop movements in a foreign country, or the buying habits of a targeted demographic. One of the first examples of utilizing spatial data was British physician John Snow’s famous 1854 map of a deadly cholera outbreak in London, which inaugurated both the science of epidemiology and the use of social and geographic data to visualize what would otherwise be impossible to perceive.

Kurgan undertakes a similar task in her “Million-Dollar Blocks” project, which layers criminal justice data onto neighborhood maps to find city blocks on which the state has spent more than a million dollars incarcerating its inhabitants. What her maps show is the state’s investment in destitute areas of the city—not for development purposes but to lock up the people who live there. This is a brilliant appropriation of the GIS systems that police departments use to track crimes (such as CompStat in New York City), systems that allocate resources according to incidents of crime while ignoring the underlying factors of concentrated poverty and isolation that contribute to these incidents.
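To make concrete what “layering criminal justice data onto neighborhood maps” involves, here is a minimal, hypothetical sketch of that kind of spatial aggregation in Python, using the pandas and geopandas libraries. The file names, column names, and threshold logic are invented for illustration; this is not Kurgan’s actual data or method.

import pandas as pd
import geopandas as gpd

# Hypothetical inputs: census-block geometries, and one row per incarcerated
# person giving a home block and an estimated annual cost of incarceration.
blocks = gpd.read_file("census_blocks.shp")        # columns: block_id, geometry
admissions = pd.read_csv("prison_admissions.csv")  # columns: block_id, annual_cost

# Aggregate state spending on incarceration by home block.
spending = (admissions.groupby("block_id")["annual_cost"]
                      .sum()
                      .rename("total_spent")
                      .reset_index())

# Join the spending totals onto the block geometries and flag blocks
# where cumulative spending exceeds one million dollars.
blocks = blocks.merge(spending, on="block_id", how="left")
blocks["total_spent"] = blocks["total_spent"].fillna(0)
blocks["million_dollar"] = blocks["total_spent"] > 1_000_000

# Map total spending as a choropleth, outlining the million-dollar blocks.
ax = blocks.plot(column="total_spent", legend=True)
blocks[blocks["million_dollar"]].boundary.plot(ax=ax, linewidth=1)

The analytic step is nothing more than an aggregation and a join; the force of the project lies in what the resulting map makes visible.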

Politics of representation

As the “Million-Dollar Blocks” project illustrates, Kurgan’s intent with the investigations collected in Close Up at a Distance is to repurpose mapping technologies for ends that belie their origins in state surveillance and military applications. She claims that “the history and politics of these technologies are at once obscure and important for understanding what’s at stake in working with them,” and goes further to argue: “For every image, we should be able to inquire about its technology, its location data, its ownership, its legibility, and its source.”

This agenda seems unfocused. A satellite image’s metadata may be of esoteric interest to the specialist, but to the average person this information is analogous to knowing the model of camera used to create a magazine photo or the network architecture of an email program. It is unclear what moral or political clarity that information can offer. Furthermore, can the militaristic ideology that originated a technology inhere in the technology itself? As mapping systems become increasingly incorporated in nonmilitary public and commercial spheres with their own sets of political commitments, it appears that the use rather than the origin of these technologies sets the political stakes.

Of greater concern then, and where Kurgan’s investigations are most valuable, are the ideological ends that these technologies’ outputs are marshaled to support. The fact that the maps, images, and spatial data produced by mapping technologies are often presented as objective and irrefutable makes a critical appreciation of these technologies essential. “The facts speak for themselves,” according to Colin Powell’s infamous formulation, made while he argued the case for war using satellite images of Iraq’s purported weapons factories before the U.N. Security Council. But the “facts” of satellite images most certainly do not speak for themselves.

Images are never merely an objective, “mechanical record” of the world. Painting, for example, which until the early twentieth century was nearly always representational, is a highly subjective medium, and obviously so. The subject matter, composition, and style are chosen by the artist, and represent his or her interpretation of the world. This is equally true of photographs, although their subjectivity becomes more difficult to discern because, as Susan Sontag noted, photographs “do not seem to be statements about the world so much as pieces of it.” Untouched by human hands, the veracity of images encoded to a magnetic disk in the guts of an orbiting satellite seems absolute. But their very distance from human affairs—what makes them seem objective—means that satellite images require analysis and interpretation to make them intelligible. The view close up at a distance, carefully selected and analyzed, is a screen on which a particular version of the world can be projected. The satellite image is a representation, mediated by what we know and what we believe, and like all representations of reality, it is relentlessly political.

Kurgan fully recognizes this, and offers critical appraisals of images produced for what she calls “militaristic ends.” But her assessment lacks the same clarity when presenting her own images, the “good ends” of the dichotomous moral framework she has set up. Her unexamined assertions are most clearly on view in the “Monochrome Landscapes” project. These photographs from QuickBird and Ikonos satellites are of four landscapes characterized by the colors white, blue, green, and yellow: the Arctic National Wildlife Refuge in Alaska, the Atlantic Ocean off the west coast of Africa, old-growth rain forest in Cameroon, and the Southern Desert in Iraq.

The resolution of these satellite images is such that one can see, for example, logging roads cut into the Cameroonian forest. Kurgan notes that the forest “has a simple aesthetic—a detailed and undulating green forest, seen from above, whose beauty is interrupted by a road that looks almost natural, simply a part of the landscape. But it is new, not natural, and demands that a viewer ask questions about it.” It is clear from Kurgan’s phrasing that this image is intended to prompt an awareness of the way the human presence diminishes—has “interrupted”—the beauty of the natural world. But hers is by no means a self-evident conclusion: the fact of that road does not speak for itself, and the idea that a road is “not natural” is normative and subjective. The verdant forest canopy may indeed be beautiful from above, but the people at ground level making use of the forest’s resources via that road cannot share in Kurgan’s aesthetic perspective.

The temptation to omit competing responses to these kinds of images may be, in part, a function of the vantage that remote-sensing technologies enable. As anyone who has ever tried to guess at the purpose of structures seen from an airplane window knows, the view from above is disorienting. It requires interpretation, and when this is not acknowledged—when images are presented as objective, unpolluted by human interests because produced by a machine in orbit—the unseen interpreter has imposed his or her own view of reality on another’s. It thus offers political opportunities. The environmentalist Stewart Brand, for instance, called for an image of Earth from space to be made public in order that viewers would recognize the photograph as “visual proof of our unity and specialness, as our luminous blue ball-of-a-home contrasted dramatically with the dead black emptiness of space,” according to Brand’s colleague Robert Horvitz.

Is this utopian vision of our fragile blue planet—a view that by necessity omits the human, the political, the contested—really so distinct from the militarized perspective of a landscape as a place to be monitored and controlled? Both use technology to depopulate the visible space for other, higher purposes, and in removing people from the frame they become totalizing visions, “visual proof ” of the rightness of a particular worldview. To quote Vilém Flusser, the Czech-born philosopher and writer, humans have made these images to orient themselves in the world, but because “they are no longer able to decode them, their lives become a function of their own images.” The view close up from a distance demands a more rigorous critique in order to avoid simply privileging one ideology over another. Kurgan offers only a partial, albeit often compelling, decoding of these images.

Archives – Spring 2014

University of Texas at Dallas professor John Pomara’s work reflects an interest in the role that human error plays in technology, focusing primarily on the current state of painting and picture making with the rise of new media and digital technology. Pomara explores and formats computer stenciling of magnified digital images. These pictorial distortions are then painted in an analog fashion, pulling industrial enamel paints across aluminum surfaces. By creating these abstract paintings of blurs, glitches, and printing imperfections, Pomara challenges the commonly held belief that modern technology is cold, rational, and without error.

Image courtesy of the artist and Barry Whistler Gallery, Dallas, Texas.

Poverty and Vulnerability to Storms

Typhoon Haiyan, which hit the Philippines on November 8, 2013, left behind more than 6,000 dead and displaced a population the size of Los Angeles. The scale of the damage is a result not only of the severity of the storm but also of the vulnerability of the millions of impoverished people living in the Philippines. Fragile food security leads to acute malnutrition with long-term effects, limited health and sanitation infrastructure results in the spread of disease to more victims, and the poorest surviving households are pushed further into debt. This vulnerability has been compounded throughout a decade that has included a succession of major typhoon disasters.

Typhoon is a regional name for a tropical cyclone, and it’s a familiar word in the Philippines, which has been the country most often struck by these disasters during the past decade and the past half-century. Other countries that have suffered catastrophic storm damage—Bangladesh, China, Myanmar, and India—are also examples of how poverty increases vulnerability.

Philippine vulnerability

In the Philippines exposure to typhoons is inescapably high. The geography makes coordinated efforts to reduce disaster vulnerability critical. But over the past half-century mortality per million people shows no appreciable decline. Vulnerability resulting from growing coastal populations living in unsafe housing combined with environmental degradation can outpace even the largest evacuation and response plans.

Source: EM-DAT and The World Bank

Recent cyclone disasters in India and Bangladesh reveal major improvements in disaster preparedness. In India, Cyclone Phailin in 2013 caused 38 deaths, a vast improvement from the 10,000 deaths resulting from a similar storm in 1999. In 2007 Cyclone Sidr in Bangladesh caused the enormous loss of 4,400 lives, but this is far fewer than the almost 140,000 lives lost in a comparable storm in 1991. But similar improvements in community resilience are yet to be made in many other regions, as was tragically apparent in Myanmar in 2008 when a storm claimed more than 138,000 lives. Disaster preparedness programs that can evacuate and shelter millions of people take years to implement.

An illustration of the socio-economic capacity needed to protect against disasters can be seen in the mortality data for the affluent storm-prone countries Japan and the United States. The only outlier is the high mortality caused by Hurricane Katrina, which affected one of the poorest regions in the United States. Storms are inevitable, but the resulting death and destruction can be limited with adequate preparation. Poor people, especially in developing countries, are particularly vulnerable to storms. They deserve better protection.

Data for the figures are from the Emergency Events Database (EM-DAT), maintained by the World Health Organization Collaborating Centre for Research on the Epidemiology of Disasters (CRED). Population and gross national income (GNI) per capita data are from the World Bank World Development Indicators.

Deadly storms

Globally, cyclone disaster mortality has exceeded 10,000 deaths in 11 of the past 50 years. Four years in the past decade have exceeded 5,000 deaths. These figures show that cyclones are an enormous and relentless threat to human life.

Source: EM-DAT

Poverty matters

The occurrence of a cyclone remains a deadly hazard for people living in countries with low gross national income per capita such as the Philippines ($2,470), India ($1,530), and Bangladesh ($840). Several storm events are omitted because they far exceed the scale shown here: Bangladesh in 1965, 1970, 1985, and 1991; India in 1971 and 1977. By contrast, wealthy countries such as Japan and the United States suffer relatively low mortality rates when they are hit by powerful storms because they can afford to invest in the infrastructure and other measures needed to reduce their vulnerability.

Source: EM-DAT and The World Bank

Choosing a Future

The Second Machine Age

The past several years have witnessed a lively debate about innovation between techno-pessimists and techno-optimists. The pessimists’ view—exemplified by work such as Peter Thiel’s What Happened to the Future, Robert Gordon’s The Demise of U.S. Economic Growth, and Tyler Cowen’s The Great Stagnation—is that the days of robust U.S. innovation and productivity growth are over, in part because most of the low-hanging fruit has already been picked. Gordon, for example, asserts that “there is no need to forecast any slowdown in the pace of future innovation for this gloomy forecast to come true, because that slowdown already occurred four decades ago.” For him, “medical research, small robots, 3-D printing, big data, and driverless vehicles” are marginal extensions of past technologies that will do little to drive future growth.

Confronting the innovation pessimists are the innovation optimists, exemplified by, among others, Peter Diamandis and Steven Kotler’s Abundance: The Future is Better Than You Think and Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. For Diamandis and Kotler, emerging technology is so powerful that “within a generation, we will be able to provide goods and services, once reserved for the wealthy few, to any and all who need them.” They don’t just mean abundance in the developed world, but in the entire world. There’s only one problem with their utopian claim: To reach their projected income levels the world would have to experience productivity growth of 25% a year for the next 25 years, up from the 3.5% average of the past 25 years.
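
The arithmetic behind that comparison is worth making explicit. A minimal sketch, assuming only the 25% and 3.5% annual growth rates quoted above, shows how differently the two rates compound over 25 years:

```python
# Back-of-the-envelope check of the compounding claim above.
# The 25% and 3.5% annual productivity growth rates are taken from the review.

def cumulative_growth(annual_rate, years):
    """Return the multiple by which output grows after compounding for `years` years."""
    return (1 + annual_rate) ** years

for rate in (0.25, 0.035):
    multiple = cumulative_growth(rate, 25)
    print(f"{rate:.1%} per year for 25 years -> output grows roughly {multiple:.1f}-fold")

# 25% per year compounds to roughly a 265-fold increase over 25 years,
# versus roughly 2.4-fold at the recent 3.5% pace.
```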

Brynjolfsson and McAfee are similarly utopian, arguing that the “second machine age” (the first one was the Industrial Revolution in Great Britain) is “doing for mental power…what the steam engine and its descendants did for muscle power. They’re allowing us to blow past previous limitations and taking us into new territory.”

Clearly one (or both) of these camps must be wrong. We can’t be simultaneously facing stagnation and surge. What this points to is the difficulty in accurately describing the “innovation elephant.” Optimists see the parts that are accelerating and driving change (e.g., our smart phones) and extend them to the entire economy while extrapolating current trends forward. Pessimists see parts of the innovation system that are “stuck” (e.g., much of the personal and knowledge services economy) and assume that this not only describes the entire economy but will not change going forward. The reality is that neither view is right, because some parts of the innovation system are driving rapid change, whereas others are relatively stagnant.

Brynjolfsson and McAfee (B&M) in particular assume that virtually all parts of the innovation system are vibrant and accelerating. They assert that innovation will accelerate at an exponential rate because of three factors: continued exponential advances in computing power, pervasive digitization, and the combinatory nature of innovation. For them, these three factors are enabling transformative tools that will replace large amounts of work currently done by humans, including knowledge work, and this transformation will be on the scale of the first Industrial Revolution.

There are, however, major flaws in their framework. First, it’s not clear that Moore’s law will continue to hold. Gordon Moore’s revolutionary prediction in 1965 that the number of transistors on a chip would double every 12 to 18 months (and with them computer processing speeds) has proven prescient. Indeed, over the past 40 years, processing speeds have increased over 1 million-fold, unleashing a wave of innovation across industries. But perhaps as soon as 2020, the dominant silicon-based CMOS semiconductor architecture will hit physical limits (particularly pertaining to heat dissipation) that threaten to compromise Moore’s law unless a leap can be made to radically new chip architectures. Yet B&M devote no attention to this critical issue, blithely assuming that semiconductor past is prologue.

Second, after asserting that Moore’s law will continue—not just in semiconductors, but in all areas of digital technology—they argue that we are experiencing “the digitization of just about everything.” In other words, not only are digital technologies improving exponentially, but more and more areas of the economy are becoming digital. For them this matters because “when things are digitized…they acquire some weird and wonderful properties. They’re subject to different economies, where abundance is the norm, rather than scarcity.”

Although it is true that digital technologies are reshaping traditional industries, including transportation, manufacturing, education, and health care, this does not mean that bits will replace all atoms or genes. Food won’t be digitized. Manufactured goods, although increasingly sold online and made with digitally enabled technologies, will still be made of atoms. What B&M are really referring to is digitized information, where abundance is real because digital goods are nonrivalrous, meaning I can enjoy them without that coming at your expense. This counterintuitive property, however, applies to probably less than 5% of the economy, and certainly not to activities such as making cars or waiting on tables, where scarcity and rivalry are the rule.

Third, B&M assert that innovation is speeding up because the possible combinations of innovations are increasing, as is our ability to combine the ingredients. For them, innovation is easier in the digital era because of the possibility to recombine “recipes” and test them, what they call “recombinant innovation.” They claim that the “number of potentially valuable building blocks is exploding around the world.” Growth is being held back only by our inability to process all the new ideas fast enough. In short, they argue that the “second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world.”

But this is a simplistic view of the process of innovation that likens it to random recombinations of elements, akin to having a million monkeys on typewriters, hoping that one will write a Shakespeare play. If this were true, the rate of innovation should have sped up over the past 100 years as more building blocks were created and more people were at work combining ingredients. But innovation is no faster today than it was in the late 1800s. In fact, it appears that innovation is getting much harder, because the problems to solve are so much tougher, and the only thing keeping us from suffering an innovation drought is the increased global resources going to R&D.

This leads them to perhaps their largest misreading of the future, one that is shared by many futurists speaking at corporate confabs, TED-talk pundits, and pretty much everyone who works at Silicon Valley’s Singularity University: the notion that technical progress is improving “exponentially.” If innovation were actually improving exponentially every few years, the U.S. Patent Office should be issuing 4.4 million patents a year a decade from now, up from 542,000 in 2013, a roughly eightfold jump that implies a doubling every three years or so. I can’t wait.
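
For readers who want to check the implied growth rate, a minimal sketch using only the two figures quoted above (542,000 patents in 2013 and 4.4 million a decade later) is:

```python
import math

# Growth rate implied by the patent example above; both figures come from the text.
start, end, years = 542_000, 4_400_000, 10

annual_growth = (end / start) ** (1 / years) - 1           # roughly 23% per year
doubling_time = math.log(2) / math.log(1 + annual_growth)  # roughly 3.3 years

print(f"implied annual growth rate: {annual_growth:.1%}")
print(f"implied doubling time: {doubling_time:.1f} years")
```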

Finally, they overstate the extent to which digital innovation is transforming occupations. For them, virtually all jobs will be disrupted by smart machines. A closer look suggests otherwise. In a back-of-the-envelope analysis of U.S. occupations, the Information Technology and Innovation Foundation came up with a roughly 20-50-30 split among jobs that are moderately difficult to automate, difficult to automate, and very difficult to automate. In other words, only about 20% of total U.S. jobs are in the most automatable category, those likely to be automated over the next decade or two.

Despite the utopian future that B&M suggest is waiting for us, at least in terms of innovation, they are surprisingly pessimistic about the effects, warning of all sorts of dystopian results, the principal one being massive unemployment and income inequality. They have backed off somewhat from the more extreme claims made in their e-book Race Against the Machine, in which they claimed that the second machine age would cause massive unemployment. But they still raise the fear flag, arguing that “as computers get more powerful, companies have less need for some kinds of workers.”

But their logic is fundamentally flawed. For example, after pointing out that productivity and employment have not grown in tandem since 2000, they assert that this disjuncture is evidence that productivity kills jobs. But not only has productivity not accelerated since 2000, there is simply no logical reason why growth in the labor force and growth in productivity would be related.

And although they acknowledge that technologically driven productivity would reduce prices, which should enable consumers to purchase more of other goods and services, thereby employing more workers, they dismiss this possibility. Without any evidence or logic, they claim that consumers will be satiated and not want to consume more even if their disposable incomes go up. I don’t know about MIT professors, but the average U.S. family with a household income of around $50,000 would be ecstatic if higher productivity doubled or even tripled their real income and would easily find things to spend it on.

The most disturbing aspect of B&M’s argument is that it might lead policymakers to conclude that their job should be to slow down innovation-driven productivity growth. B&M argue that “we can do more to invent technologies and business models that augment and amplify the unique capabilities of humans to create new sources of value, instead of automating the ones that already exist.” They advocate that government award prizes for technologies that don’t replace labor. They want to start a “made by humans” labeling movement. And since technology will destroy jobs and create a massive new lumpenproletariat sitting at home with nothing to do, they advocate for a slew of redistributionist, rather than growth, solutions, including a negative income tax, an expanded Earned Income Tax Credit, and a national mutual fund that provides dividends for everyone.

The excesses of the techno-optimists do not mean, however, that the techno-pessimists are correct in asserting that we can no longer expect much benefit from innovation. Given the slow growth in U.S. productivity over the past decade and the large expected rise in retirees over the next quarter century, the most important thing policymakers can do is support innovations that “automate the jobs that already exist.” Stoking neo-Luddite fears of technology-induced joblessness is a step in the wrong direction.

Steal This Book

Peter Suber, Open Access

In 1971, Abbie Hoffman mischievously named his first book-length screed Steal This Book, and founded a publishing company, Pirate Press, because no existing publisher would touch it. It was a countercultural manifesto against the “pig” establishment. The networks—CBS, NBC, and ABC, plus the upstart Fox—were “evil corporate conglomerates” spewing capitalist lies.

Flash forward four decades, and the inmates are running the asylums. Establishment journals, including Issues in Science and Technology, publish their content freely and openly online, inviting readers to “steal” their articles. The old TV networks still exist, but the one that really matters is the global Internet, where free information rules. The born-digital generation regularly pirates copyrighted content or at least expects information to be free and instantaneous. And there are even political “Pirate” parties formally established in several countries in the European Union.

Outraged? Outsourced? Or just curious? Peter Suber’s book Open Access provides an easy-to-read compendium of answers to many questions and blows up some of the canards that have been flying around the ether. Suber is one of the gurus of the open access (OA) movement. His Open Access News blog was for about eight years the place to go each month to find out what was happening in OA publishing, worldwide. Unfortunately, Suber discontinued his valuable service in 2010, but this book summarizes what he learned during that time.

There are basically two approaches to OA. One is the “green” road, which depends on the author of the manuscript to deposit it in an institutional or discipline repository, or less preferably on the author’s Web site, either immediately upon publication or within some prescribed embargo period of perhaps a year. The other is the “gold” road, which is the OA publishing model itself. There are now over 9,000 such journals registered in the Lund University Directory of Open Access Journals (www.doaj.org) in Sweden, and the number is growing as even many of the subscription legacy publishers are launching fully OA or hybrid journals.

The green approach is relatively easier to do but suffers from author inertia and a lack of mandates, whose adoption or absence remains a matter of great controversy. According to the Registry of Open Access Repositories at Southampton University (roar.eprints.org) in the United Kingdom, there are now at least 2,800 formal repositories throughout the world in which authors can deposit their manuscripts, although many authors post or share their works informally online anyway.

The gold approach requires significant effort on the part of a publisher to establish. It also may cost the author a substantial sum to publish an article, despite there being many other creative ways to finance such journals, including consortia, institutional or government subsidies, volunteerism, advertising, or some combination of them. The gold publication model has been slowly but steadily gaining in market share, with perhaps one-quarter of all scholarly journals now published in some OA form.

Suber covers both of these main models in Open Access and has written the book for both the uninitiated and the unconverted. For those who are not well versed in what OA is all about, he describes the elements of the models and the varieties within them in workmanlike fashion. He addresses the scope and the policies, the economic and copyright aspects, and who are the beneficiaries and the casualties. And he concludes the book by looking briefly into the future and providing some self-help references for those willing to take the next step.

However, it is also a book for the undecided. Suber makes a strong argument for the openness of publicly funded research writings throughout the manuscript, and in particular debunks some of the myths about OA.

So, for instance, the real purpose of OA is to provide the broadest possible access and use, not to lower quality, as the antagonists assert. OA journals give no less attention to peer review than more established subscription journals do. That many subscription journals have a higher citation index is frequently due to their having been in business for a much longer time.

Because many bona fide gold-road journals are funded by author payments, they have been accused of preying on the poor researcher or of taking any money that they can, without an adequate filter. But most subscription journals also have additional charges, and there is no monopoly on greed. Many gold-road journals have reduced charges or are free to authors who successfully plead poverty. Furthermore, an author has many different publishing outlets and business models to choose from, so academic freedom is not threatened.

OA does not lead to more plagiarism, nor is it a vehicle to relax the rules against it. In fact, the more eyes that can see the work, the less likely it is to be plagiarized successfully. Nor is it a war on copyright or a way to subvert it; it is actually the subscription journal publishers who insist on transferring copyright to them without any payment to the author, thereby capturing the product as well as providing the service.

There is no organized attempt to punish or undermine conventional publishers, to deprive authors of royalty earnings (in the case of books), to boycott certain publishers, or to destroy the whole public research system. What OA does do is shift the cost burdens of scholarly publishing, lower those costs, and take advantage of the attributes of the Internet to make publicly funded research broadly accessible and usable. In summary, Suber dispels the arguments against open publishing of publicly funded research results and makes a cogent case for the new models.

There are a few things that this book does not try to do. It is not a scholarly treatise that looks in depth at the various intricacies of OA activities. There are plenty of those books and articles already. For those interested in starting up an OA journal, it is not a tutorial about how to do that, although it provides a very good background rationale for doing so. Nor is it likely to change the minds of those firmly against OA publishing.

This last point is worth some further observations. On one side are the stakeholders of the ancien regime: the legacy publishers that reap large profits from the public purse, and the professional societies that subsidize their various other member programs with generally more modest income streams from their journals. These stakeholders, especially the commercial publishers that largely cornered the artificial academic publishing market over the past few decades, have a lot to lose from ceding this cash cow.

On the other side are mostly small, upstart publishers; groups of researchers in different disciplines, many in the besieged library community; and individual provocateurs. These groups and individuals are either leading by doing or are tirelessly analyzing and writing about why OA is the better option. Most of them are in the game for the principle, not the money. They are gradually winning the argument with the powers that be—the national legislatures, the research funders, the university administrators, and the public at large—but it is a slow process. Nevertheless, the OA advocates have their own differences, with factions favoring green, gold, or some other flavor of OA. The internecine warfare can be as intense as with the legacy publishers.

In short, this can be characterized as mostly a generational and “religious” conflict with vested interests, more akin to a 30-year war than a rational discourse about publishing business models. A single book, such as Suber’s Open Access, will not change such hearts so easily, despite marshaling all the arguments to change some minds.

So, if you are one of those who is uninformed about the OA movement, or just sitting on the fence, you should “steal” this book. It is available freely and openly at bit.ly/oa-book. Or you may choose to subsidize the work and purchase a paperback copy from MIT Press for about $14.

And Now for Something Completely Different

The poet Muriel Rukeyser wrote that “The Universe is made of stories, not of atoms.” And for most people, she is right, even though the hardcore Issues reader might not be happy about it. We value the dispassionate logic of intellectual discourse, the relentless building of argument on the sturdy foundation of evidence. We discount the cheap appeal to emotion and the pathos of individual cases. We relish the opportunity to remind our readers and listeners that data is not the plural of anecdote. And if the anecdote is fictional, I mean, is it even worth considering?

Well, yes, unless we think the world is run by the characters on “The Big Bang Theory.” Thankfully, we have not entrusted our well-being to the control of eggheads or technocrats. The wisdom of democracy is to recognize that we cannot put all our trust in bloodlines or academic credentials. What we now call the wisdom of crowds—or the advantage of index mutual funds—is the recognition that elites and specialists are not infallible, that the world is too complex for any individual or small group to comprehend completely. There is some collective wisdom that comes from diversity of experience and ways of understanding that experience.

The most powerful and persistent way of making sense of the human experience is fiction. Whereas the vast majority of scientific literature eventually loses its value as later research reveals its errors and limitations, we are still reading and learning from Homer and Virgil, Chaucer and Shakespeare, Austen and Woolf. This is not to say that all knowledge and wisdom can be found in fiction, but it is a recognition that we would be foolish to ignore the insights of the imagination and the perennial wisdom that can be revealed by focusing on individual characters in specific circumstances.

A significant amount of recent and contemporary fiction gives considerable attention to science and technology. This includes a large group of fine writers who are identified primarily as science fiction writers: H. G. Wells, Jules Verne, Arthur C. Clarke, Isaac Asimov, Ray Bradbury, Ursula K. Le Guin, Philip K. Dick, William Gibson, Neal Stephenson. Though the genre is often dismissed as homogenous and undistinguished, these writers deal with a wide diversity of subjects ranging across the physical and social sciences and personas that stretch from technological Pollyannas to dystopian Cassandras. Another large group of authors celebrated primarily for work not related to science—Aldous Huxley, George Orwell, Anthony Burgess, Margaret Atwood, Thomas Pynchon, Nobel laureate Doris Lessing, and many others—have written works that can be classified as science fiction.

All of these writers have used their fiction to explore questions that are relevant to policymakers, and all have reached broader audiences and touched them more deeply than the analytic articles we publish in Issues. As editors we asked ourselves: Why should we let other publications have all the fun? More important, why should we ignore the ways that these writers influence our collective attitudes toward science and technology?

The obvious answer is that we shouldn’t, and we won’t. This edition includes the first of what we hope will be a long line of stimulating science fiction stories. We are particularly pleased that our first venture into fiction was written by Gregory Benford, who is not only a respected science fiction writer but has also taught astronomy and physics at the University of California. Read it now.

Anticipating a Luddite Revival

Even as computer-based consumer products have transformed our leisure and social lives over the past decade, information technology (IT) and robotics suggest a transformation of work that might be even more far-reaching. Some observers, including many workers, see this vision as inherently threatening.

Economists, however, have repeatedly argued that technological advance is central to economic growth and that workers displaced by technology in one sector will be absorbed in another. Of course, this process of adjustment takes time, and the economic arguments about long-term adjustment can seem particularly hollow during prolonged recessions. During these periods of lower economic activity, such as the slowdown the United States has recently been experiencing, displaced workers might find it difficult to move into new positions. Still, economics has provided a compelling model of the adjustments of the labor market to technological change, and the historical record has repeatedly demonstrated that the fears about substantial portions of the workforce being permanently displaced from work are unjustified.

But are economists and history right today? The nature of recent technological change suggests that the adjustments that were possible in the past might not continue to take place. Over the past few years, a new appreciation has emerged of the wide range of computer capabilities that are becoming available. In turn, these new capabilities suggest a broad range of occupations that could begin to see workforce displacement resulting from the applications of IT and robotics, including occupations in fields involving high levels of pay and expertise, such as medicine and law.

To help gain a better understanding of how such displacement might play out, I recently investigated the range of IT and robotics capabilities that could conceivably affect the workforce over the next few decades. The results to date are only suggestive, but they point the way toward more serious work that needs to be done in the coming years to understand the growing implications of IT and robotics for the workforce.

The exploratory study was motivated by two simple arguments about the possibility of understanding the implications of future technological change for the workplace.

The first argument is that the match between the new computer capabilities that are ready to be applied in the workplace and the capabilities currently being used by workers in different occupations is likely to be a useful guide to the occupations that will be most affected by new technology. So, for example, if computers have capabilities in speech recognition and simple reasoning, it is reasonable to expect that those capabilities can be combined to carry out some of the tasks of telephone operators and receptionists, as has been the case over the past few decades.

Of course, the technique of looking at the match between computer capabilities and occupational skill requirements is hardly foolproof. For one thing, we may overlook important skill requirements for some occupations, such as the substantial range of common-sense knowledge that enables a receptionist to reply sensibly when a customer makes an entirely unexpected request. For another, we may overlook the opportunities for reengineering a task to mechanize it in a way that uses different capabilities than those used by people, such as when the cotton gin replaced the detailed finger movements used by people to remove seeds from cotton fibers. Despite these challenges, however, the match between computer capabilities and occupational skill requirements provides a reasonable starting point for considering what jobs might be affected.

The second argument is that we can see new IT and robotics capabilities demonstrated in the research literature long before they are broadly applied in the workplace. Research has shown that such diffusion lags can often be several decades. Even in the fast-paced area of IT, where technology is being rapidly developed and applied, a straightforward application such as electronic invoicing can require decades to be fully adopted. The reason, of course, is that the adoption of new techniques usually requires substantial investment, as well as learning and adjustment by many people who are accustomed to using an existing system. In addition, many times the research techniques need to be refined before they can be applied commercially, or they might need to become cheaper or faster before their application is practical. Thus, although it is possible to use the research literature on computer capabilities to look several decades into the future of IT and robotics applications, it is not easy to predict when a new capability will be widely used.

Even with these caveats, it is worth considering how these emerging capabilities compare with work skills in different occupations and how they might affect work.

Assessing current skill sets

To gauge the current capabilities of research systems in IT and robotics, my exploratory study sampled articles from two journals, AI Magazine and IEEE Robotics & Automation Magazine, over a period of 10 years, from 2003 to 2012. Both journals publish articles that reflect a wide range of specialized research related to the capabilities of IT and robotics, and the articles are sufficiently technical to describe capabilities in some detail without being so technical as to be difficult for a nonspecialist to understand. Collectively, the articles presumably describe the IT and robotics capabilities that are currently seen as noteworthy; that is, they describe capabilities that are just now becoming feasible for IT and robotics systems to carry out with enough success that there are promising results, yet still sufficiently novel to be worth reporting. To guard against excessive techno-optimism, the study specifically looked for limits on the capabilities of the systems. Often these limits are described in terms of constraints on the range of topics covered or the complexity of the context in which the tasks are carried out.

In sum, the set of capabilities observed can be considered to define the rough limits of what has been demonstrated in the research literature, and that can form the basis for practical applications over the next few decades.

To bring some order to this mass of information, the study separated the capabilities into four general areas: language, reasoning, vision, and movement. Each of these areas can be compared to human capabilities, and each includes a collection of different but related capabilities that together provide the full range of competences that people typically have. Although the review focused separately on the four different areas, there was substantial overlap in the systems identified, because many of the systems integrate capabilities from several of the four general areas of capability.
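
The bookkeeping behind these area counts can be illustrated with a short sketch. The article labels and tags below are invented, not drawn from the study; the point is only that an article demonstrating several capability areas is tallied once in each area, which is why the four area counts sum to more than the number of articles reviewed.

```python
from collections import Counter

# Hypothetical tags standing in for the sampled journal articles; the real study
# coded articles from AI Magazine and IEEE Robotics & Automation Magazine.
articles = {
    "museum guide robot":          {"language", "reasoning", "vision", "movement"},
    "question-answering system":   {"language", "reasoning"},
    "autonomous driving platform": {"reasoning", "vision", "movement"},
    "crossword-solving system":    {"language", "reasoning"},
}

# Each article contributes one count to every capability area it demonstrates.
area_counts = Counter(area for tags in articles.values() for area in tags)

print(f"articles sampled: {len(articles)}")
print(dict(area_counts))
```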

Language capabilities. Fifteen articles described systems demonstrating language capabilities, which ranged across four specific aspects of language: understanding speech, speaking, reading, and writing. The systems involved a diverse range of tasks.

In the articles from the first five years (2003 to 2007), the tasks included detecting problematic text in an insurance application, providing customer service for sales and repairs, explaining the answers to chemistry questions in an advanced high-school test, describing the movement of cars in a video of a traffic intersection, translating car assembly instructions, asking for help in finding the registration booth at a conference, giving a conference talk that included questions from the audience, and role-playing with students in a training simulation about how a military officer should handle a car accident with a civilian.

In the articles from the later five years (2008 to 2012), the tasks included screening medical articles for inclusion in a systematic research review, solving crossword puzzles with Web searches, answering Jeopardy questions with trick language cues across a large range of topics, answering questions from museum visitors, talking with people about directions and the weather, answering written questions with Web searches, following speech commands to locate and retrieve drinks and laundry in a room, and using Web site searches to find information to carry out a novel task.

The length and complexity of the language that these systems could handle often corresponded roughly to a few pages of written material. Of course, text length is only a crude way of gauging language complexity—a short poem or technical argument can be quite difficult to understand—but it does provide some sense of the language capabilities of the systems. They have advanced beyond the challenges of typical language use at the word or sentence level, but they fall far short of typical extended language use at the lecture or book level.

One important aspect of language involves adjusting to the needs of the person who is being communicated with and the requirements of the situation. Several of the example systems exhibited this kind of sensitivity, including the ability of the conference-talk system to monitor its points and not repeat them, and the ability of the training-simulation system to reason about emotion in order to choose appropriate language and understand imprecise language.

Considered over time, the articles showed some progression in the range of topics addressed by the different systems. In the first half of the period, all of the systems focused on language use within a single topic area. In contrast, a number of systems described in the later articles attempted to deal with an unlimited range of topics by tapping into a range of source material available on the Web.

Reasoning capabilities. Twenty-one articles described systems demonstrating reasoning capabilities. The systems showed a number of different aspects of reasoning, including recognizing that a problem exists, applying general rules to solve a problem, and developing new rules or conclusions.

In the articles from the first five years, the tasks addressed included making underwriting decisions about long-term care insurance, providing customer service for sales and repairs, developing new hypotheses about good conditions for growing crystals and for recovering from medical disability, helping diagnose appliance problems, providing useful analogies for solving problems in physics and in military tactical games, providing answers and explanations to chemistry questions in an advanced high-school test, developing novel atomic models for electron-density maps of proteins, identifying patterns of potentially suspicious facts that could indicate a terrorist plan, resolving problems related to scheduling and project coordination, role-playing with students in a training simulation about how a military officer should handle a car accident with a civilian, and driving a vehicle on different types of road.

In the later articles, the tasks included screening medical articles for inclusion in a systematic research review, processing government forms related to immigration and marriage, solving crossword puzzles, playing Jeopardy, answering questions from museum visitors, analyzing geological landform data to determine age, talking with people about directions and the weather, answering questions with Web searches, driving a vehicle in traffic and on roads with unexpected obstacles, solving problems with directions that contain missing or erroneous information, and using Web sites to find information for carrying out novel tasks.

One of the striking aspects of the reasoning systems was their ability to produce high levels of performance. For example, the systems were able to make insurance underwriting decisions about easy cases and provide guidance to underwriters about more difficult ones, produce novel hypotheses about growing crystals that were sufficiently promising to merit further investigation, substantially improve the ability of call center representatives to diagnose appliance problems, achieve scores on a chemistry exam comparable to the mean score of advanced high-school students, produce initial atomic models for proteins that substantially reduced the time needed for experts to develop refined models, substitute for medical researchers in screening articles for inclusion in a systematic research review, solve crossword puzzles at an expert level, play Jeopardy at an expert level, and analyze geological landform data at an expert level.

However, common-sense reasoning has historically been more difficult for IT systems to demonstrate. The articles from the first five years were consistent with the historical contrast, showing high levels of reasoning within narrow areas of specialized expertise but no evidence of the broad and more flexible reasoning that is typical of human common sense. But during the later period, there were examples of systems that used information from the Web to reason across a broad range of areas.

TABLE 1

Vision capabilities. Twenty-two articles described systems demonstrating vision capabilities. These included systems that recognized objects and different features of those objects, including their position in space.

In the articles from the early years, the tasks of the systems included locating a soccer ball and other soccer players, identifying cars and their movements in a video of a traffic intersection, finding the registration booth and several rooms at a conference, identifying drivable surfaces and obstacles for an autonomous car, determining the location of a ping-pong ball, guiding autonomous vehicles to move shipping containers, identifying people and obstacles in a crowded museum, locating pallets in a factory, recognizing objects in cluttered environments, guiding a robot to grasp irregularly shaped objects such as lettuce, and identifying vehicles on a road to provide driver assistance.

In the later articles, the tasks included recognizing chess pieces by location, rapidly identifying types of fish, recognizing the presence of nearby people, identifying the movements of other vehicles for an autonomous car, locating and grasping objects in a cluttered environment, moving around a cluttered environment without collisions, learning to play ball-and-cup, playing a game that involved building towers of blocks, navigating public streets and avoiding obstacles to collect trash, identifying people and locating drinks and laundry in an apartment, and using Web sites to find visual information for carrying out novel tasks such as making pancakes from a package mix.

All of the systems involved identifying diverse objects, and all also involved recognizing features of the identified objects, particularly their location and movement.

Movement capabilities. Seventeen articles described systems demonstrating movement capabilities. These included systems that involved spatial orientation, coordination, movement control, and body equilibrium. Many of the systems integrated movement capabilities with capabilities in one or more of the other three general areas of capability.

In the early articles, the tasks of the systems included walking, kicking a ball, passing a ball between two robots, moving down a hallway, following a map to locate a meeting room in a hotel, using an elevator, driving a car in the desert, playing ping-pong, autonomously moving shipping containers, navigating around people and pursuing objects in a crowded museum, moving pallets autonomously in a factory, and grasping irregularly shaped objects such as lettuce.

In the later articles, the tasks included moving chess pieces, driving a car in traffic, grasping objects in a cluttered environment, moving around a cluttered environment without collisions, learning to play ball-and-cup, playing a game that involved building towers of blocks, navigating public streets and avoiding obstacles to collect trash, retrieving and delivering drinks and laundry in an apartment, and using the Web to figure out how to make pancakes from a package mix.

Comparing capabilities and work skills

With these examples of IT and robotics capabilities, we can then look at the skills required in different occupations to see how they compare. To make this comparison, the study used the U.S. Department of Labor’s O*NET system, which provides ratings for hundreds of occupations on many different features. The feature set includes ratings for a number of ability scales that are related to the four general areas of capability discussed above.

O*NET uses a seven-point scale to rate the level of requirement for each ability for each occupation, with anchoring tasks for the ratings provided for levels 2, 4, and 6. The study used these anchoring tasks to provide concrete descriptions of the different levels of capability required for different jobs throughout the economy.

To focus on the big picture, the study grouped together all of the different abilities into two cluster ratings: one focused on language and reasoning, and the other focused on vision and movement. For each occupation, the highest rating across the different abilities was used as the rating for each of the two clusters.
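
A minimal sketch of that clustering step, using invented O*NET-style ability names and 1-to-7 ratings rather than actual O*NET data, might look like this:

```python
# Illustrative only: the ability names and ratings below are hypothetical,
# not actual O*NET data. Ratings are on the O*NET-style 1-to-7 scale.

LANGUAGE_REASONING = ["oral_comprehension", "written_expression", "deductive_reasoning"]
VISION_MOVEMENT = ["near_vision", "manual_dexterity", "multilimb_coordination"]

def cluster_ratings(ability_levels):
    """Collapse individual ability ratings into the two cluster ratings by
    taking the highest requirement within each cluster."""
    language_reasoning = max(ability_levels[a] for a in LANGUAGE_REASONING)
    vision_movement = max(ability_levels[a] for a in VISION_MOVEMENT)
    return language_reasoning, vision_movement

receptionist = {
    "oral_comprehension": 4, "written_expression": 3, "deductive_reasoning": 3,
    "near_vision": 3, "manual_dexterity": 2, "multilimb_coordination": 1,
}
print(cluster_ratings(receptionist))  # -> (4, 3)
```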

TABLE 2

TABLE 3

Table 1 shows the distribution of employment in the economy for the different capability combinations, using the O*NET rating scales. (The table omits level 7 on the rating scale because there are so few jobs that require that level of skill.) The table makes clear that the vast majority of current jobs—roughly 81%—can be carried out with a combination of abilities at the O*NET level of 4.
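
To make the structure of Table 1 concrete, the sketch below builds the same kind of employment-weighted cross-tabulation from invented occupation records; the occupations, levels, and employment counts are placeholders rather than the study’s data.

```python
import pandas as pd

# Hypothetical occupation records: cluster levels on the O*NET-style scale
# plus an employment count. None of these numbers are from the actual study.
occupations = pd.DataFrame([
    ("retail sales",        4, 3, 4_500_000),
    ("food preparation",    3, 4, 3_000_000),
    ("registered nurses",   5, 4, 2_700_000),
    ("software developers", 6, 3, 1_100_000),
], columns=["occupation", "lang_reasoning", "vision_movement", "employment"])

# Share of total employment at each combination of cluster levels.
table = pd.crosstab(
    occupations["lang_reasoning"],
    occupations["vision_movement"],
    values=occupations["employment"],
    aggfunc="sum",
    normalize=True,
).fillna(0)

print(table.round(2))
# Summing the cells at or below level 4 on both axes would yield the kind of
# "share of jobs within reach of level-4 capabilities" figure cited above.
```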

So a crucial question for assessing the likely impact of IT and robotics capabilities on work over the next few decades is how those capabilities compare with this middle level of ability on the rating scale for workers. Table 2 contrasts some sample IT and robotics tasks drawn from the study with some anchoring tasks from the O*NET rating scales across the four general areas of ability.

Comparing the sample tasks for the IT and robotics research systems with the anchoring tasks for the different O*NET levels shows that the IT and robotics systems are solidly in the middle range of ability levels across all four general areas of ability. In each case, there are clear ways that the IT and robotics capabilities fall short of the higher levels of human performance, but the capabilities that are typical of level 4 on the O*NET scales appear to be roughly comparable to the types of tasks now being described in the research literature.

To help think about the relation between the various research tasks and the actual workplace, Table 3 shows the average O*NET levels for broad occupational groups, along with their portion of total employment. At the top of Table 3, sales occupations represent 11% of employment and involve a medium level of language and reasoning skills, but generally low levels of vision and movement. The nation has already seen some replacement of sales jobs with technology in the extensive use of the Web for retail, along with the use of self-checkout in stores.

Currently, the level of interaction provided by such sales-related technology is low, but the research systems show capabilities that would allow more helpful interactions. Some of the research systems specifically provided interactions related to customer service, as well as related tasks such as answering questions from museum visitors or giving people directions. Some of the reasoning systems provided underlying analytic capabilities that could extend the kinds of transactions that can be carried out without a person, including processing government forms, using Web sites to find information, making insurance underwriting decisions, or diagnosing appliance problems.

It is possible to imagine how the ease and range of interaction and the depth of analysis of sales-related computer systems can be steadily extended over time to add many functions of current sales occupations. Future systems will be able to use regular language with customers to understand what they are looking for and to suggest possible solutions.

The middle section of Table 3 includes the large number of occupational groups involving both a medium level of language and reasoning skills and a medium level of vision and movement. The language and reasoning skills for many of these jobs are similar to those for the sales occupations just discussed. It is easy to imagine, for example, that the interaction and analysis that will make it possible to extend the capabilities of sales-related computer systems will also be applicable to the capabilities of administrative systems, where there is often an interaction with an internal customer.

As a contrast, it is useful to consider the occupational groups in the middle section that involve an extensive role for vision and movement. Physical movement is important for jobs in construction, maintenance, and production, as well as for jobs in food and personal service. These two large occupational groups represent 30% of current employment.

The use of automatic machines for performing physical movements has been key to the substantial improvements in manufacturing productivity over the past century. The research systems in vision and movement suggest how the high levels of performance that have been demonstrated in factories will begin to be extended into the more complicated settings where construction, maintenance, and food and personal service tasks are carried out. Some of the example systems directly involve maintenance or food service tasks, such as the system that moves around public streets to collect trash, the system that delivers drinks to people in an apartment, the system that can grasp irregular objects such as lettuce, and the system that can identify different types of fish.

One can imagine how the automatic capabilities that have been applied in factories can be extended into less controlled work settings over time. The Roomba (a robotic vacuum cleaner) could evolve into more extensive cleaning capabilities, and robots could be deployed for food preparation outside of factories. This will be a continuation of the automation that has taken place in factories, but it will be taking place in increasingly less controlled workplaces as robots become more flexible.

At the bottom of Table 3, there are three occupational groups involving a high level of language and reasoning skills and a medium level of vision and movement. The jobs in these groups often require levels of language and reasoning skill that are likely to be beyond the capabilities of the research systems.

Implications for work and the economy

This exploratory study suggests that there is the technological potential for a massive transformation in the labor market over the next few decades. It will clearly take time for the capabilities that have been roughly demonstrated in the research literature to be refined and broadly applied to the many different types of work. However, even a diffusion period of several decades is relatively short for adapting to a change of such magnitude.

In principle, there is no problem with imagining a transformation in the labor market that substitutes technology for workers for 80% of current jobs and then expands employment in the remaining 20% to absorb the entire labor force. Considering the contrasts across the occupational groups in Table 3, such a change might involve a drastic reduction in sales, management, administration, construction, maintenance, and food service work accompanied by a massive expansion in health care, education, science, engineering, and law.

The United States experienced a labor market transformation of this scale between the early 19th and the late 20th century, when the portion of the workforce employed in agriculture shifted from roughly 80% to just a few percent. However, in the shift out of agriculture, the transformation took place over a century and a half, not several decades.

In addition to the speed of the change, there are two other challenges.

The first challenge relates to the feasibility of preparing the entire labor force to move into jobs that require capabilities at the higher O*NET levels. The level 6 anchoring tasks in Table 2 are not only difficult for IT and robotics systems to carry out, but they are also difficult for many people to carry out. We do not know how successful the nation can be in trying to prepare everyone in the labor force for jobs that require these higher skill levels. It is hard to imagine, for example, that most of the labor force will move into jobs in health care, education, science, engineering, and law.

The second challenge relates to the further improvement of the capabilities of IT and robotics. Even during the limited period covered by the exploratory study, there was some indication of the advancement of capabilities. And over an additional period of several decades of R&D, the capabilities required for the level 6 anchoring tasks might well be within reach.

These challenges—the speed of the transformation, the difficulty of level 6 tasks for many people, and the continued development of IT and robotics capabilities—suggest that the economic adjustment to the application of IT and robotics capabilities over the next several decades is likely to be quite difficult. Although economists are right in principle that displaced workers should be able to move into new positions—as long as there is substantial labor demand for tasks that only people can perform—it seems unlikely that the structure of the labor market can change as quickly as the technology is advancing.

Even if alternative jobs are available, how will the displaced workers acquire the necessary skills for the new tasks? At some point it will be too difficult for large numbers of displaced workers to move into jobs requiring capabilities that are difficult for most of them to carry out even if they have the time and resources for retraining. When that time comes, the nation will be forced to reconsider the role that paid employment plays in distributing economic goods and services and in providing a meaningful focus for many people’s daily lives.

The preliminary review presented here suggests that society must begin to be much more serious about understanding the potential for IT and robotics to cause disruptive changes to the labor market over the next few decades. The scale and speed of this potential change are too great to be able to depend on ordinary economic adjustment to smooth out disruptions in the labor market.

Over the next decade or two, it is essential for researchers and national policymakers to understand the growing capabilities of IT and robotics and their implications for the workforce and the economy. The anecdotal articles that regularly appear about new technologies are not sufficient to provide the basis for understanding the full range of capabilities being developed and how they will affect employment.

To advance this understanding, the basic approach discussed here—comparing the full range of IT and robotics capabilities with the full range of capabilities used in different occupations—should be carried out more systematically and in more detail. Such systematic reviews need to be carried out once or twice each decade to make it possible to track the development of the capabilities and anticipate the full range of their consequences.

Society has the tools to think systematically about the capabilities that are now being demonstrated by IT and robotics systems and how those compare to the capabilities that are used in the workforce. The nation needs to begin to carry out the analyses that these tools allow to better understand the potential for IT and robotics to transform jobs in the years ahead.

Forum – Spring 2014

Wet drones

Bruce Berkowitz’s “Seapower in the Robotic Age” (Issues, Winter 2014) is a timely piece. He makes the astute observation that the revolution in unmanned aerial systems is but the first in a wave of robots that will likely appear next in the maritime domain. He provides an historical perspective of past innovations at sea, a surprising number of which involved semi-automatic systems, beginning with the century-old torpedo. He identifies possible applications of robots at sea and discusses a range of pitfalls and problems.

But Berkowitz misses or underemphasizes two key issues that may complicate the deployment of robots at sea: growing cyber insecurity and the legal uncertainty of robot self-defense.

Unmanned aerial systems, whose recent use began to unfold in the early 2000s, were deployed against fairly unsophisticated enemies. Nevertheless, at least a handful of expensive drones have been lost to “hacking,” compelled to defect, so to speak, to foreign air space. How much more will maritime systems be subject to cyber attack? I suspect significantly more.

Maritime systems move more slowly and often “loiter.” They can be intercepted by both manned and unmanned sea or air systems. Additionally, unlike air systems, which are difficult to capture because hacking the system typically risks losing the air frame and cargo to the powers of gravity and crash landings, a floating, unmanned maritime system is relatively easy to capture. It could be argued that an expensive, state-of-the-art unmanned maritime system would pose an especially appealing target to rival powers. Additionally, it might prove unnecessary to hack the system. Rather, any means that disables the propulsion system could result in capture by other drones, high-speed manned vessels, or by low-tech means such as nets and a couple of strong fishermen.

What would follow in the wake of human capture of our unmanned systems? I submit it will get complicated by a host of issues that Berkowitz, to be fair, did not have the time or space to address. For example, would unmanned maritime machines have the right to fight in self-defense to avoid capture by other machines or by human captors? If not, would it be necessary for human combatants to remain nearby in order to defend otherwise vulnerable maritime drones? To those who believe such a conundrum unlikely, remember that in Fall 2012 the Iranians made several attempts to intercept, and by some reports fired at, unmanned U.S. drones operating in the Persian Gulf. The attacks were met not with armed drones, but with manned aircraft sent to accompany the unmanned systems. But to this day, the rules governing drone self-defense are unclear.

Berkowitz is certainly correct in his broad predictions: A revolution at sea is coming, and it will involve unmanned systems. But with this wave of machines will come confusion, ambiguity, legal wrangling—a proverbial storm. No doubt a fog of uncertainty will accompany the issue of robot self-defense, providing proof that at least some of Clausewitz’s 19th century observations are indeed timeless, even in the robotic age.

MARK HAGEROTT

Distinguished Professor of Cyber Security

U.S. Naval Academy

Annapolis, MD


Bruce Berkowitz offers a reasonably comprehensive and balanced assessment of the current state of play regarding unmanned maritime systems (UMS). However, one of his statements—that the Navy cannot, as a matter of DoD policy, deploy UMS that automatically identify and destroy vessels that meet their criteria for being hostile—needs some additional explanation. Modern mines are a form of lethal UMS. They await a predetermined signature along with other criteria, which, if met, cause them to explode. Some mines, such as the old encapsulated torpedo (CAPTOR), would release an MK-46 homing torpedo if a contact set them off. Others, such as the U.S. Navy’s submarine-launched mobile mine (SLMM), can be launched at a distance and navigate to where they will lie in wait. In each of these cases there is movement involved, in the first instance to kill and in the second to arrive at the ambush position. The only movement not involved is movement to search. The SLMM, as well as stationary mines, is available for Navy use, so the restrictions in the DoD policy document are rather narrow and technical. Even these might not be viable much longer as warheads become ever more discriminating. Unmanned systems have so much promise for maintaining U.S. dominance in the undersea environment that progress will occur. Berkowitz is right: Unmanned systems will reduce risk to sailors and will allow the Navy to maintain certain types of presence with a smaller fleet.

One class of system that Berkowitz does not mention is the amphibian, a vehicle that is let loose in the water but crawls up on land. Any number of potential uses exist for this type of system in a complex littoral, especially one featuring offshore islands. Berkowitz also gives short shrift to unmanned aircraft launched from underwater. He shouldn’t, because this concept has considerable potential, especially for small flyers. One can imagine encapsulated UAVs lying on the sea floor, waiting to receive a signal to cut loose of their ballast and float to the surface, where the UAV is released to fly a search pattern and broadcast its findings, or perhaps to radiate a deceptive signal that confuses an enemy. It might also serve as a communications relay, retransmitting low-power transmissions between a submarine and forces over the horizon. Again, the number of potential uses for this kind of undersea/airborne robotic vehicle marriage is almost unlimited, especially if we think in terms of air vehicles that can be folded into a torpedo-sized canister.

Reduced fleet size and evolving threats virtually guarantee that the Navy will invest in all manner of robotic systems. Just as mechanization and automation allow a single Midwest farmer to tend over a thousand acres with perhaps one part-time assistant, the advent of robotics will permit fewer sailors on fewer ships to conduct missions that formerly required many more.

ROBERT C. (BARNEY) RUBEL

Dean, Center for Naval Warfare Studies

Naval War College

Newport, RI


Useful models

Andrea Saltelli and Silvio Funtowicz (“When All Models are Wrong,” Issues, Winter 2014) provide a checklist to aid in responsible development and use of models. I agree with most, but not all, of their comments and suggestions. Their discussion deals with models in all fields, many of which are empirical, and many deal with the capricious nature of human actions. However, some models have a strong foundation anchored in the physical laws of nature. The best example, perhaps, is the models used for numerical weather prediction (NWP), which do not follow some of the rules proposed on the checklist. Predicting weather entails dealing with odds, owing to the innate lack of predictability associated with what mathematicians now call “chaos,” the tremendous sensitivity of results to small perturbations whose importance grows over time, eventually rendering a weather forecast useless after 10 to 14 days. But weather prediction has advanced enormously by using complex numerical models, built around the physical laws expressed as equations (Newton’s laws of motion, conservation of mass, conservation of energy and the thermodynamic equation, equation of state). The reliability associated with 3-day forecasts in the 1970s is now possible for 6-day forecasts. The model complexity continues to grow, and the world’s largest supercomputers are used to carry out the computations. Saltelli and Funtowicz’s recommendation that stakeholders be able to replicate the results is absurd in this case. The validity of the models is constantly tested as the weather develops, and the feedback is used to refine the models.
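
To make the physical foundation Trenberth describes concrete, the governing laws he lists can be written schematically. What follows is only an illustrative, simplified “dry” form (friction, moisture, and many other terms are omitted), not the formulation of any operational forecasting model:

\[
\begin{aligned}
\frac{D\mathbf{v}}{Dt} &= -\frac{1}{\rho}\nabla p - 2\,\boldsymbol{\Omega}\times\mathbf{v} + \mathbf{g} && \text{(Newton's laws of motion)}\\
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) &= 0 && \text{(conservation of mass)}\\
c_p\,\frac{DT}{Dt} - \frac{1}{\rho}\,\frac{Dp}{Dt} &= Q && \text{(thermodynamic energy equation)}\\
p &= \rho R T && \text{(equation of state)}
\end{aligned}
\]

Here \(\mathbf{v}\) is the wind velocity, \(\rho\) the density, \(p\) the pressure, \(T\) the temperature, \(\boldsymbol{\Omega}\) Earth's rotation vector, \(\mathbf{g}\) the effective gravity, and \(Q\) the diabatic heating rate.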

This is not so much the case for climate models. These models are used for future projections that cannot be verified for decades, and so they are policy instruments. They are built on the NWP atmospheric models but with the inclusion of other parts of the climate system, such as the land surface, oceans, and ice masses. Many aspects of the climate problem hinge on how well these interactions are represented, and in this case the physical laws are not known or are very complex. Processes not explicitly represented by the basic dynamical and thermodynamic variables in the equations on the grid of the model need to be included through parameterizations. These include processes on scales smaller than the grid, such as convection and boundary-layer friction and turbulence; processes that contribute to internal heating, such as radiative transfer and precipitation, both of which require cloud prediction; and missing processes such as land surface, carbon cycle, chemistry, and aerosol processes. While our knowledge of certain factors increases, so does our understanding of factors we previously did not account for or even recognize, and hence uncertainty is apt to increase.

The current practice with climate models is to continue to build them to include as much complexity as possible in order to replicate the real world. The process of model development never ends. In general each new generation of such models does show improvements. Older versions of the models, which, it can be argued, are better evaluated in the literature and somewhat understood, are cast aside for the latest and greatest. However, it can be argued that predictions or projections that correspond to a given “what-if” emissions scenario should be based on a known model whose results are reproducible. Yet new versions of climate models are created, and runs made with them are immediately made available to the community for use in Intergovernmental Panel on Climate Change (IPCC) reports without adequate testing or evaluation. Although some IPCC models deliberately have modest evolutions, some are “bleeding edge” models that are not yet tried and tested. The practice violates many of the principles outlined by Saltelli and Funtowicz. The question is whether the balance is right between building the next-generation model and exploiting the known model.

Transparency is a desirable goal but one that is easily undermined. Another difficulty not discussed by Saltelli and Funtowicz is that in climate science there are vested interests and deniers of climate change whose goal, it seems, is to undermine the science and projections by any means possible. Many of the denier arguments have been proven wrong time and again, but they keep reappearing.

Models are useful for many purposes, but they can easily be abused and should not be used as black boxes without full understanding of the approximations, assumptions, limitations and strengths. Models are tools and can be extremely valuable if used appropriately.

KEVIN E. TRENBERTH

Distinguished senior scientist

Climate Analysis Section

National Center for Atmospheric Research

Boulder, CO


As a researcher in uncertainty quantification for environmental models, I heartily agree with Saltelli and Funtowicz that we should be accountable, transparent, and critical of our own results and those of others. Open access journals, particularly those accepting technical topics (e.g., Geoscientific Model Development) and replications (e.g., PLOS One), would seem key, as would routine archiving of preprints (e.g., arXiv.org) and (ideally non-proprietary) code and datasets (e.g., FigShare.com). Academic promotion and funding structures directly or indirectly penalize these activities, even though they would improve the robustness of scientific findings.

However, I found parts of the article somewhat uncritical themselves. The statement “the number of retractions of published scientific work continues to rise” is not particularly meaningful. Even the fraction of retraction notices is difficult to interpret, because an increase could be due to changes in time lag (retraction of older papers), detection (greater scrutiny through efforts such as RetractionWatch.com), or relevance (obsolete papers not retracted). It is not currently possible to reliably compare retraction notices across disciplines. But in a study by Daniele Fanelli of scientific bias, measured by fraction of null results, geosciences and environment/ecology were ranked second only to space science in their objectivity. It is not clear that we can assert there are “increasing problems with the reliability of scientific knowledge.”

There was also little acknowledgement of existing research, such as the climate projections used in UK adaptation, on the question of which of the uncertainties has the largest impact on the result. Much of this research goes beyond sensitivity analysis, which is part of the audit proposed by the authors, because it explores not only uncertain parameters but also inadequately represented processes. Without an attempt to quantify structural uncertainty, a modeller implicitly makes the assumption that errors could be tuned away. While this is, unfortunately, common in the literature, the community is making strides in estimating structural uncertainties for climate models.

The authors make strong statements about the political motivation of scientists. Does a partial assessment of uncertainty really indicate nefarious aims? Or might scientists be limited by resources (computing, staff, or project time) or, admittedly less satisfactorily, statistical expertise or imagination (the infamous “unknown unknowns”)? In my experience modellers may be resistant enough to detuning models and broadening uncertainty ranges without added accusations about their motivation. It would be better to simply argue for the benefits of uncertainty quantification. By showing that sensitivity analysis helps us understand complex models and highlight where effort should be concentrated, we can be motivated by better model development. And by showing where we have been “surprised” by too-small uncertainty ranges in the past, we can be motivated by the greater longevity of our results.
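
As a concrete, if toy, illustration of the kind of sensitivity analysis Edwards advocates, the sketch below perturbs each input of a hypothetical model one at a time and reports how far the output moves; variance-based methods such as Sobol indices extend the same idea to interacting parameters. The model and its parameter names are invented for illustration and do not come from Edwards's work:

# A minimal, hypothetical one-at-a-time sensitivity scan on a toy "model."
def toy_model(params):
    """Toy stand-in for a complex simulator: output depends on three inputs."""
    a, b, c = params["albedo"], params["mixing"], params["forcing"]
    return 10.0 * a + 2.0 * b ** 2 + 0.5 * a * c

baseline = {"albedo": 0.3, "mixing": 1.0, "forcing": 4.0}
base_out = toy_model(baseline)

# Perturb each parameter by +/-10% while holding the others at baseline,
# and record the spread in model output that the perturbation produces.
for name, value in baseline.items():
    outputs = []
    for factor in (0.9, 1.1):
        perturbed = dict(baseline, **{name: value * factor})
        outputs.append(toy_model(perturbed))
    print(f"{name:8s} output range: {min(outputs):.3f} to {max(outputs):.3f} "
          f"(baseline {base_out:.3f})")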

TAMSIN L. EDWARDS

Research associate

School of Geographical Sciences

University of Bristol

Bristol, UK


Green skies

In “Greenhouse Gas Emissions from International Transport” (Issues, Winter 2013), Parth Vaishnav addresses a concern we share, climate change. Before commenting on “market-based measures” to reduce greenhouse gas emissions, I want to offer some context about the aviation industry, Boeing, and the environment.

Aviation is an essential part of modern life, with about 3 billion people boarding commercial airplanes every year. Even with increasingly sophisticated digital technologies and social networks, airplanes retain a unique ability to bring people together. Commercial air travel also helps foster economic development and trade. Our industry generates about 5% of global GDP and supports an estimated 56.6 million jobs, including about 170,000 at Boeing.

And as an industry, we understand that environmental responsibility plays a crucial role in our long-term license to grow.

Since the late 1950s, Boeing has improved the fuel efficiency of our airplanes by 70%, which is essential to our customers, not only for environmental impact, but also because of the rising cost of fuel. On a per-passenger-mile basis, airplanes today are more efficient than cars and many other forms of transportation. Today, commercial air travel produces about 2% of global manmade CO2 emissions, a share projected to increase to 3% by 2030. This is why Boeing and our industry continue to take action to reduce emissions and improve efficiency.

The aviation industry was the first sector to set ambitious targets for CO2 emissions reduction, including industry-wide carbon-neutral growth beginning in 2020 and a 50% reduction in net CO2 emissions by 2050 compared to 2005 levels.

Boeing R&D investments focus on innovations in propulsion, lightweight materials, and avionics that improve the environmental performance of our products. These innovations are among the reasons why our 787 Dreamliner is 20% more fuel efficient—and produces 20% less CO2—than the airplane it replaces. In addition, we work aggressively with global partners to commercialize sustainable aviation biofuel, and we engage research institutions around the world to improve the efficiency of flight. We also advocate for modernized air traffic management systems that would cut carbon emissions for all airplanes flying by an estimated 12%.

It’s also important to note that our industry has agreed that global market-based measures may play a role in bridging a short-term emissions gap before these new technologies reach their potential. We believe that any money generated from these measures should be put to use to find innovative ways to continue to reduce emissions.

Innovation and new technology will always be at the heart of aerospace. At Boeing, we are actively testing lower-emission aircraft, including a blended wing-body design and hydrogen-powered propulsion. We are also working with the National Aeronautics and Space Administration to explore hybrid, solar, and electric-powered airplanes to create cleaner modes of flight in the decades to come.

Our industry is building on its demonstrated progress and holding itself accountable to support continued global economic growth and create a more sustainable future.

JOHN TRACY

Chief Technology Officer

The Boeing Company

Seattle, WA


Farmer suicides

Keith Kloor’s article (“The GMO-Suicide Myth,” Issues, Winter 2014) does a disservice to its scientific audience, and I take issue with it. Not with its thesis that Bt cotton is not directly causing Indian farmer suicides: that is obvious, and could be shown simply by noting that the biggest spike in farmer suicides occurred in Andhra Pradesh in 1998, four years before Bt cotton was even on the market. (The 1998 suicides were publicized in the Wall Street Journal and other newspapers, and received international attention; how short our memory is.) Or one could simply summarize the 2011 Gruère and Sengupta article in Journal of Development Studies showing that suicides have not climbed as Bt cotton has been almost universally adopted in India.

What I take issue with is the use of human tragedy in rural India simply to land a few blows in the relentless genetically modified organism (GMO) brawl. As I have pointed out before, both sides in the brawl claim the suicide epidemic bolsters their case, and neither shows concern for actually understanding what is behind it. Despite a headline invoking “the real reasons why Indian farmers take their own lives,” this article mentions almost none of the serious scholarship on the topic, omitting even A. R. Vasavi’s insightful and widely read Shadow Spaces: Suicides and the Predicament of Rural India.

Farmer suicide is a complex problem that can hardly be blamed on a bank policy change that Kloor heard about at a conference. Most small farmers don’t even borrow from banks, and in any case this raises the question of why cotton farmers’ need for credit has risen. State-encouraged, pesticide-intensive hybrid cotton spread during the 1990s, contributing to intractable problems in ecology, farm economics, and farmer decisionmaking. There were social effects as well, as risk and debt became increasingly individualized, unmooring farmers from sources of support.

But Kloor’s goal was not to understand the problem of farmer suicide, but rather to use it to whip up hatred toward Vandana Shiva and “liberal and environmentalist circles,” where GMOs are unpopular. The intent was to turn a complex social science question into a moral fable. Moral fables need villains (as Kloor himself notes), and egged on by Ron Herring, he uses the plight of Indian peasants to villainize Shiva, just as Shiva uses the peasants to villainize Monsanto.

Of course Shiva is wrong that Bt cotton is killing farmers, just as Patrick Moore is wrong in his hysterical charge that Golden Rice critics are murdering Asian kids. For GMO brawlers like Kloor and Moore and Shiva, the aim is to inflame the like-minded, and hopefully to spread “motivated reasoning” to the undecided. Motivated reasoners use low standards of proof for claims they like, high standards for ones they don’t, and fixate on trashing opponents’ weakest arguments instead of actually considering their strongest. Villainization encourages motivated reasoning, and then charges of murder by the likes of Shiva and Moore really clear the benches.

In other writing Kloor calls GMO opponents unscientific. However, I would suggest that it is articles like this, which bash one side’s irresponsible claims but not the other’s, and which aim to create exasperation rather than insight, that are the real impediments to the scientific understanding of our world.

GLENN DAVIS STONE

Professor of anthropology and environmental studies

Washington University in St. Louis

St. Louis, MO

Eagle

The long, fat freighter glided into the harbor at late morning—not the best time for a woman who had to keep out of sight.

The sun slowly slid up the sky as tugboats drew them into Anchorage. The tank ship, a big sectioned VLCC, was like an elephant ballerina on the stage of a slate-blue sea, attended by tiny dancing tugs.

Now off duty, Elinor watched the pilot bring them in past the Nikiski Narrows and slip into a long pier with gantries like skeletal arms snaking down, the big pump pipes attached. They were ready for the hydrogen sulfide to flow. The ground crew looked anxious, scurrying around, hooting and shouting. They were behind schedule.

Inside, she felt steady, ready to destroy all this evil stupidity.

She picked up her duffel bag, banged a hatch shut, and walked down to the shore desk. Pier teams in gasworkers’ masks were hooking up pumps to offload and even the faint rotten egg stink of the hydrogen sulfide made her hold her breath. The Bursar checked her out, reminding her to be back within 28 hours. She nodded respectfully, and her maritime ID worked at the gangplank checkpoint without a second glance. The burly guy there said something about hitting the bars and she wrinkled her nose. “For breakfast?”

“I seen it, ma’am,” he said, and winked.

She ignored the other crew, solid merchant marine types. She had only used her old engineer’s rating to get on this freighter, not to strike up the chords of the Seamen’s Association song.

She hit the pier and boarded the shuttle to town, jostling onto the bus, anonymous among boat crews eager to use every second of shore time. Just as she’d thought, this was proving the best way to get in under the security perimeter. No airline manifest, no Homeland Security ID checks. In the unloading, nobody noticed her, with her watch cap pulled down and baggy jeans. No easy way to even tell she was a woman.

Now to find a suitably dingy hotel. She avoided central Anchorage and kept to the shoreline, where small hotels from the TwenCen still did business. At a likely one on Sixth Avenue, the desk clerk told her there were no rooms left.

“With all the commotion at Elmendorf, ever’ damn billet in town’s packed,” the grizzled guy behind the counter said.

She looked out the dirty window, pointed. “What’s that?”

“Aw, that bus? Well, we’re gettin’ that ready to rent, but—”

“How about half price?”

“You don’t want to be sleeping in that—”

“Let me have it,” she said, slapping down a $50 bill.

“Uh, well.” He peered at her. “The owner said—”

“Show it to me.”

She got him down to $25 when she saw that it really was a “retired bus.” Something about it she liked, and no cops would think of looking in the faded yellow wreck. It had obviously fallen on hard times after it had served the school system.

It held a jumble of furniture, apparently to give it a vaguely homelike air. The driver’s seat and all else were gone, leaving holes in the floor. The rest was an odd mix of haste and taste. A walnut Victorian love seat with a medallion backrest held the center, along with a lumpy bed. Sagging upholstery and frayed cloth, cracked leather, worn wood, chipped veneer, a radio with the knobs askew, a patched-in shower closet, and an enamel basin toilet illuminated with a warped lamp completed the sad tableau. A generator chugged outside as a clunky gas heater wheezed. Authentic, in its way.

Restful, too. She pulled on latex gloves the moment the clerk left, and took a nap, knowing she would not soon sleep again. No tension, no doubts. She was asleep in minutes.

Time for the recon. At the rental place she’d booked, she picked up the wastefully big Ford SUV. A hybrid, though. No problem with the credit card, which looked fine at first use, then erased its traces with a virus that would propagate in the rental system, snipping away all records.

The drive north took her past the air base but she didn’t slow down, just blended in with late afternoon traffic. Signs along the highway now had to warn about polar bears, recent migrants to the land and even more dangerous than the massive local browns. The terrain was just as she had memorized it on Google Earth, the likely shooting spots isolated, thickly wooded. The Internet maps got the seacoast wrong, though. Two Inuit villages had recently sprung up along the shore within Elmendorf, as one of their people, posing as a fisherman, had observed and photographed. Studying the pictures, she’d thought they looked slightly ramshackle, temporary, hastily thrown up in the exodus from the tundra regions. No need to last, as the Inuit planned to return north as soon as the Arctic cooled. The makeshift living arrangements had been part of the deal with the Arctic Council for the experiments to make that possible. But access to post schools, hospitals, and the PX couldn’t make this home to the Inuit, couldn’t replace their “beautiful land,” as the word used by the Labrador peoples named it.

So, too many potential witnesses there. The easy shoot from the coast was out. She drove on. The enterprising Inuit had a brand new diner set up along Glenn Highway, offering breakfast anytime to draw odd-houred Elmendorf workers, and she stopped for coffee. Dark men in jackets and jeans ate solemnly in the booths, not saying much. A young family sat across from her, the father trying to eat while bouncing his small wiggly daughter on one knee, the mother spooning eggs into a gleefully uncooperative toddler while fielding endless questions from her bespectacled school-age son. The little girl said something to make her father laugh, and he dropped a quick kiss on her shining hair. She cuddled in, pleased with herself, clinging tight as a limpet.

They looked harried but happy, close-knit and complete. Elinor flashed her smile, tried striking up conversations with the tired, taciturn workers, but learned nothing useful from any of them.

Going back into town, she studied the crews working on planes lined up at Elmendorf. Security was heavy on roads leading into the base so she stayed on Glenn. She parked the Ford as near the railroad as she could and left it. Nobody seemed to notice.

At seven, the sun still high overhead, she came down the school bus steps, a new creature. She swayed away in a long-skirted yellow dress with orange Mondrian lines, her shoes casual flats, carrying a small orange handbag. Brushed auburn hair, artful makeup, even long artificial eyelashes. Bait.

She walked through the scruffy district off K Street, observing as carefully as on her morning reconnaissance. The second bar was the right one. She looked over her competition, reflecting that for some women, there should be a weight limit for the purchase of spandex. Three guys with gray hair were trading lies in a booth and checking her out. The noisiest of them, Ted, got up to ask her if she wanted a drink. Of course she did, though she was thrown off by his genial warning, “Lady, you don’t look like you’re carryin’.”

Rattled—had her mask of harmless approachability slipped?—she made herself smile, and ask, “Should I be?”

“Last week a brown bear got shot not two blocks from here, goin’ through trash. The polars are bigger, meat-eaters, chase the young males out of their usual areas, so they’re gettin’ hungry, and mean. Came at a cop, so the guy had to shoot it. It sent him to the ICU, even after he put four rounds in it.” Not the usual pickup line, but she had them talking about themselves. Soon, she had most of what she needed to know about SkyShield.

“We were all retired refuel jockeys,” Ted said. “Spent most of 30 years flyin’ up big tankers full of jet fuel, so fighters and B-52s could keep flyin’, not have to touch down.”

Elinor probed, “So now you fly—”

“Same aircraft, most of ’em 40 years old—KC Stratotankers, or Extenders—they extend flight times, y’see.”

His buddy added, “The latest replacements were delivered just last year, so the crates we’ll take up are obsolete. Still plenty good enough to spray this new stuff, though.”

“I heard it was poison,” she said.

“So’s jet fuel,” the quietest one said. “But it’s cheap, and they needed something ready to go now, not that dust-scatter idea that’s still on the drawing board.”

Ted snorted. “I wish they’d gone with dustin’—even the traces you smell when they tank up stink like rottin’ eggs. More than a whiff, though, and you’re already dead. God, I’m sure glad I’m not a tank tech.”

“It all starts tomorrow?” Elinor asked brightly.

“Right, 10 KCs takin’ off per day, returnin’ the next from Russia. Lots of big-ticket work for retired duffers like us.”

“Who’re they?” she asked, gesturing to the next table. She had overheard people discussing nozzles and spray rates.

“Expert crew,” Ted said. “They’ll ride along to do the measurements of cloud formation behind us, check local conditions like humidity and such.”

She eyed them. All very earnest, some a tad professorial. They were about to go out on an exciting experiment, ready to save the planet, and the talk was fast, eyes shining, drinks all around.

“Got to freshen up, boys.” She got up and walked by the tables, taking three quick shots in passing of the whole lot of them, under cover of rummaging through her purse. Then she walked around a corner toward the rest rooms, and her dress snagged on a nail in the wooden wall. She tried to tug it loose, but if she turned to reach the snag, it would rip the dress further. As she fished back for it with her right hand, a voice said, “Let me get that for you.”

Not a guy, but one of the women from the tech table. She wore a flattering blouse with comfortable, well-fitted jeans, and knelt to unhook the dress from the nail head.

“Thanks,” Elinor said, and the woman just shrugged, with a lopsided grin.

“Girls should stick together here,” the woman said. “The guys can be a little rough.”

“Seem so.”

“Been here long? You could join our group—always room for another woman, up here! I can give you some tips, introduce you to some sweet, if geeky, guys.”

“No, I… I don’t need your help.” Elinor ducked into the women’s room.

She thought about this unexpected, unwanted friendliness while sitting in the stall, and put it behind her. Then she went back into the game, fishing for information in a way she hoped wasn’t too obvious. Everybody likes to talk about their work, and when she got back to the pilots’ table, the booze worked in her favor. She found out some incidental information, probably not vital, but it was always good to know as much as you could. They already called the redesigned planes “Scatter Ships” and their affection for the lumbering, ungainly aircraft was reflected in banter about unimportant engineering details and tales of long-ago combat support missions.

One of the big guys with a wide grin sliding toward a leer was buying her a second martini when her cell rang.

“Albatross okay. Our party starts in 30 minutes,” said a rough voice. “You bring the beer.”

She didn’t answer, just muttered, “Damned salesbots…,” and disconnected.

She told the guy she had to “tinkle,” which made him laugh. He was a pilot just out of the Air Force, and she would have gone for him in some other world than this one. She found the back exit—bars like this always had one—and was blocks away before he would even begin to wonder.

Anchorage slid past unnoticed as she hurried through the broad deserted streets, planning. Back to the bus, out of costume, into all-weather gear, boots, grab some trail mix and an already-filled backpack. Her thermos of coffee she wore on her hip.

She cut across Elderberry Park, hurrying to the spot where her briefing said the trains paused before running into the depot. The port and rail lines snugged up against Elmendorf Air Force Base, convenient for them, and for her.

The freight train was a long clanking string and she stood in the chill gathering darkness, wondering how she would know where they were. The passing autorack cars had heavy shutters, like big steel Venetian blinds, and she could not see how anybody got into them.

But as the line clanked and squealed and slowed, a quick laser flash caught her, winked three times. She ran toward it, hauling up onto a slim platform at the foot of a steel sheet.

It tilted outward as she scrambled aboard, thudding into her thigh, nearly knocking her off. She ducked in and saw by the distant streetlights the vague outlines of luxury cars. A Lincoln sedan door swung open. Its interior light came on and she saw two men in the front seats. She got in the back and closed the door. Utter dark.

“It clear out there?” the cell phone voice asked from the driver’s seat.

“Yeah. What—”

“Let’s unload. You got the SUV?”

“Waiting on the nearest street.”

“How far?”

“Hundred meters.”

The man jigged his door open, glanced back at her. “We can make it in one trip if you can carry 20 kilos.”

“Sure,” though she had to pause to quickly do the arithmetic, 44 pounds. She had backpacked about that much for weeks in the Sierras. “Yeah, sure.”

The missile gear was in the trunks of three other sedans, at the far end of the autorack. As she climbed out of the car the men had inhabited, she saw the debris of their trip—food containers in the back seats, assorted junk, the waste from days spent coming up from Seattle. With a few gallons of gas in each car, so they could be driven on and off, these two had kept warm running the heater. If that ran dry, they could switch to another.

As she understood it, this degree of mess was acceptable to the railroads and car dealers. If the railroad tried to wrap up the autoracked cars to keep them out, the bums who rode the rails would smash windshields to get in, then shit in the cars, knife the upholstery. So they had struck an equilibrium. That compromise inadvertently produced a good way to ship weapons right by Homeland Security. She wondered what Homeland types would make of a Dart, anyway. Could they even tell what it was?

The rough-voiced man turned and clicked on a helmet lamp. “I’m Bruckner. This is Gene.”

Nods. “I’m Elinor.” Nods, smiles. Cut to the chase. “I know their flight schedule.”

Bruckner smiled thinly. “Let’s get this done.”

Transporting the parts in via autoracked cars was her idea. Bringing them in by small plane was the original plan, but Homeland might nab them at the airport. She was proud of this slick workaround.

“Did railroad inspectors get any of you?” Elinor asked.

Gene said, “Nope. Our two extras dropped off south of here. They’ll fly back out.”

With the auto freights, the railroad police looked for tramps sleeping in the seats. No one searched the trunks. So they had put a man on each autorack, and if some got caught, they could distract from the gear. The men would get a fine, be hauled off for a night in jail, and the shipment would go on.

“Luck is with us,” Elinor said. Bruckner looked at her, looked more closely, opened his mouth, but said nothing.

They both seemed jumpy by the helmet light. “How’d you guys live this way?” she asked, to get them relaxed.

“Pretty poorly,” Gene said. “We had to shit in bags.”

She could faintly smell the stench. “More than I need to know.”

Using Bruckner’s helmet light they hauled the assemblies out, neatly secured in backpacks. Bruckner moved with strong, graceless efficiency. Gene too. She hoisted hers on, grunting.

The freight started up, lurching forward. “Damn!” Gene said.

They hurried. When they opened the steel flap, she hesitated, jumped, stumbled on the gravel, but caught herself. Nobody within view in the velvet cloaking dusk.

They walked quietly, keeping steady through the shadows. It got cold fast, even in late May. At the Ford they put the gear in the back and got in. She drove them to the old school bus. Nobody talked.

She stopped them at the steps to the bus. “Here, put these gloves on.”

They grumbled but they did it. Inside, heater turned to high, Bruckner asked if she had anything to drink. She offered bottles of vitamin water but he waved it away. “Any booze?”

Gene said, “Cut that out.”

The two men eyed each other and Elinor thought about how they’d been days in those cars and decided to let it go. Not that she had any liquor, anyway.

Bruckner was lean, rawboned, and self-contained, with minimal movements and a constant, steady gaze in his expressionless face. “I called the pickup boat. They’ll be waiting offshore near Eagle Bay by eight.”

Elinor nodded. “First flight is 9:00 a.m. It’ll head due north so we’ll see it from the hills above Eagle Bay.”

Gene said, “So we get into position… when?”

“Tonight, just after dawn.”

Bruckner said, “I do the shoot.”

“And we handle perimeter and setup, yes.”

“How much trouble will we have with the Indians?”

Elinor blinked. “The Inuit settlement is down by the seashore. They shouldn’t know what’s up.”

Bruckner frowned. “You sure?”

“That’s what it looks like. Can’t exactly go there and ask, can we?”

Bruckner sniffed, scowled, looked around the bus. “That’s the trouble with this nickel-and-dime operation. No real security.”

Elinor said, “You want security, buy a bond.”

Bruckner’s head jerked around. “Whassat mean?”

She sat back, took her time. “We can’t be sure the DARPA people haven’t done some serious public relations work with the Natives. Besides, they’re probably all in favor of SkyShield anyway—their entire way of life is melting away with the sea ice. And by the way, they’re not ‘Indians,’ they’re ‘Inuit.’”

“You seem pretty damn sure of yourself.”

“People say it’s one of my best features.”

Bruckner squinted and said, “You’re—”

“A maritime engineering officer. That’s how I got here and that’s how I’m going out.”

“You’re not going with us?”

“Nope, I go back out on my ship. I have first engineering watch tomorrow, 0100 hours.” She gave him a hard, flat look. “We go up the inlet, past Birchwood Airport. I get dropped off, steal a car, head south to Anchorage, while you get on the fishing boat, they work you out to the headlands. The bigger ship comes in, picks you up. You’re clear and away.”

Bruckner shook his head. “I thought we’d—”

“Look, there’s a budget and—”

“We’ve been holed up in those damn cars for—”

“A week, I know. Plans change.”

“I don’t like changes.”

“Things change,” Elinor said, trying to make it mild.

But Bruckner bristled. “I don’t like you cutting out, leaving us—”

“I’m in charge, remember.” She thought, He travels the fastest who travels alone.

“I thought we were all in this together.”

She nodded. “We are. But Command made me responsible, since this was my idea.”

His mouth twisted. “I’m the shooter, I—”

“Because I got you into the Ecuador training. Me and Gene, we depend on you.” Calm, level voice. No need to provoke guys like this; they did it enough on their own.

Silence. She could see him take out his pride, look at it, and decide to wait a while to even the score.

Bruckner said, “I gotta stretch my legs,” and clumped down the steps and out of the bus.

Elinor didn’t like the team splitting and thought of going after him. But she knew why Bruckner was antsy—too much energy with no outlet. She decided just to let him go.

To Gene she said, “You’ve known him longer. He’s been in charge of operations like this before?”

Gene thought. “There’ve been no operations like this.”

“Smaller jobs than this?”

“Plenty.”

She raised her eyebrows. “Surprising.”

“Why?”

“He walks around using that mouth, while he’s working?”

Gene chuckled. “ ’Fraid so. He gets the job done though.”

“Still surprising.”

“That he’s the shooter, or—”

“That he still has all his teeth.”

While Gene showered, she considered. Elinor figured Bruckner for an injustice collector, the passive-aggressive loser type. But he had risen quickly in The LifeWorkers, as they called themselves, brought into the inner cadre that had formulated this plan. Probably because he was willing to cross the line, use violence in the cause of justice. Logically, she should sympathize with him, because he was a lot like her.

But sympathy and liking didn’t work that way.

There were people who soon would surely yearn to read her obituary, and Bruckner’s too, no doubt. He and she were the cutting edge of environmental activism, and these were desperate times indeed. Sometimes you had to cross the line, and be sure about it.

Elinor had made a lot of hard choices. She knew she wouldn’t last long on the scalpel’s edge of active environmental justice, and that was fine by her. Her role would soon be to speak for the true cause. Her looks, her brains, her charm—she knew she’d been chosen for this mission, and the public one afterward, for these attributes, as much as for the plan she had devised. People listen, even to ugly messages, when the face of the messenger is pretty. And once they finished here, she would have to be heard.

She and Gene carefully unpacked the gear and started to assemble the Dart. The parts connected with a minimum of wiring and socket clasps, as foolproof as possible. They worked steadily, assembling the tube, the small recoil-less charge, snapping and clicking the connections.

Gene said, “The targeting antenna has a rechargeable battery, they tend to drain. I’ll top it up.”

She nodded, distracted by the intricacies of a process she had trained for a month ago. She set the guidance system. Tracking would first be infrared only, zeroing in on the target’s exhaust, but once in the air and nearing its goal, it would use multiple targeting modes—laser, IR, advanced visual recognition—to get maximal impact on the main body of the aircraft.

They got it assembled and stood back to regard the linear elegance of the Dart. It had a deadly, snakelike beauty, its shiny white skin tapered to a snub point.

“Pretty, yeah,” Gene said. “And way better than any Stinger. Next generation, smarter, near four times the range.”

She knew guys liked anything that could shoot, but to her it was just a tool. She nodded.

Gene caressed the lean body of the Dart, and smiled.

Bruckner came clumping up the bus stairs with a fixed smile on his face that looked like it had been delivered to the wrong address. He waved a lit cigarette. Elinor got up, forced herself to smile. “Glad you’re back, we—”

“Got some ’freshments,” he said, dangling some beers in their six-pack plastic cradle, and she realized he was drunk.

The smile fell from her face like a picture off a wall.

She had to get along with these two, but this was too much. She stepped forward, snatched the beer bottles and tossed them onto the Victorian love seat. “No more.”

Bruckner tensed and Gene sucked in a breath. Bruckner made a move to grab the beers and Elinor snatched his hand, twisted the thumb back, turned hard to ward off a blow from his other hand—and they froze, looking into each other’s eyes from a few centimeters away.

Silence.

Gene said, “She’s right, y’know.”

More silence.

Bruckner sniffed, backed away. “You don’t have to be rough.”

“I wasn’t.”

They looked at each other, let it go.

She figured each of them harbored a dim fantasy of coming to her in the brief hours of darkness. She slept in the lumpy bed and they made do with the furniture. Bruckner got the love seat—ironic victory—and Gene sprawled on a threadbare comforter.

Bruckner talked some but dozed off fast under booze, so she didn’t have to endure his testosterone-fueled patter. But he snored, which was worse.

The men napped and tossed and worried. No one bothered her, just as she wanted it. But she kept a small knife in her hand, in case. For her, sleep came easily.

After eating a cold breakfast, they set out before dawn, 2:30 a.m., Elinor driving. She had decided to wait till then because they could mingle with early morning Air Force workers driving toward the base. This far north, it started brightening by 3:30, and they’d be in full light before 5:00. Best not to stand out as they did their last reconnaissance. It was so cold she had to run the heater for five minutes to clear the windshield of ice. Scraping with her gloved hands did nothing.

The men had grumbled about leaving absolutely nothing behind. “No traces,” she said. She wiped down every surface, even though they’d worn medical gloves the whole time in the bus.

Gene didn’t ask why she stopped and got a gas can filled with gasoline, and she didn’t say. She noticed the wind was fairly strong and from the north, and smiled. “Good weather. Prediction’s holding up.”

Bruckner said sullenly, “Goddamn cold.”

“The KC Extenders will take off into the wind, head north.” Elinor judged the nearly cloud-free sky. “Just where we want them to be.”

They drove up a side street in Mountain View and parked overlooking the fish hatchery and golf course, so she could observe the big tank refuelers lined up at the loading site. She counted five KC-10 Extenders, freshly surplussed by the Air Force. Their big bellies reminded her of pregnant whales.

From their vantage point, they could see down to the temporarily expanded checkpoint, set up just outside the base. As foreseen, security was stringently tight this near the airfield—all drivers and passengers had to get out, be scanned, IDs checked against global records, briefcases and purses searched. K-9 units inspected car interiors and trunks. Explosives-detecting robots rolled under the vehicles.

She fished out binoculars and focused on the people waiting to be cleared. Some carried laptops and backpacks and she guessed they were the scientists flying with the dispersal teams. Their body language was clear. Even this early, they were jazzed, eager to go, excited as kids on a field trip. One of the pilots had mentioned there would be some sort of preflight ceremony, honoring the teams that had put all this together. The flight crews were studiedly nonchalant—this was an important, high-profile job, sure, but they couldn’t let their cool down in front of so many science nerds. She couldn’t see well enough to pick out Ted, or the friendly woman from the bar.

In a special treaty deal with the Arctic Council, they would fly from Elmendorf and arc over the North Pole, spreading hydrogen sulfide in their wakes. The tiny molecules of it would mate with water vapor in the stratospheric air, making sulfurics. Those larger, wobbly molecules reflected sunlight well—a fact learned from studying volcano eruptions back in the TwenCen. Spray megatons of hydrogen sulfide into the stratosphere, let water turn it into a sunlight-bouncing sheet—SkyShield—and they could cool the entire Arctic.

Or so the theory went. The Arctic Council had agreed to this series of large-scale experiments, run by the USA since they had the in-flight refuelers that could spread the tiny molecules to form the SkyShield. Small-scale experiments—opposed, of course, by many enviros—had seemed to work. Now came the big push, trying to reverse the retreat of sea ice and warming of the tundra.

Anchorage lay slightly farther north than Oslo, Helsinki, and Stockholm, but not as far north as Reykjavik or Murmansk. Flights from Anchorage to Murmansk would let them refuel and reload hydrogen sulfide at each end, then follow their paths back over the pole. Deploying hydrogen sulfide along their flight paths at 45,000 feet, they would spread a protective layer to reflect summer sunlight. In a few months, the sulfuric droplets would ease down into the lower atmosphere, mix with moist clouds, and come down as rain or snow, a minute, undetectable addition to the acidity already added by industrial pollutants. Experiment over.

The total mass delivered was far less than that from volcanoes like Pinatubo, which had cooled the whole planet in 1991–92. But volcanoes do messy work, belching most of their vomit into the lower atmosphere. This was to be a designer volcano, a thin skin of aerosols skating high across the stratosphere.

It might stop the loss of the remaining sea ice, the habitat of the polar bear. Only 10% of the vast original cooling sheets remained. Equally disruptive changes were beginning to occur in other parts of the world.

But geoengineered tinkerings would also be a further excuse to delay cutbacks in carbon dioxide emissions. People loved convenience, their air conditioning and winter heating and big lumbering SUVs. Humanity had already driven the air’s CO2 content to twice what it was before 1800, and with every developing country burning oil and coal as fast as they could extract them, only dire emergency could drive them to abstain. To do what was right.

The greatest threat to humanity arose not from terror, but error. Time to take the gloves off.

She put the binocs away and headed north. The city’s seacoast was mostly rimmed by treacherous mudflats, even after the sea kept rising. Still, there were coves and sandbars of great beauty. Elinor drove off Glenn Highway to the west, onto progressively smaller, rougher roads, working their way backcountry by Bureau of Land Management roads to a sagging, long-unused access gate for loggers. Bolt cutters made quick work of the lock securing its rusty chain closure. After she pulled through, Gene carefully replaced the chain and linked it with an equally rusty padlock, brought for this purpose. Not even a thorough check would show it had been opened, till the next time BLM tried to unlock it. They were now on Elmendorf, miles north of the airfield, far from the main base’s bustle and security precautions. Thousands of acres of mudflats, woods, lakes, and inlet shoreline lay almost untouched, used for military exercises and not much else. Nobody came here except for infrequent hardy bands of off-duty soldiers or pilots, hiking with maps red-marked UXO for “Unexploded Ordnance.” Lost live explosives, remnants of past field maneuvers, tended to discourage casual sightseers and trespassers, and the Inuit villagers wouldn’t be berry-picking till July and August. She consulted her satellite map, then took them on a side road, running up the coast. They passed above a cove of dark blue waters.

Beauty. Pure and serene.

The sea-level rise had inundated many of the mudflats and islands, but a small rocky platform lay near shore, thick with trees. Driving by, she spotted a bald eagle perched at the top of a towering spruce tree. She had started birdwatching as a Girl Scout and they had time; she stopped.

She left the men in the Ford and took out her long-range binocs. The eagle was grooming its feathers and eyeing the fish rippling the waters offshore. Gulls wheeled and squawked, and she could see sea lions knifing through fleeing shoals of herring, transient dark islands breaking the sheen of waves. Crows joined in onshore, hopping on the rocks and pecking at the predators’ leftovers.

She inhaled the vibrant scent of ripe wet salty air, alive with what she had always loved more than any mere human. This might be the last time she would see such abundant, glowing life, and she sucked it in, trying to lodge it in her heart for times to come.

She was something of an eagle herself, she saw now, as she stood looking at the elegant predator. She kept to herself, loved the vibrant natural world around her, and lived by making others pay the price of their own foolishness. An eagle caught hapless fish. She struck down those who would do evil to the real world, the natural one.

Beyond politics and ideals, this was her reality.

Then she remembered what else she had stopped for. She took out her cell phone and pinged the alert number.

A buzz, then a blurred woman’s voice. “Able Baker.”

“Confirmed. Get a GPS fix on us now. We’ll be here, same spot, for pickup in two to three hours. Assume two hours.”

Buzz buzz. “Got you fixed. Timing’s okay. Need a Zodiac?”

“Yes, definite, and we’ll be moving fast.”

“You bet. Out.”

Back in the cab, Bruckner said, “What was that for?”

“Making the pickup contact. It’s solid.”

“Good. But I meant, what took so long.”

She eyed him levelly. “A moment spent with what we’re fighting for.”

Bruckner snorted. “Let’s get on with it.”

Elinor looked at Bruckner and wondered if he wanted to turn this into a spitting contest just before the shoot.

“Great place,” Gene said diplomatically.

That broke the tension and she started the Ford.

They rose further up the hills northeast of Anchorage, and at a small clearing, she pulled off to look over the landscape. To the east, mountains towered in lofty gray majesty, flanks thick with snow. They all got out and surveyed the terrain and sight angles toward Anchorage. The lowlands were already thick with summer grasses, and the winds sighed southward through the tall evergreens.

Gene said, “Boy, the warming’s brought a lot of growth.”

Elinor glanced at her watch and pointed. “The KCs will come from that direction, into the wind. Let’s set up on that hillside.”

They worked around to a heavily wooded hillside with a commanding view toward Elmendorf Air Force Base. “This looks good,” Bruckner said, and Elinor agreed.

“Damn—a bear!” Gene cried.

They looked down into a narrow canyon with tall spruce. A large brown bear was wandering along a stream about a hundred meters away.

Elinor saw Bruckner haul out a .45 automatic. He cocked it.

When she glanced back the bear was looking toward them. It turned and started up the hill with lumbering energy.

“Back to the car,” she said.

The bear broke into a lope.

Bruckner said, “Hell, I could just shoot it. This is a good place to see the takeoff and—”

“No. We move to the next hill.”

Bruckner said, “I want—”

“Go!”

They ran.

One hill farther south, Elinor braced herself against a tree for stability and scanned the Elmendorf landing strips. The image wobbled as the air warmed across hills and marshes.

Lots of activity. Three KC-10 Extenders ready to go. One tanker was lined up on the center lane and the other two were moving into position.

“Hurry!” she called to Gene, who was checking the final setup menu and settings on the Dart launcher.

He carefully inserted the missile itself in the launcher. He checked, nodded and lifted it to Bruckner. They fitted the shoulder straps to Bruckner, secured it, and Gene turned on the full arming function. “Set!” he called.

Elinor saw a slight stirring of the center Extender and it began to accelerate. She checked: right on time, 0900 hours. Hard-core military like Bruckner, who had been a Marine in the Middle East, called Air Force the “saluting Civil Service,” but they did hit their markers. The Extenders were not military now, just surplus, but flying giant tanks of sloshing liquid around the stratosphere demands tight standards.

“I make the range maybe 20 kilometers,” she said. “Let it pass over us, hit it close as it goes away.”

Bruckner grunted, hefted the launcher. Gene helped him hold it steady, taking some of the weight. Loaded, it weighed nearly 50 pounds. The Extender lifted off, with a hollow, distant roar that reached them a few seconds later, and Elinor could see that media coverage was high. Two choppers paralleled the takeoff for footage, then got left behind.

The Extender was a full-extension DC-10 airframe and it came nearly straight toward them, growling through the chilly air. She wondered if the chatty guy from the bar, Ted, was one of the pilots. Certainly, on a maiden flight the scientists who ran this experiment would be on board, monitoring performance. Very well.

“Let it get past us,” she called to Bruckner.

He took his head from the eyepiece to look at her. “Huh? Why—”

“Do it. I’ll call the shot.”

“But I’m—”

“Do it.”

The airplane was rising slowly and flew by them a few kilometers away.

“Hold, hold…” she called. “Fire.”

Bruckner squeezed the trigger and the missile popped out—whuff!—seemed to pause, then lit. It roared away, startling in its speed—straight for the exhausts of the engines, then correcting its vectors, turning, and rushing for the main body. Darting.

It hit with a flash and the blast came rolling over them. A plume erupted from the airplane, dirty black.

“Bruckner! Resight—the second plane is taking off.”

She pointed. Gene chunked the second missile into the Dart tube. Bruckner swiveled with Gene’s help. The second Extender was moving much too fast, and far too heavy, to abort takeoff.

The first airplane was coming apart, rupturing. A dark cloud belched across the sky.

Elinor said clearly, calmly, “The Dart’s got a max range about right so… shoot.”

Bruckner let fly and the Dart rushed off into the sky, turned slightly as it sighted, accelerated like an angry hornet. They could hardly follow it. The sky was full of noise.

“Drop the launcher!” she cried.

“What?” Bruckner said, eyes on the sky.

She yanked it off him. He backed away and she opened the gas can as the men watched the Dart zooming toward the airplane. She did not watch the sky as she doused the launcher and splashed gas on the surrounding brush.

“Got that lighter?” she asked Bruckner.

He could not take his eyes off the sky. She reached into his right pocket and took out the lighter. Shooters had to watch, she knew.

She lit the gasoline and it went up with a whump.

“Hey! Let’s go!” She dragged the men toward the car.

They saw the second hit as they ran for the Ford. The sound got buried in the thunder that rolled over them as the first Extender hit the ground kilometers away, across the inlet. The hard clap shook the air, made Gene trip, then stagger forward.

She started the Ford and turned away from the thick column of smoke rising from the launcher. It might erase any fingerprints or DNA they’d left, but it had another purpose too.

She took the run back toward the coast at top speed. The men were excited, already reliving the experience, full of words. She said nothing, focused on the road that led them down to the shore. To the north, a spreading dark pall showed where the first plane went down.

One glance back at the hill told her the gasoline had served as a lure. A chopper was hammering toward the column of oily smoke, buying them some time.

The men were hooting with joy, telling each other how great it had been. She said nothing.

She was happy in a jangling way. Glad she’d gotten through without the friction with Bruckner coming to a point, too. Once she’d been dropped off, well up the inlet, she would hike around a bit, spend some time birdwatching, exchange horrified words with anyone she met about that awful plane crash—No, I didn’t actually see it, did you?—and work her way back to the freighter, slipping by Elmendorf in the chaos that would be at crescendo by then. Get some sleep, if she could.

They stopped above the inlet, leaving the Ford parked under the thickest cover they could find. She looked for the eagle, but didn’t see it. Frightened skyward by the bewildering explosions and noises, no doubt. They ran down the incline. She thumbed on her comm, got a crackle of talk, handed it to Bruckner. He barked their code phrase, got confirmation.

A Zodiac was cutting a V of white, homing in on the shore. The air rumbled with the distant beat of choppers and jets, the search still concentrated around the airfield. She sniffed the rotten egg smell, already here from the first Extender. It would kill everything near the crash, but this far off should be safe, she thought, unless the wind shifted. The second Extender had gone down closer to Anchorage, so it would be worse there. She put that out of her mind.

Elinor and the men hurried down toward the shore to meet the Zodiac. Bruckner and Gene emerged ahead of her as they pushed through a stand of evergreens, running hard. If they got out to the pickup craft, then suitably disguised among the fishing boats, they might well get away.

But on the path down, a stocky Inuit man stood. Elinor stopped, dodged behind a tree.

Ahead of her, Bruckner shouted, “Out of the way!”

The man stepped forward, raised a shotgun. She saw something compressed and dark in his face.

“You shot down the planes?” he demanded.

A tall Inuit racing in from the side shouted, “I saw their car comin’ from up there!”

Bruckner slammed to a stop, reached down for his .45 automatic—and froze. The double-barreled shotgun could not miss at that range.

It had happened so fast. She shook her head, stepped quietly away. Her pulse hammered as she started working her way back to the Ford, slipping among the trees. The soft loam kept her footsteps silent.

A third man came out of the trees ahead of her. She recognized him as the young Inuit father from the diner, and he cradled a black hunting rifle. “Stop!”

She stood still, lifted her binocs. “I’m bird watching, what—”

“I saw you drive up with them.”

A deep, brooding voice behind her said, “Those planes were going to stop the warming, save our land, save our people.”

She turned to see another man pointing a large caliber rifle. “I, I, the only true way to do that is by stopping the oil companies, the corporations, the burning of fossil—”

The shotgun man, eyes burning beneath heavy brows, barked, “What’ll we do with ‘em?”

She talked fast, hands up, open palms toward him. “All that SkyShield nonsense won’t stop the oceans from turning acid. Only fossil—”

“Do what you can, when you can. We learn that up here.” This came from the tall man. The Inuit all had their guns trained on them now. The tall man gestured with his and they started herding the three of them into a bunch. The men’s faces twitched, fingers trembled.

The man with the shotgun and the man with the rifle exchanged nods, quick words in a complex, guttural language she could not understand. The rifleman seemed to dissolve into the brush, steps fast and flowing, as he headed at a crouching dead run down to the shoreline and the waiting Zodiac.

She sucked in the clean sea air and could not think at all. These men wanted to shoot all three of them and so she looked up into the sky to not see it coming. High up in a pine tree with a snapped top an eagle flapped down to perch. She wondered if this was the one she had seen before.

The oldest of the men said, “We can’t kill them. Let ‘em rot in prison.”

The eagle settled in. Its sharp eyes gazed down at her and she knew this was the last time she would ever see one. No eagle would ever live in a gray box. But she would. And never see the sky.

From the Hill – Spring 2014

Administration releases FY 2015 budget request

President Obama’s FY 2015 budget proposal totals $3.9 trillion, of which roughly 63% is mandatory spending such as Social Security payments, roughly 30% is discretionary spending, and the rest is net interest. By comparison, in FY 2010 the split was 55% mandatory and 39% discretionary.

The expected deficit is pegged at $564 billion, a decrease from last year. The budget matches the $1.014 trillion discretionary spending cap agreed to by Congress in December, although the president has proposed an additional $56 billion in discretionary spending on top of this cap, via what’s being called the Opportunity, Growth, and Security Initiative.

Total R&D funding would amount to $135.4 billion, an increase of $1.7 billion or 1.2% above FY 2014 levels. This also represents a $5 billion or 3.9% increase above FY 2013 sequester levels. This does not account for the expected inflation rate of 1.7% this year, which means that total R&D would actually decline slightly in inflation-adjusted dollars. Defense R&D would increase by 1.7% above FY 2014 levels, and nondefense R&D would increase by 0.7%. This represents a departure from recent budgets, which have tended to be more generous to nondefense R&D at the expense of defense R&D. Among the agencies, the largest increases would occur within the Department of Energy, particularly within the National Nuclear Security Administration. On the nondefense side, energy efficiency, renewable energy, and ARPA-E would also fare well, relatively speaking. The U.S. Geological Survey and the Department of Commerce R&D agencies would also receive relatively large boosts. Outside of these few, no other departments would keep pace with inflation.
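
To make the inflation arithmetic concrete, here is a minimal back-of-the-envelope sketch (not from the budget documents themselves) that deflates the FY 2015 request by the expected 1.7% inflation rate, using only the totals cited above; the implied FY 2014 base is an assumption derived from those figures.

# Rough sketch: does a 1.2% nominal increase keep pace with 1.7% inflation?
# Figures are the totals cited above, in billions of dollars; the FY 2014 base
# is implied by the stated $1.7 billion increase, not an official number.
fy2015_request = 135.4
fy2014_base = fy2015_request - 1.7
inflation = 0.017

# Express the FY 2015 request in FY 2014 dollars, then compare with the base.
fy2015_real = fy2015_request / (1 + inflation)
real_change_pct = (fy2015_real / fy2014_base - 1) * 100
print(f"Real change in total R&D: {real_change_pct:+.1f}%")  # about -0.4%, a slight decline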

The Opportunity, Growth, and Security Initiative would provide an additional $5.3 billion for R&D. This includes nearly $1 billion for NIH, over $500 million for NSF, and nearly $900 million for NASA. The initiative would also fund a national network of 45 manufacturing innovation institutes in partnership with industry. However, all of this additional funding would require Congress to raise the current discretionary spending cap or make some attempt to secure this additional R&D funding through cuts elsewhere.

National Science Foundation (NSF). The FY 2015 budget request for the NSF is $7.25 billion, an increase of 1.2% above the FY 2014 estimate. Of that request, Research and Related Activities would receive $5.72 billion, a decrease of $2 million from the FY 2014 estimate. According to NSF acting director Cora Marrett, the administration’s request, if funded by Congress, would help to support 11,000 research grant awards to 2,000 institutions and support 300,000 individual researchers.

National Institutes of Health (NIH). The NIH budget request is $30.3 billion, an increase of $200 million above FY 2014. According to Kathy Hudson, NIH deputy director for science, outreach and policy, the request would allow the agency to award 4% (329) more grants than in 2014 and 13% (1,092) more than in 2013. NIH is requesting $100 million (an increase of $60 million) for the BRAIN initiative to support the development of new tools for mapping brain circuitry and to measure brain activity. In addition, Hudson noted that NIH is requesting $30 million within the Common Fund to launch a new research program modeled after the Defense Advanced Research Projects Agency (DARPA).

U.S. Department of Agriculture (USDA). The USDA request is $2.4 billion for FY 2015, an increase of $29 million (1.2%) above the FY 2014 estimate. According to Catherine Woteki, USDA under secretary for research, education, and economics, the request includes $1.14 billion for the Agricultural Research Service; $1.5 billion for the National Institute of Food and Agriculture, including $325 million for the competitive Agriculture Food Research Initiative (AFRI); $83 million for the Economic Research Service; and $179 million for the National Agricultural Statistics Service. In addition, Woteki announced that the USDA is proposing the creation of three “Innovation Institutes” to be funded at $25 million annually per institute to focus on research in three critical areas: pollinator health, antimicrobial resistance, and a national network for bioproducts manufacturing innovation.

Department of Energy (DOE). The president requested $27.9 billion for DOE, which represents a 2.6% increase above the FY 2014 enacted level. The Office of Science, which houses most of DOE’s fundamental science, would receive a 0.9% increase from FY 2014 enacted levels to $5.1 billion. Outside the Office of Science, priorities include ARPA-E, which would receive a 16.1% increase to $325 million, and energy efficiency/renewables, which would receive a 21.9% increase to $2.3 billion. Meanwhile, reactor research, fossil fuels research, and grid cybersecurity research would all experience moderate decreases in funding.

National Aeronautics and Space Administration (NASA). The budget requests $11.6 billion for NASA R&D, a 1% decrease from FY 2014 levels. Within NASA, the science and the aeronautics directorates would both be reduced from FY 2014 levels, as would development activities related to the Orion Crew Vehicle and the Space Launch System. The science budget would decline by 3.5% to $5 billion, with only the heliophysics program receiving an increase. Aeronautics would decline by 2.7% to $551 million. Activities that would receive budget boosts include exploration systems R&D and the space technology directorate. All programs would benefit from the proposed Opportunity, Growth, and Security Initiative.

Science, Technology, Engineering, and Math (STEM) Education. The president’s FY 2015 budget also includes a request of $2.9 billion for federal agency STEM education programs. In addition, it includes a range of program consolidations and eliminations within nine federal departments and agencies. Overall, the budget request would consolidate or eliminate a total of 31 STEM programs for a total savings of $145 million. It was noted during the White House Office of Science and Technology Policy press conference that the consolidations were implemented within federal agencies rather than by transferring programs between agencies, as was proposed last year. In addition, agencies are to continue to “coordinate to implement the federal STEM Education 5-Year Strategic Plan through the Committee on STEM Education (CoSTEM).”

Special Programs. The president’s budget funds three long-running interagency initiatives. The U.S. Global Change Research Program (USGCRP) would receive $2.5 billion, a 0.5% increase from estimated FY 2014 levels. USGCRP is a multi-agency program that coordinates federal research on climate change and its potential effects. The Networking and IT R&D (NITRD) Program, which receives its largest contributions from the Department of Defense, DOE, and NSF, would receive $3.8 billion, a 2.9% decrease from FY 2014 levels. The National Nanotechnology Initiative would remain unchanged from the FY 2014 level of $1.5 billion.

The newest interagency project, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, would see its FY 2014 funding doubled to $200 million in FY 2015. The program coordinates research within DARPA, NIH, NSF, and the Food and Drug Administration to understand brain function and potentially develop more effective treatments or prevention measures for various diseases of the brain. NIH will contribute $100 million in FY 2015, DARPA will contribute $80 million, and NSF will invest an additional $20 million.

“From the Hill” is adapted from the newsletter Science and Technology in Congress, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

FY 2014 appropriations finalized

On January 17, the President signed the Consolidated Appropriations Act of 2014 (HR 3547), providing appropriations for the remainder of the 2014 fiscal year. The bill operates within the framework established by the December budget deal, which set an overall discretionary spending limit of $1.012 trillion and rolled back about half of the scheduled spending reductions under sequestration.

AAAS estimates place FY 2014 R&D at $136.2 billion, 2.6% above FY 2013 estimates and 4.4% below FY 2012 levels. However, defense and nondefense R&D will move in opposite directions. Defense R&D will fall 1.6% below FY 2013 post-sequester estimates and 10% below FY 2012 levels, while nondefense R&D will rise 7.6% above FY 2013 post-sequester estimates and 2.5% above FY 2012 levels.

Under the omnibus spending bill, many R&D departments and agencies received at least a modest increase above sequester levels, and some fared better than expected. For instance, the Department of Energy’s (DOE) Office of Science, science and technology programs at the National Aeronautics and Space Administration, and research programs at the Department of Agriculture all ended up closer to the higher spending levels recommended by the Senate than the lower House figures, and DOE’s low-carbon energy technology programs avoided the stark cuts proposed by the House. However, the National Institutes of Health (NIH) will remain roughly $700 million below FY 2012 funding levels, and the U.S. Geological Survey and Environmental Protection Agency also do not return to 2012 funding. Even with some positive outcomes for certain agencies, overall federal R&D could drop to 0.80% of the gross domestic product, the lowest level since the end of World War II.

Also embedded in the FY 2014 omnibus appropriations bill is language regarding public access to research and government travel restrictions. Within the Labor-Health and Human Services (HHS) portion of the bill is language that would require researchers who receive federal funding from agencies funded via Labor-HHS to make their accepted research manuscripts publicly available within 12 months. The language extends the NIH policy that has existed for a number of years to agencies such as the Department of Education and the Centers for Disease Control and Prevention.

Elsewhere within the omnibus bill is language that would cap at no more than 50 the number of federal employees attending any single international conference. In addition, it would lay out a series of reporting requirements to improve transparency regarding government travel, including information on the total costs of government-sponsored conferences and on contracting procedures.

Book Review: Lighting the way

A Short Bright Flash: Augustin Fresnel and the Birth of the Modern Lighthouse by Theresa Levitt. New York: W. W. Norton & Co., 2013, 288 pp.

Jody A. Roberts

In A Short Bright Flash: Augustin Fresnel and the Birth of the Modern Lighthouse, Theresa Levitt wastes little time in revealing just how bad things were for sailors in the 19th century. She begins with the grisly tale of la Méduse, a ship run aground off the coast of West Africa in 1816, and the abandonment, murder, and cannibalism that followed. Of the nearly 150 individuals who took refuge on a makeshift raft adrift in the Atlantic Ocean, only 15 survived the 15-day ordeal before being rescued. Not every shipwreck ended so dramatically, but with hundreds of boats lost each year (the British insurer Lloyd’s of London put the number at 362 in 1816 alone), the event and its awful ending struck a sensitive chord with an anxious public.

Thus, the scene was set for Fresnel’s invention of his eponymous lens, which proved to be a miraculous feat of mathematics, optics, and engineering. Lighthouse engineering of the day held that a brighter light could be generated only by increasing the quantity of light available (more oil, more wicks) and reflecting that light with larger reflectors. Fresnel, a French physicist and engineer, challenged this approach by first arguing that the amount of light lost in the system needed just as much attention as the amount of light produced and reflected. That is, by examining the behavior of light and understanding how to maximize it, it would be possible to create a lamp that not only shone brighter and farther, but theoretically could do so with less oil. In turning this theory into reality, Fresnel not only challenged the status quo in lighthouse operations; he also grounded his approach in the contemporaneous debates then raging within scientific circles about the nature of light.

Fresnel’s suggestion that light possessed wave-like properties kept him firmly on the outside of received wisdom. He was not alone in his approach to light, but his careful studies and experiments demonstrating wave properties of light before the scientific elite of France made him a hero of the scientific avant-garde challenging the entrenched authorities in the debates about the nature of light. More importantly for Fresnel, the experiments demonstrated the theoretical possibility of his approach to new lighthouse technologies: the secret was in the lens, not the light source.

Fresnel’s design, based as it was on mathematical precision, required massive lenses carefully constructed through the ordering and placement of each individual piece of glass. Defects in the glass or parts placed at a wrong angle would lead to a scattering of the light and a loss of the focusing power that made the lens work. The craftsmanship required for lens construction resulted not only from the nature of the precision needed for the glass but also from the effort required to cut glass of this size. The introduction of steam power meant lathes could operate faster, yielding more lenses and more lighthouses equipped with the Fresnel system.

Once made, these lenses needed to be installed—a feat that required transporting delicate glass parts to remote ends of the country (and eventually the world), hauling them up giant towers often placed precariously on nearly inaccessible outcroppings of rock, and installing them in rooms scores of feet in the air. And yet somehow it all worked.

New class of engineers

Levitt presents Augustin Fresnel as an unlikely hero of this era. But Fresnel was a product of a radical shift in education happening in France in the 19th century. The creation of the new elite engineering schools in France made possible this rise from obscurity to national hero. Indeed, Fresnel took part in a larger national experiment that not only focused on technical training; it also tied engineering to governance and political power. Engineering for the state was also engineering of the state, a fact embodied in the perhaps even more unlikely rise of a young ballistics and artillery engineer named Napoleon Bonaparte to ultimate power in France (and much of Europe).

France deployed its engineers across the country in the service of the state—building infrastructure, surveying, training its military. Fresnel was one of this new class emerging in France (and indeed spent most of his time overseeing the construction of roads and other infrastructure, a job he absolutely despised). Being an engineer meant the state supported you, but it also meant you supported the state. As Fresnel’s new system found its way across the country, the light stood as a glowing example of French engineering and not just the genius of Fresnel. Indeed, installation of his lighting system at expositions of engineering in Paris (following Fresnel’s early death attributed to consumption) was taken to be an example of the power of France and its new engineers.

Lighthouses and Fresnel’s lens meant more than just safety for sailors; they constituted physical symbols of the expansion of commerce. The loss of a ship of happy travelers would have been tragic indeed. But the loss of a commercial fleet was expensive and disruptive to the national economy. Lighting the coast did not happen all at once; the lighthouse commission in France set priorities based on safety and strategic importance of the ports. Once the coast was lit (a massive and thorough undertaking by the French government, overseen by its engineers), lighthouses with Fresnel lenses began appearing in more remote—but equally strategic—locations: Corsica, Algiers, and Gibraltar among them. Lighthouses equipped with Fresnel lenses became part of commercial infrastructure.

Though Levitt spends much time documenting the resistance to the new lenses among some people and groups in the United States, when the lenses did arrive, their installation, too, was prioritized by commercial importance. When gold was found in California at the close of the 1840s, it took a mere two years for San Francisco to boast new lighthouses using Fresnel lenses—an amazing feat given the distance from the center of manufacturing in France and the demand for lenses at the time.

In a telling tale of the strategic importance of the lighthouses and the power of these lenses, Levitt documents the efforts taken by the Confederacy at the start of the U.S. Civil War to impose a blackout along the southern coast. More remarkable, however, is the care taken by state authorities (and even raiding parties) at the onset of war to dismantle the lenses and hide them away for safekeeping. Although some lighthouses saw their hardware simply smashed into a thousand unusable bits, much of the hardware remained undamaged (if far from its original location) and found its way back to Federal authorities at the end of the war.

Lessons for today

In her telling of the historical emergence and evolution of the modern lighthouse, Levitt digs deep into the technical construction of the first lenses and the methodical placement of lamps as they began to dot the coasts of empires big and small. But for all of the detailed historical description that populates her careful depiction of the Fresnel lens and its production in the 19th century, the book lacks a compelling narrative or even larger context within which this feat can be fully appreciated. Despite that absence (or perhaps in lieu of Levitt’s efforts), it is possible to draw some ideas from the book that may compel further conversation.

To fully understand what happened as these events unfolded and why it was so amazing, it is necessary to understand the scientific debates and technological challenges, as well as the pressing social and political and economic needs, of the time. To focus on any one of these elements without the others is to miss the much bigger picture this story is trying to tell.

The lens was just one part of a larger system; and in this regard the lighthouse is not a singular object, but part of a larger infrastructure of the state. When viewed from this perspective, it is easier to understand why France and the United States took such different approaches to the installation of these new technologies. In fact, it is not only easier but essential for grasping one of the book’s main points. In the largely centralized and technocratic state of France, the institutionalization of new lighthouses equipped with Fresnel lenses followed a “rational plan.” In the United States, factors such as the role of open markets, the status of scientists and engineers, and the reluctance to use (or opposition to) federal funding of large state projects left efforts to update the then-current system stuck in a bureaucratic trap with inadequate funding.

Sound familiar? It should—and that is one of the main lessons of this book. Just think about the debates in the United States today over nuclear power, funding of research, and the role of the university versus the corporation as engines of innovation. The landscape today did not suddenly appear. Treating this terrain of comingled science, technology, politics, and cultural identity with more attention could go a long way toward helping policymakers and stakeholders to appreciate the unique character of these systems. And a better understanding of how these systems came to be can go a long way toward helping us to create alternative possibilities for moving forward.

In all, A Short Bright Flash is a wonderful reminder of just how much effort goes into the construction of the nation’s largely invisible (and crumbling) infrastructure. Perhaps more discussions of this sort might yield a deeper appreciation for the efforts that need to be made to build and maintain an infrastructure for the 21st century.


Jody A. Roberts is director of the Center for Contemporary History and Policy at the Chemical Heritage Foundation in Philadelphia, Pennsylvania.

From the Hill – Winter 2014

Congress passes budget deal, launches frenetic appropriations activity

After a contentious few months that saw a two-week government shutdown, a narrowly averted debt crisis, and continuing politicking over the size and shape of federal expenditures and deficits, Congress has begun to tentatively dip its collective toe in the waters of compromise. The October 17 continuing resolution that ended the shutdown and ensured government would remain open through January 15, 2014, also established a conference committee to bridge the large gap between the House and Senate budgets. That committee, led by Rep. Paul Ryan (R-WI) and Sen. Patty Murray (D-WA), managed to find common ground and reach a limited deal on December 10. The House approved the deal by a 332-94 margin, and the Senate approved it by a 64-36 vote.

For the science community, the key part of the deal is its provisions on discretionary spending. The Ryan/Murray deal would establish discretionary spending targets of $1.012 trillion in 2014 (a nominal increase of about 2.6% above 2013) and $1.014 trillion in 2015. For 2014, this would mean about a $45 billion increase above sequester-level spending, or a rollback of about half the cuts required under sequestration, split between defense and nondefense. The rollback for 2015 is a bit less ambitious: Discretionary spending would rise by only about $19 billion above sequester-level spending. This means about 75% of the spending reduction required under sequestration would remain in effect.

The deal addresses only overall spending targets, not the budgets of individual agencies, but according to recent AAAS estimates, it could boost R&D by $8 billion or more above sequester levels over the next two years. Although it is a welcome development that eases the strain of sequester on science and innovation budgets, the deal does not address the big issues such as taxes and entitlement reform that are driving the deficit debate, and it leaves in place most spending reductions under sequestration through 2021. If the sequestration budget limits remain in effect, it will likely mean tens of billions of dollars in lost R&D funding and the continued decline in federal R&D as a share of the economy.

The focus now shifts back to appropriators, who will have a limited time to complete the work on FY 2014 appropriations started months ago. No doubt, at least some science agencies will see a funding increase above sequester levels as a result of the deal, but the size, shape, and focus of appropriations are yet to be determined. It is possible that appropriators, by choice or necessity, will resort to passing another full-year continuing resolution like the one passed for FY 2013. Such a step would finalize funding but would also somewhat hinder agencies’ ability to start new programs or make changes to existing ones.

The appropriators have only 4 weeks to work out the details. Sen. Barbara Mikulski (D-MD), chair of the Senate Appropriations Committee, and Rep. Harold Rogers (R-KY), chair of the House Appropriations Committee, will lead the effort. The current deal certainly does not mean that pressure to trim many programs will ease. For example, Sen. Thad Cochran (R-MS), the second-ranking Republican on the Senate Appropriations Committee, is contending with a primary challenge from Chris McDaniel, who is backed by the fiscally conservative Club for Growth.

House, Senate release COMPETES Act discussion drafts

The America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act, or the America COMPETES Act (P.L. 110-69), was first signed in 2007 by President George W. Bush. The legislation was intended “to invest in innovation through research and development, and to improve competitiveness of the United States.” The bill was reauthorized in 2010 (H.R. 5116) and is up for reauthorization again.

Currently, there are three drafts circulating Capitol Hill. The House Democrats released a draft America COMPETES Reauthorization Act in late October, with a revision in December. The House Republicans are considering two separate discussion drafts, which were both released in early November: The Enabling Innovation for Science, Technology, and Energy in America (EINSTEIN) Act, which addresses only the Department of Energy’s Office of Science, and the Frontiers in Research, Science, and Technology (FIRST) Act, which deals with the National Science Foundation (NSF), the Office of Science & Technology Policy, the National Institute of Standards and Technology, and science, technology, engineering, and math education. It is important to emphasize that these are discussion drafts only; they have not been formally introduced, and all are subject to change.

Although each of these bills aims to improve U.S. competitiveness and innovation, their content varies and it is difficult to draw comparisons. The House bills crafted by the Republicans of the Science, Space, and Technology Committee do not include funding levels for federal agencies, whereas the Democrats’ draft would authorize five-year funding increases (over sequestration levels), averaging nearly 5% per year, before adjusting for inflation.

The FIRST Act generated the most attention from the scientific community, eliciting concern about a few of the provisions included. For example, the bill would require NSF to produce written justification for accepted grant applications indicating how they satisfy one or more goals outlined in the draft (e.g., national defense), and the justification would have to be publicly available before the grant is awarded.

Furthermore, it would require that researchers certify the veracity of their published research results and would ban principal investigators from receiving federal funding for 10 years if they are found guilty of misconduct.

The New Democrat Coalition, a group of moderate, pro-growth Democratic representatives, did not release a bill, but it has published a “reauthorization agenda” for America COMPETES that outlines principles for improving U.S. innovation. These include supporting basic research, providing a stable source of funding for R&D, supporting small businesses, and expanding public-private partnerships, among others.

Congress in brief

• On December 5, the House passed the Innovation Act (H.R. 3309) by an overwhelming vote of 325-91. Six higher education groups co-signed a statement articulating their opposition to portions of the House bill, expressing concern that changes to the litigant fee system could have a negative impact on researchers’ collaborative efforts.

• On November 18, the House passed the Digital Accountability and Transparency Act (H.R. 2061) by a vote of 388-1. Earlier in November, the Senate Homeland Security and Government Affairs Committee passed its version of the DATA Act (S. 994), which may now be sent to the Senate floor for a vote. Both bills seek to improve transparency of federal spending on contracts, loans, and grants by requiring the establishment of government-wide data standards and reporting requirements for data posted to USASpending.gov. However, the House bill, unlike the Senate bill, includes a provision that essentially codifies Office of Management and Budget (OMB) federal travel restrictions, with additional reporting requirements. As with the OMB rules, for example, it would reduce an agency’s total travel budget by 30% below its FY 2010 spending levels. However, unlike OMB, the House bill would also place a cap of 50 federal employees for a single international conference. In addition, it would require all federal employees participating in conferences to make public all conference materials associated with their attendance (e.g., slides, oral remarks, and video recordings).

• On November 21 Rep. Chaka Fattah (D-PA) introduced the America’s FOCUS Act, which would establish a new Treasury fund from corporate fines, penalties, and settlements. A third of the fund would go toward investments in science, technology, math, and engineering education and youth mentoring, and another third would go to the National Institutes of Health. The Justice Department has reportedly collected billions in corporate fines and settlements, and the money goes to the General Fund of the Treasury when not designated for a specific purpose. Says Fattah: “This bill presents an opportunity to intentionally direct sums from settlements between the federal government and corporate and financial institutions to programs that can improve the life chances of Americans and allow our country to maintain its economic competitiveness.”

• In November Congress sent a bill to the president that would lift a congressionally mandated $30-million cap on how much the National Institutes of Health could spend to retire and care for federal research chimpanzees. NIH will now be able to continue supporting chimps at Chimp Haven, the Louisiana-based federal chimp sanctuary, and move forward with its plan to retire all but 50 of its 360 research chimps.

• On October 29 the House Oversight and Government Reform Committee approved the Grant Reform and New Transparency (GRANT) Act (HR 3316), which seeks to enhance transparency in the federal grant process. AAAS and other scientific and university groups have expressed their misgivings about provisions in a previous version of the bill that would mandate the public disclosure of full grant applications and peer reviewers. In an interview with Science, sponsor Rep. James Lankford (R-OK) indicated he is open to making changes to the bill in response to the concerns of the research community.

Agency updates

• On December 5, the White House released a presidential memorandum that increases the amount of renewable energy that each federal agency is required to use. The memorandum states that “20 percent of the total amount of electric energy consumed by each agency…shall be renewable energy” by FY 2020. The memorandum also requires that federal agencies update their building performance and energy management practices in order to better manage energy consumption.

• The White House recently announced a new $100-million initiative to find a cure for HIV. The project will not require new funds; rather, the money will be re-directed from existing funds, such as expiring research grants.

• The Department of Homeland Security is inviting input into the development of the National Critical Infrastructure Security and Resilience Research and Development Plan. For the purpose of the plan, critical infrastructure includes both cyber and physical components, systems, and networks for the different sectors outlined in the presidential policy directive (PPD-21) on Critical Infrastructure Security and Resilience. That directive called for the development of this R&D plan by February 2015. The call for input includes specific questions pertaining to sector interdependencies; articulation with state, local, and other non-federal issues and responsibilities; prioritization of research areas; and essential elements for the plan.

• On November 21, the Obama Administration outlined its strategy for maintaining what it describes as the U.S. global leadership role in spaceflight and exploration in the new National Space Transportation Policy. The policy reinforces several previously stated administration priorities; however, it differs from prior versions by placing a strong emphasis on accelerating development of commercially built and operated rockets. In order to achieve this, the policy calls on federal agencies to continue supporting the development of private U.S. spaceships to transport astronauts to and from low-Earth orbit, and directs the National Aeronautics and Space Administration to continue working toward a heavy-lift rocket for further travel. Overall, the policy reflects congressional desire to boost commercial-space ventures and protect funding for longer-term, deep-space exploration plans.

• The Obama Administration issued a new rule on November 8 to require health insurers to handle copays, deductibles, and benefits for mental health conditions and substance abuse in the same way that they do physical ailments. The rule “breaks down barriers that stand in the way of treatment and recovery services for millions of Americans,” said Health and Human Services Secretary Kathleen Sebelius. “Building on these rules, the Affordable Care Act is expanding mental health and substance use disorder benefits and parity protections to 62 million Americans.” The long-awaited rule implements the Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008. The Obama Administration emphasized that issuing the rule is part of its approach to reducing gun violence.

• The Environmental Protection Agency has released draft climate change adaptation implementation plans for each of its ten regions and seven national programs. Adaptation will involve anticipating and planning for changes in climate and incorporating considerations of climate change into many of the agency’s programs, policies, rules, and operations to ensure they are effective under changing climatic conditions. Public comments are due January 3, 2014.

• On November 1, President Barack Obama issued an executive order to help federal and state agencies prepare infrastructure to withstand extreme weather events caused by climate change. The order sets up a new interagency Council on Climate Preparedness and Resilience, which replaces the task force set up in 2009, and establishes a Climate Preparedness and Resilience Task Force comprised of federal, state, and local officials who will make recommendations to “remove barriers, create incentives, and otherwise modernize federal programs to encourage investments, practices, and partnerships that facilitate increased resilience to climate impacts, including those associated with extreme weather.”

• The Food and Drug Administration (FDA) has announced two new actions to prevent shortages in drugs used to treat patients. The first is the development of a strategic plan that describes upcoming agency actions to improve responses to potential shortages and addresses manufacturing and quality issues that are often at the root of drug shortages. The second is a proposed rule, with a public comment period of 60 days, requiring manufacturers of medically important prescription drugs to notify the FDA of events likely to disrupt drug supply.

• On October 25, the Office of Science and Technology Policy (OSTP) released a Biological Incident Response and Recovery Science and Technology Roadmap to help ensure that decisionmakers and first responders are equipped with the tools necessary to respond to and recover from a major biological incident. The Roadmap aims to strengthen the national response by categorizing key scientific gaps, identifying specific technological solutions, and prioritizing research activities to enable the government to make decisions more effectively. The Roadmap, which was developed by the interagency Biological Response and Recovery Science and Technology Working Group under the National Science and Technology Council’s Committee on Homeland and National Security, complements the National Biosurveillance Science and Technology Roadmap that was published in June 2013.

My Brain Is in My Inkstand

Drawing as Thinking and Process

My Brain Is in My Inkstand: Drawing as Thinking and Process is an exhibition debuting at the Cranbrook Art Museum, Bloomfield Hills, Michigan, that brings together 22 artists from around the world to redefine the notion of drawing as a thinking process in the arts and sciences alike. Sketches on paper are the first materialized traces of an idea, but they are also an instrument that makes a meandering thought concrete.

Inspired by the accompanying exhibition The Islands of Benoît Mandelbrot, it uses multiple sources to show how drawings reveal the interdependency of mark-making and thinking. The exhibition brings together artists and scientists, basketball coaches and skateboarders, biologists and Native Americans to demonstrate that tracing lines is a prerequisite for all mental activity.

Featured artists include David Bowen, John Cage, Stanley A. Cain, Oron Catts, Benjamin Forster, Front Design, Nikolaus Gansterer, legendary basketball coach Phil Jackson, Patricia Johanson, Sol LeWitt, Mark Lombardi, Tony Orrico, Tristan Perich, Robin Rhode, Eero Saarinen, Ruth Adler Schnee, Carolee Schneemann, Chemi Rosado Seijo, Corrie Van Sice, Jorinde Voigt, Ionat Zurr, and many more. It also integrates work from the collections of the Cranbrook Institute of Science and the Cranbrook Center for Collections and Research.

A live performance by artist Tony Orrico took place on November 16 and 17 during which he explored his own body and its physical limits as he created a drawing that remains in the museum for the duration of the exhibition. Artist and composer Tristan Perich installed a live Machine Drawing that uses mechanics and code to cumulatively etch markings across a museum wall.

The title of the exhibition derives from a quotation by philosopher, mathematician and scientist Charles Sanders Peirce, whose work involving the over- and under-laying of mathematical formulas with pictographic drawings is presented for the first time. The exhibition is on view November 16, 2013, through March 30, 2014. An exhibition catalog My Brain Is in My Inkstand: Drawing as Thinking and Process, edited by Nina Samuel and Gregory Wittkopp and published by Cranbrook Art Museum, is available.