
An Archaeology of Knowledge

Mark Dion


In recent years, much thought and research has been devoted to the visualization of information and “big data.” This has fostered more interactions with artists in an attempt to uncover innovative and creative ways of presenting and interpreting complex information.


How meaning and knowledge are structured and communicated through objects has long interested artist Mark Dion. In Dion’s installations, everyday objects and artifacts are elevated to an iconic status by association with other objects. Visual meaning is established in much the same way that a natural history collection might reveal information about the specimens it contains. Indeed, Dion’s work harks back to the 17th-century “cabinet of curiosities,” where objects were gathered so that their meaning could be considered in relation to other artifacts.

 

“An Archaeology of Knowledge,” a permanent art installation for the Brody Learning Commons, the Sheridan Libraries & University Museums, The Johns Hopkins University. Below is a selection of artifacts from the cases and drawers:

(a) “Big Road to Lake Ahmic,” 1921, etching on the underside of a tree fungus by Max Brödel (1870-1941), first director of the Department of Art as Applied to Medicine; (b) Glaucoma demonstration model, ca. 1970s; (c) Nurse dolls, undated; (d) Medical field kit, 20th century; (e) Dog’s skull, undated; (f) Collection of miniature books, 16th–20th centuries. Photo by John Dean.


In 2011, Johns Hopkins University (JHU) commissioned Dion to create an installation for its Brody Learning Commons, Sheridan Libraries, and university museums in Baltimore that would convey the institution’s diverse and expansive history. The installation featured here, titled An Archaeology of Knowledge, sought to document and communicate information about hundreds of historic artifacts, works of art, and scientific instruments from across the collections and units of JHU and Johns Hopkins Medical Institutions. Elizabeth Rodini, director of the Program in Museums and Society at JHU, wrote that this work “reveals the layers of meaning embedded in an academic culture…. Although some of us … work regularly with objects, even we often fail to consider how these objects are accumulated and brought into meaningful assemblages.”


Left: Mark Dion, concept drawing for “An Archaeology of Knowledge,” 2011. Courtesy Mark Dion Studio, New York, NY.

 

Some of the artifacts were drawn from intentional collections and archives across JHU’s disciplinary divisions. Others were found through an extensive search by the artist, curator Jackie O’Regan, and other collaborators through storage vaults, attics, broom closets, and basements, as well as through encounters with individuals on campus who collected and even hoarded the “stuff” of knowledge that makes up the material fabric of JHU. Even the cabinets themselves were a part of JHU history, repurposed from the Roseman Laboratory.

Dion writes, “This artwork hearkens back to the infancy of our culture’s collaborations across the arts and sciences, as each artifact takes on a more poetic, subjective, and perhaps allegorical meaning, all the while maintaining its original status as a tool for learning…. An Archaeology of Knowledge provides us with an awesome, expansive visual impression that evokes wonder, stimulates curiosity, and produces knowledge through a direct and variegated encounter with the physical world.” Dion’s work reminds us of the power of objects to convey meaning and to preserve history.

Below is a selection of artifacts from the cases and drawers, including an early X-ray tube, a 16th-century Mesoamerican stone face plaque, a 1st-century Roman pedestal with inscription, early 20th-century lacrosse balls, an anesthesia kit, and assorted pressure gauges and light bulbs.


Below: pipet bulbs, diagnostic eyeglasses, an early 20th-century X-ray tube, and a late 19th-century egg collection.

Drawer photos by John Dean.


Below: a selection of artifacts from the cases and drawers, including a late 18th-century English linen press, an early 20th-century practice clavier, and an 1832 portrait of “Mrs. Samuel Hopkins” by Alfred Jacob Miller that was commissioned by her son Johns Hopkins.


Below: various trophies and awards.


Mark Dion has exhibited his artwork internationally, including at the Tate Gallery, London, and the Museum of Modern Art, New York. He is featured in the PBS series Art:21. He teaches in the Visual Arts Department of Columbia University.

—JD Talasek

Images courtesy of the artist and Johns Hopkins University. Photos by Will Kirk/Homewood Photography, unless otherwise noted.

Archives

Ruben Nieto, Thor Came Down to Help Mighty Mouse Against Magneto, oil on canvas, 32 x 48 inches, 2013.

Artist Ruben Nieto grew up in Veracruz, Mexico, reading U.S. comic books, a memory that plays an important role in his creative process today. His paintings recast the formal visual elements of comic books with a strong influence of Abstract Expressionism and Pop Art. Using computer software to transform and alter the structure of the original comic book drawings, Nieto proceeds to make oil paintings based on the new decontextualized imagery. In his own words, “Forms and shapes coincide and drift on planes of varying depth, resulting in comic abstractions with a contemporary ‘pop’ look.”

Nieto received his Master of Fine Arts degree in Arts and Technology from the University of Texas at Dallas in 2008 and has since exhibited throughout the United States and Mexico.

Image courtesy of the artist.

Profiteering or pragmatism?

Windfall: The Booming Business of Global Warming

by McKenzie Funk. New York, NY: Penguin Press, 2014, 310 pp.

Jason Lloyd

In the epilogue of his book, Windfall: The Booming Business of Global Warming, McKenzie Funk finally outlines an argument that had thrummed away in the background of the preceding twelve chapters, present but muffled under globe-trotting reportage and profiles of men seeking profit on a warming planet. It’s not a groundbreaking argument, but it provides a sense of Funk’s framing. “The hardest truth about climate change is that it is not equally bad for everyone,” he writes. “Some people—the rich, the northern—will find ways to thrive while others cannot … The imbalance between rich and north and poor and south—inherited from history and geography, accelerated by warming—is becoming even more entrenched.”


The phrasing here confuses an important distinction. Is it climate change that is exacerbating global inequities? Or is it our response to climate change? To varying degrees it is both, of course, but differentiating them is necessary because our response will have significantly greater consequences for vulnerable populations than climate change itself. Funk largely conflates the two because he views climate change and global inequalities as stemming from the same source: “The people most responsible for historic greenhouse gas emissions are also the most likely to succeed in this new reality and the least likely to feel a mortal threat from continued warming.” This is as facile a perspective as the claim on the previous page that climate change is “essentially a problem of basic physics: Add carbon, get heat.”

The problem is not that these statements are untrue. It’s that they are so simplistic that they obscure any effective way to deal with the enormous complexity of climate change and inequality. To be fair, Funk notes elsewhere that how we respond to climate change may magnify existing power and economic imbalances. But he means the response that is the subject of Windfall: people in affluent countries discovering opportunities to profit off the impacts of climate change. It does not seem to have occurred to him that the conventional climate strategy—to mitigate rather than adapt, to minimize energy consumption rather than innovate, to inhibit fossil fuel use in even the poorest countries—may entrench global inequalities much more effectively than petroleum exploration in the Arctic or genetically modified mosquitos.

It is tempting to agree with Funk’s framing. There is “a more perfect moral clarity” in the idea that the rich world must cease carbon dioxide emissions for the good of all or risk an environmental disaster that will burden the poor the most, and that those seeking financial gain in this man-made catastrophe are simply profiteers. But it is a clarity premised on an unfounded faith in our current ability to radically cut carbon emissions, and it ignores some destabilizing questions: Where does China fall in this schema, for example? What about Greenland, which, as Funk notes, stands to gain hugely from climate change without having contributed anything to the problem? Don’t drought-resistant crops with higher, more predictable yields benefit both seed companies and poor farmers?

Funk elides these questions and stresses that he is not identifying bad guys but illuminating “the landscape in which they live,” by which he means a global society consumed by “techno-lust and hyper-individualism, conflation of growth with progress, [and] unflagging faith in unfettered markets.” If this is what he sees when he looks out at the global landscape, he is using an extraordinarily narrow beam for illumination.


The people that Funk spotlights in this landscape are the hedge funders, entrepreneurs, and other businessmen (apparently no women are profiting from climate change) who are finding ways to thrive on the real, perceived, and anticipated effects of global warming. These effects are divided into three categories: melt, drought, and deluge. There are upsides—for some, at least—to all three. A melting Arctic means previously inaccessible mineral and petroleum deposits become exploitable, and newly ice-free shipping lanes benefit global trade. Drought offers opportunities for investing in water rights, water being a commodity that will likely increase in price as it becomes scarcer in places like the U.S. West and Australia. And rising sea levels allow Dutch engineers to sell their expertise in water management to low-lying communities worldwide.

The issues raised in the two best chapters, about private firefighting services in California and an investor’s purchase of thousands of acres of farmland in newly independent South Sudan, are not new and arguably have less to do with climate change than with social and economic dynamics. But these chapters stand out because of the men profiled in them. Funk has a terrific eye for the vanities of a certain type of person: the good old boy who believes himself a straight-talker, rejecting social niceties and political correctness to tell it how it is, but is mostly full of hot air, pettiness, and self-interest.

The wasabi-pea-munching Chief Sam DiGiovanna, for example, leads a team of for-profit firefighters employed by insurance giant AIG to protect homes from forest fires. He calls media outlets to see if they’d like to interview him on his way to fight fires in affluent neighborhoods in the San Fernando Valley. (Their protection efforts are mostly useless, as it turns out, because of a combination of incompetence on the part of his Oregon-based dispatchers and the effectiveness of public firefighters.) It is genuinely appalling to read that because Chief Sam’s team mimics public firefighters—uniforms, red fire-emblazoned SUVs with sirens, pump trucks—a neighbor of one of their clients mistakenly believes they are in the neighborhood to fight the blaze, not protect individual client homes. As she points out where the team can access the fire, Chief Sam lamely stands around and says that more resources are coming, unwilling to abandon the illusion that they are acting in the public interest.

Funk travels with investor Phil Heilberg to South Sudan to finalize Heilberg’s leasing of a million acres of the country’s farmland, a deal that would make him one of the largest private landholders in Africa. Attempting to acquire the signatures of Sudanese officials in order to legitimate his land deal and pacify investors in the scheme, Heilberg, who compares himself to Ayn Rand’s protagonists and witlessly psychoanalyzes the warlords who keep blowing him off, seems mostly out of his element. He leaves South Sudan amid the chaos of its fight for independence without getting his signatures. Other nations pursuing land deals seem to have had more luck; countries ranging from India to Qatar have leased or purchased vast tracts of farmland in poorer countries.

Fun as it is to watch Funk puncture the petty vanities of these men, mostly by simply quoting them, it is impossible to grasp the bigger picture from these chapters. At one point Funk compares public firefighting to mitigation, or “cutting emissions for the good of all,” and Chief Sam’s private firefighting to adaptation efforts in which “individual cities or countries endeavor to protect their own patches.” (The failure of a mitigation-dominated approach to cutting global emissions goes unmentioned.) A libertarian abandonment of public goods such as firefighting would indeed be calamitous, but we don’t seem to be in any danger of that occurring. If Chief Sam’s outfit is anything more than an apparently ineffectual experiment on the part of insurance companies, Funk does not say what it is.

The same is true of his Wall Street farmland investor. Heilberg appears feckless rather than indicative of some trend of colonizing climate profiteers. Funk illustrates why working with warlords is a bad idea from both a moral and a business perspective, but he never articulates what the effect of Heilberg’s farming plan, if successful, would be. Funk ominously notes that private militias ravaged South Sudan during the civil war of the 1990s, but he doesn’t connect that history to current foreign land purchases. Heilberg, for his part, planned to farm his land and sell crops in Sudan before selling the food abroad. It is also not obvious what countries like China or Egypt plan to do with the land they have acquired in places such as Sudan and Ethiopia, or how leasing farmland differs from other forms of foreign direct investment.

Furthermore, it’s sometimes difficult to figure out who, exactly, is profiting. Funk devotes half a chapter to Nigeria’s construction of a “Great Green Wall,” a line of trees intended to slow desertification in the country. But desertification results mostly from unsustainable agricultural methods. How climate change may impact the process is unknown, especially since climate models for sub-Saharan Africa are notably variable. Few people seem to think that the green wall will slow the Sahara’s expansion. The profit-generating capacity of a tree-planting scheme dominated by a Japanese spiritual group (one of the weirder details of the project) is left unexplained.

Geoengineering is another example. Although Intellectual Ventures (IV), an investment firm headed by Microsoft entrepreneur, cookbook writer, and alleged patent troll Nathan Myhrvold, may hold patents on speculative geoengineering technologies, how the company could profit from them is not clear. Distasteful as IV’s practices may be, is it necessarily a bad thing that some entities might profit from technologies that allow people to adapt and thrive in a climate-changed world, whether through solar radiation management, improved mosquito control, or better seawalls?

Funk clearly sees this idea and what he calls “techno-fixes” as opportunism and as relinquishing our duty to mitigate climate change through significantly cutting carbon emissions or consumption. Despite peevish asides such as the fact that the “Gates Foundation has notably spent not a penny on helping the world cut carbon emissions” (quite possibly because emissions reductions have little to do with helping poor people), Funk does not outline what radical emissions reductions would entail.

Presumably, though, an effective approach to lowering carbon emissions requires both the public and private sectors, and private sector involvement means that someone sees an opportunity to profit. The notion that corporations will respond to incentives that erode their bottom lines—or, for that matter, that governments will enact tax or energy policies to the detriment of their citizens—does not correspond to what we have learned from thirty years of failure to adequately address climate change and reduce carbon emissions. The task, then, is to rethink our strategy for transitioning to a low-carbon global society and, as importantly, equitably adapting to an unavoidably warming climate. Where are the opportunities for achieving these goals, and how do we design our strategies to benefit as many people as possible? Stuck in the conventional climate framework, Windfall does not provide any useful answers.

Funk adopts the position that he is unearthing some uncomfortable truths: “Environmental campaigners shy away from the fact that some people will see upsides to climate change.” Environmental campaigners who have chosen to ignore the blindingly obvious may indeed not want to acknowledge that climate change will produce winners and losers. But for everyone else, Funk provides a narrative of familiar villains: Royal Dutch Shell, Monsanto, Wall Street bankers, African warlords, genetically modified organisms. To those firmly entrenched in a particular view of the world, Windfall is the validating story of profit-seekers in the rich world who have brought us to the brink of environmental catastrophe and will now find a way to make money off it. If only it were this straightforward.

It is not just the rapaciousness of corporations, the selfish behavior of billions of unthinking consumers, or even the resource-intensive economies of what neo-Marxists always optimistically call “late capitalism” that is ushering in the Anthropocene. Climate change results from the fact that every facet of modern life—the necessities and comforts the vast majority of us enjoy, demand, or aspire to—contributes to the emissions that are warming the planet. If we are going to manage this condition in a pragmatic and ethical way, it will take a great deal of imagination to find the opportunities that climate change presents, including financial opportunities, for making the world a more prosperous, more resilient, and more equitable place.

Jason Lloyd (jason.lloyd@asu.edu) is a project coordinator at Arizona State University’s Consortium for Science, Policy, and Outcomes in Washington, DC.

Imagining the Future City

RIDER W. FOLEY
DARREN PETRUCCI
ARNIM WIEK

A rich blend of engaging narrative and rigorous analysis can provide decisionmakers with the various perspectives they need when making choices with long-range consequences for cities around the world.

An ashen sky gives way to streaks of magenta and lilac across the Phoenix cityscape in 2050. L’yan, one of millions of late-night Creators, walks slowly through the fields of grass growing in the elevated honeycomb transportation network on her way back from the late-night block party. L’yan has only a short trip to her pad in downtown Phoenix. She, along with 10,000,000 fellow Creators, has just beaten the challenge posted on the PATHWAY (Privileged Access-The Hacker WAY) challenge board. L’yan shivers, a cool breeze and the feeling of success washing over her. She had gained PATHWAY access during her ninth year in the online Academy of Critically Adaptive trans-Disciplinary Engineering, Mathematics, Informatics, & Arts (ACADEMIA). She dropped out after achieving Creator status. Who needs a doctorate if you have access to PATHWAY challenges? Research funds are no longer tied up in disciplinary colleges and universities. In Phoenix, as in many innovation centers around the world, social stratification is no longer determined by race, gender, or family wealth; instead, it is based on each person’s skills in problem solving and adaptive learning, in constructing and shaping materials, and in writing and deciphering code. Phoenix embraces the ideals of individual freedom and creativity, and in 2035 it amended its zoning to allow pads (building sites) on which Creators build towers. Pads are the basis of innovation and the foundation blocks for the complex network of interconnected corridors that hover above the aging city streets. Today, in 2050, the non-Creators, the squares, live off-pad in relics: detached houses in the old (2010-era) suburbs at the periphery of the city center.

Science fiction uses personal narratives and vivid images to create immersive experiences for the audience. Scientific scenarios, on the other hand, most often rely on predictive models that capture the key variables of the system being projected into the future. These two forms of foresight—and the people who practice them—typically don’t engage with one another, but they should.

Scientific scenarios are typically illustrated by an array of lines on a graph representing a range of possible futures; for example, possible changes in greenhouse gas emissions and atmospheric temperatures over the next several decades. Although such a spectrum of lines may reflect the results of sophisticated climate models, it is unlikely to communicate the information decisionmakers need for strategizing and planning for the future. Even the most sophisticated models are simplifications of the forces influencing future outcomes. They present abstract findings, disconnected from local cultural, economic, or environmental conditions. A limited number of continuous lines on a graph also communicate a sense of control and order, suggesting that today’s choices lead to predictable outcomes.

Science fiction stories, in contrast, can use rich and complex narratives to envision scenarios that are tangible and feel “real.” Yet science fiction also has its obvious limits as a foresight tool. To be effective, it must be driven by narrative, not by science or the concerns of policymakers. Scenarios constructed through collaborations that draw from the strengths of science and science fiction can help decisionmakers and citizens envision, reflect, and plan for the future. Such rich and embedded scenarios can reveal assumptions, insights, and questions about societal values. They can explore a society’s dependence on technology, its attitudes about the market, or its capacity to effect social change through policy choices. Scenarios can challenge linear cause-effect thinking or assumptions about rigid path dependencies. People are often ready for more complexity and have a greater appreciation of the intertwined forces shaping society after engaging with such scenarios. To illustrate this, we describe a recent project we directed aimed at helping decisionmakers think through the implications of emerging nanoscale science, technology, and innovation for cities.

Constructing scenarios

Sustainability science develops solution options for complex problems with social, economic, and environmental elements, reaching from local to global scales. Design thinking synthesizes information from disparate sources to arrive at design concepts that help solve such complex problems and advance human aspirations, from the scale of the body to the scale of the city. In this project we used both sustainability science and design thinking to map, model, and visualize alternative socio-technical futures that respond to the mounting sustainability challenges facing Phoenix, Arizona.

Currently, science policy in the United States and across the globe is justifying significant investments in nanotechnology by promising, for example, improved public health, water quality, food productivity, public safety, and transportation efficiency. In Phoenix, regional efforts are under way in each of these sectors. The nanotechnologies envisioned by researchers, investors, and entrepreneurs promise to reshape the buildings, infrastructures, and networks that affect the lives of the city’s residents. Furthermore, Phoenix, like many urban centers, is committed to diversifying the regional economy through investments in high-tech clusters and recruiting research-intensive companies. It is already home to companies such as Intel, Honeywell, Orbital Sciences, and Translational Genomics. These companies promise jobs, economic growth, and the benefits of novel technologies to make life easier, not only for Phoenix residents but for consumers everywhere.

We consulted with diverse stakeholders including “promoters” (such as entrepreneurs, funding agencies, staffers, and consultants), less enthusiastic “cautious optimists” (members of the media, city officials, and investors), and downright “skeptics” (staff at social justice organizations, regulatory agencies, and insurance companies). These urban stakeholders have rival objectives and values that highlight the interwoven and competing interests affecting the city’s social, technological, and environmental characteristics. Repeated interactions between the research team and stakeholders led to relationships that were maintained for the duration of the two-year study.

A mixed-method approach to foresight

In collaboration with these diverse stakeholders, the scenarios explore the following questions: In Phoenix in 2050, who is doing what in nanotechnology innovation, why are they doing it, and with what outcomes (intended and unintended)? How conducive are different models of nanotechnology innovation to mitigating the sustainability challenges Phoenix faces in 2050? We used 2050 as the reference year because it is beyond the near-term planning horizon, yet still within the horizon of responsibility to today’s children.

In the initial stages of research, we collected elements for the scenarios directly from stakeholders through interviews, workshops, local media reports, and public events, and from documents published by academic, industry, government, and nonprofit organizations. That review process yielded a set of scenario elements (variables) in four relevant domains: models of innovation, societal drivers, nanotechnology applications, and sustainability challenges.

(1) Models of innovation represent distinctly different patterns of technological change: market-pull innovation is the conventional process of product development and commercialization; social entrepreneurship innovation aligns the interests of private entrepreneurs with the challenges facing society through diverse public-private partnerships; closed collaboration innovation is based on public-private partnerships restricted to a limited number of elite decisionmakers; and open-source innovation leverages the skills of individuals and collectives to generate intellectual property without retaining exclusive rights to it.

(2) Societal drivers enable and constrain people’s actions in the innovation process: entrepreneurial attitudes; public (and private) funding; academic capacities; risk-mitigating regulations (public policy) and liability protection (private activity); and capacity for civic engagement.

(3) Nanotechnology applications result from the innovation process and range from “blue sky” (very early development) to “ubiquitously available.” The applications used in our study include multifunctional surface coatings; energy production, transmission, and storage systems; urban security applications; and nano-enhanced construction materials. All applications are profiled in an online database (http://nice.asu.edu).

(4) Sustainability challenges—mitigated or aggravated through innovation processes—include economic instabilities due to boom-bust cycles of land development and consumer behavior; emerging problems with the reliability of electricity and water systems due to population shifts, aging infrastructure, and future drought conditions; overinvestment in energy- and emission-intense automobile transportation infrastructure; increasing rates of childhood obesity and other behavioral diseases; social fragmentation along socioeconomic and nationality lines; and limited investment and poor performance in public education. The Phoenix region faces each of these challenges today. How (or whether) they are addressed will affect the city’s future.

We vetted this set of scenario elements through interviews and a workshop that together included 50 experts in high-risk insurance, venture capital, media, urban economic development, regulation, patent law and technology transfer, nanoscale science and engineering, and sustainability challenges. We analyzed the consistency among all scenario elements and generated 226,748,160 computer-based combinations of them. Inconsistent combinations were eliminated, and a cluster analysis yielded a final set of four scenarios (based on the four innovation models). Technical descriptions summarized the key features of each scenario. Finally, a narrative was written for each scenario (such as the one for the open-source innovation scenario at the beginning of this article). Each narrative starts at sunrise to depict a day in the life of a person in Phoenix in 2050.
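The combinatorial step in this process can be sketched in code. The short Python example below is a minimal, hypothetical illustration of generating raw combinations of scenario elements and filtering out internally inconsistent ones; the element lists and the pairwise consistency judgment are invented placeholders, not the project’s actual data or software, in which expert-scored consistency across the full element set produced the more than 226 million combinations described above.

```python
from itertools import product

# Hypothetical scenario elements in the four domains described above.
# The actual study used a much larger, expert-vetted set of elements.
domains = {
    "innovation_model": ["market-pull", "social entrepreneurship",
                         "closed collaboration", "open-source"],
    "public_funding": ["low", "moderate", "high"],
    "application": ["surface coatings", "energy systems",
                    "urban security", "construction materials"],
    "challenge": ["water reliability", "transportation lock-in",
                  "childhood obesity", "public education"],
}

def pairwise_consistency(a: str, b: str) -> int:
    """Expert-style consistency score: 0 = contradictory, 1 = compatible.
    A single illustrative contradiction stands in for a full matrix of
    expert judgments: closed collaboration presumes large public programs,
    so it is scored as inconsistent with low public funding."""
    contradictions = {("closed collaboration", "low")}
    if (a, b) in contradictions or (b, a) in contradictions:
        return 0
    return 1

def is_consistent(combo: tuple) -> bool:
    """A combination survives only if every pair of its elements is compatible."""
    return all(
        pairwise_consistency(combo[i], combo[j]) > 0
        for i in range(len(combo))
        for j in range(i + 1, len(combo))
    )

keys = list(domains)
all_combos = list(product(*(domains[k] for k in keys)))
consistent = [dict(zip(keys, c)) for c in all_combos if is_consistent(c)]

print(f"{len(consistent)} of {len(all_combos)} raw combinations are internally consistent")
# A clustering step over the surviving combinations (for example, grouping
# them by shared innovation model) would then reduce them to a handful of
# representative scenario skeletons for narrative writing.
```

In the study itself, this kind of filtering and clustering is what reduced hundreds of millions of raw combinations to the four scenarios built around the four innovation models.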

The narratives were used as the basis for a graduate course that we taught at Arizona State University’s Design School. Students were asked to develop urban designs from the scenario narratives. The challenge for the students was that the narratives were neither architectural design specifications nor articulations of typical design problems. One student joked, “We are working with material too small to see, in a future that doesn’t exist, at a physical scale bigger than any other design studio project.” (In contrast, the graduate design studio next door was designing a 10-story law school for an existing site in downtown Phoenix.)

Students first converted the scenario narratives into visual storyboards, from which they developed initial urban design proposals. The proposals were reviewed by a panel of experts, including engineers, real estate developers, social scientists, and community advocates. Students formulated suppositions, for example, in the social entrepreneurship innovation scenario, that boundaries between public and private property are blurred, or, in the open-source innovation scenario, that restrictive building codes are eased exclusively for Creators in exchange for the benefits offered to the city. The suppositions served as a point of departure for the final urban design proposals. Ideas poured forth throughout the process, as students generated thousands of sketches, drawings, and illustrative boards to test their urban design proposals.

Each student dedicated 60 or more hours per week to the project. In turn, the Design School offered abundant technical and social resources to enable their productivity. Students were given a budget to build their lab and create an environment suitable for the project. Every Friday they participated in group coaching led by a clinical psychologist, a faculty member at the Design School. A filmmaker worked with the students on illustrating the final urban design proposals in short videos.

By the end of the semester, the students had created four videos—one for each scenario—offering a guided tour of a nano-enhanced Phoenix in 2050. The videos were reviewed by a panel of experts, including land developers, technology specialists, architects, sustainability scholars, urban designers, and social scientists. Over the summer, a group of six students incorporated feedback from the end-of-semester review and condensed the four scenarios into two. They produced three-dimensional models and polished the final video, entitled PHX 2050 (http://vimeo.com/88092568). The 15-minute video exposes audiences to distinctly different futures of nanotechnology in the city—from drivers to impacts. It has been used in high-school classrooms in Phoenix; science policy workshops in Washington, DC; and seminars, including one hosted by the U.S. Green Building Council with professionals from the construction sector. The video sparked new conversations and stimulated people to consider, simultaneously, the social and physical elements of the city, the role of technology, and divergent future outcomes.

The nano-enhanced city of the future: Phoenix in 2050

In addition to the movie and the four “day-in-the-life” vignettes, the students prepared graphic images that visually capture the essence of the scenarios and general descriptions of the key underlying elements. Samples of each of these are provided here.

Market-driven innovation: Suppositions

“Market pull” is the dominant mode of innovation and problem-solving to meet user demands. Market mechanisms efficiently meet the demand for low-cost goods, such as personal electronics, provided by private corporations and entrepreneurs alike. Product competition affords comfort and convenience-based products that ensure the “good life.”

Citizens hope to become wealthy and famous entrepreneurs. Government funding agencies focus on small business research grants, as a means to privatize and market technologies created in university and federal labs. Venture capitalists host regional and national conferences and invite researchers, budding entrepreneurs, and program managers. These forums offer critical feedback to technology developers and funding agencies on how to get technologies closer to market before private investments are made.

Advances in nanotechnology support legacy energy and transportation systems, which gain just enough efficiency to stave off the collapse of aging infrastructure. Battery efficiency allows cars to run exclusively on electric motors, yet the existing electrical power supply remains fossil dependent. Nano-enabled materials coat glass facades and are embedded in the electrical systems of buildings.

Society is divided between the rich and minimum-wage earners, with the middle class having disappeared decades ago. Pressing urban sustainability challenges amplify stress among people, the economy, and the environment.


Market-driven innovation: Will the sun rise in Arizona?

Rays of sunlight break across Nancy’s bed. The window’s tinting melts away as the night sky transforms into a grayish-purple aurora in anticipation of morning. Nancy awakens. Another day to fight for solar energy has begun, and the aroma of freshly brewed coffee greets her. She sips her coffee and reviews her notes for the upcoming 2050 Arizona Town Hall. She scoffs. These meetings have been going on for more than a half-century, since before 2010.

And where are they today? No different than in 2010, maybe a notch hotter at night, with water restrictions being imposed, but the real lack of change is in the energy sector, the lifeblood of any city. The market price of solar has never quite caught up with the marginally decreasing price of nuclear, coal, and natural gas. There are a hundred reasons, a thousand little incremental changes in technology and policy, that have advantaged legacy energy providers and continuously crippled the solar industry. Many point to the little-known Arizona Corporation Commission—the decision-making body that sets Renewable Energy Standards for state-regulated electrical utilities in Arizona, a state with 360 days of full sun every year. A political action group has supported candidates who have undermined the solar industry and quietly propped up the legacy energy sources relied on by the centralized utilities.


Closed collaboration: A world under control

Ja’Qra awakes to the morning rays gently easing their way through the blinds. The “Desert Sunrise” is programmed into the Home Intelligence System, which syncs every second with the Community Health Management system. Those systems are responsible for Ja’Qra’s health and security. They update the Maricopa Sheriff’s office every two seconds, ensuring almost real-time security updates. Since the Arizonians for Citizen Transparency Act came into effect in 2024, all children have been encoded with their social security numbers, embedded within eighty-one discrete codons using synthetic G-A-C-T sequences. Ja’Qra validates her status as awake. Her routine is soothing. She depresses her hands in a semi-solid gel that fills the bathroom sink monitoring station. It massages her hands, lightly scrubs the skin, applies a novel daily nail polish pattern, and painlessly extracts 10 to 20 dead skin cells to verify Ja’Qra’s identity. A fully integrated personalized medicine program in Arizona requires full participation by all residents to populate the database of genetic diseases. Full citizen participation also provides the baseline health information from which illnesses can be identified as anomalies and treated preventatively. Ja’Qra dutifully reviews the prescribed daily health reports and consumes the breakfast MEAL (Medically Effective And Lovable).

Closed collaboration innovation: Suppositions

Mission-oriented government agencies, like the Department of Defense and National Institutes of Health, collaborate with private contractors to create novel technological solutions to social problems. By concentrating power in large administrative units, solutions are implemented with controlled technologies to address infrastructure, security, and public health challenges.

Citizens demand economic stability, security and universal health care. Clean water and air also garner unquestioned public support. A few privileged decisionmakers direct public funding for nanotechnology innovation. This ensures that highly educated experts in the field design technological solutions that align with each federal agency’s mission.

Future success is expected to mirror historic feats of science and engineering, exemplified by the atomic bomb and penicillin. Federal agencies react swiftly to identified threats and challenges. This has led to the containment of threats and has mitigated many stressors of urban life, the economy, and the environment. Urban challenges are addressed with the orderly deployment of nanotechnology, such as ensuring universal health care by monitoring everyone’s health with real-time analytics and precise pharmacological treatments.

The city is reminiscent of Singapore—all clean and shiny, with buildings and infrastructure protected by integrated security systems. Federal programs provide energy, water, state security, and health care. Public schools rely on a memorization-style curriculum and are seldom capable of producing adaptive learners.

However, the narrow perspective of the homogeneous decisionmakers leads to unforeseen outcomes, including the collapse of the creative class. Societal hierarchies persist as privileged families remove their children from public schools in favor of elite education institutions that sharpen a child’s problem-solving skills and thus improve their future employment opportunities.

Social entrepreneurship: How communities solve problems

Dark clouds give way to the morning’s rays. Jermaine awakes to the pungent aroma of creosote oils mixed with ozone, the smell of rain and the promise of wildflowers in the Southwest. The open window lets in light, fresh air, and the sounds of friends and neighbors. Jermaine worked late at the CORE (Collective Of Researchers and Entrepreneurs) facility yesterday. CORE is helping the City of Phoenix address the contaminated groundwater just north of Sky Harbor Airport. The plume had been contained in the 1990s and just left there. The effects of drought in the Salt, Verde, and Colorado Rivers have prompted the city to revisit this long-abandoned water reserve. Jermaine’s formal education and leadership qualities have made him an obvious choice to lead this project. CORE is composed of financiers, lawyers, citizens, scientists, engineers, city water planners, and a rotating set of college professors and local high school teachers. CORE takes on challenges and enters problem-oriented competitions formally organized by federal, tribal, state, county, and city governments. Jermaine is not going to “make it big.” Then again, Jermaine didn’t study hydrogeology to get rich. Back in 2010, Jermaine heard that nZVI (nanoscale zero-valent iron) could solve the problem, but testing stalled and nZVI was abandoned. Today, in 2050, he aims to renew decontamination efforts in Phoenix.


Social entrepreneurship innovation: Suppositions

Social entrepreneurship innovation attempts to bring civil society together to solve challenges. City, state, federal, and international governments work to identify problems that demand technical and social change. This practice of collectively addressing societal challenges is enabled by large-scale and continuous collaboration between different sectors of society.

Citizens and civic organizations partner with researchers to discover the root causes of persistent challenges. Strategic plans are drafted to ameliorate the symptoms, while targeting the underlying causes. The science policy agenda is attuned to directly addressing societal challenges via funding priorities and awards. Risk mitigation relies on clear roles, which are transparent to everyone. For example, cities incentivize construction firms to cut down on urban heat island effects.

Coordinated efforts in tight-knit urban neighborhoods allow pedestrians, carbon fiber bicycles, ultra-lightweight cars, trains, and buses to move along segmented streets shaded with native vegetation and overhanging building facades. Concerted efforts by citizens, city leaders, and corporate partners slowly address historical groundwater contamination, aging highways, and underinvestment in public education. The pursuit of healthy, vibrant, just, and diverse communities unites the city and its citizens.

Yet the challenge of long-term collaboration creates burnout among stakeholders. Retaining citizen buy-in and maintaining the city infrastructure are not trivial. Cultural expectations for immediacy and simplicity confront a thorough process of problem analysis, solution evaluation, and program implementation that takes decades.

Open-source innovation: Suppositions

The scenario narrative at the beginning of this article and its corresponding image depict Phoenix in 2050 with open-source innovation as the organizing force for urban life. Individuals are incentivized through competitions that rely on problem-solving and creative-thinking skills. Public organizations and private companies both derive valuable new ideas by rewarding people with those skills.

Children and adults of all ages learn from a personalized, skills-based education system. This education model supports a competitive, creative population attuned to individual rewards. Government agencies post small daily challenges and larger collective problems on challenge boards. Individuals advance based on their ability to solve more and more “wicked” problems. Reports on the accomplishments of top-tier “Creators” bombard social media with opportunities to reap the rewards offered by public challenges. Corporate R&D relies on collective open forums that reward success and offer smaller incentives for lesser contributions, such as product feedback.

There are almost no rules or restrictions on innovation. Individuals are responsible for the objects they make and release into the world. The city is awash in nanotechnological applications, built atom by atom with 3D printers to specified tolerances at a moment’s notice. 3D printers are widely available, allowing people to construct most of the products they desire at home, including bicycles, cars, small airplanes, weapons, and solar panels. Individuals need only the time, materials, and understanding to make what they want.

The electrical energy grid, once thought vulnerable to solar power’s variable loading rates, no longer relies on centralized distribution of electricity. Hyperlocalized solar and geothermal energy sources are ubiquitous across the city. The aging grid slowly rusts in the desert air. Yet the city continues to experience stress. Balancing water use and natural recharge rates is still an unrealized goal.

Open-source innovation is not without societal inequities, as preoccupation with individual achievement and meritocracy enforces social hierarchies. The urban footprint expands, covering the desert with single-story residences and perpetuating the reliance on personal automobiles and highways.

Shaping innovation

Scenarios need to be treated as a bundle, not in isolation: the power of scenarios is in what can be learned by comparing them. The scenarios presented here differ significantly in the role of public participation, public funding, risk mitigation, and the distribution of goods and services for the development of cities worldwide.

Public participation shapes innovation. The role the public plays in technological innovation varies across the scenarios and affects the development of the city. In the market-pull scenario, citizens are viewed as consumers of innovative technologies; public participation is limited to the later stages of innovation. Social entrepreneurship innovation offers the public opportunities to engage at key points throughout the innovation process, from problem identification to testing and ultimately implementation of solutions. Closed collaboration innovation retains power within an elite decisionmaking body, typically a government-industry partnership; the public is subject to its decisions. Open-source innovation provides skilled people (Creators) with opportunities to reshape the city, while people without the requisite skills or desire are bystanders. The scenarios show how the public is engaged in, or subjected to, innovation, and they explore the implications for urban development.

Responsiveness to societal demands by public funding agencies informs outputs. Government funding is often analyzed in terms of return on investment and knowledge creation. Levels of public investments in science, technology, and innovation are supposed to correspond to the extent of resulting public benefit. Our scenarios highlight stark differences in the relationship between investments and how outputs from those investments serve the public interest. In the market-pull scenario, there is little direct connection to the public interest; success is exclusively measured by market returns, with limited regard for externalities or negative consequences. Social entrepreneurship innovation demands that government funding be highly attuned to solving problems to serve the public interest. Closed collaboration innovation prioritizes large-scale national investments to satisfy the public interest in areas such as national defense, reliable and constant electricity, and affordable health care. Such a one-size-fits-all approach does not readily adapt to challenges unique to specific geographies, so subpopulations are often overlooked. Open-source innovation attempts to address legacy issues by incentivizing talented individuals with innovation awards offered by government agencies. These are four very different ways in which the public interest is served by public investments in science, technology, and innovation.

Anticipation and risk mitigation enable innovation. Vehicles can safely travel at higher speeds if mechanisms are in place to stop them before collisions occur. Investors (public and private) in technological innovation should explore this metaphor. Proper brakes, calibrated by advances in technology assessment and with the power to halt dangerous advances, could revolutionize the speed at which problems are solved. The scenarios each address risk in different ways. Market-pull innovation addresses risks reactively: negative effects on people and the environment are identified after the problems are observed and deemed unacceptable. This is like driving forward while looking in the rearview mirror. Social entrepreneurship innovation attempts to delineate clear and transparent roles for risk mitigation. Potential solutions are tested iteratively as a means to anticipate foreseeable risks and assess outcomes before full-scale implementation. This approach is slow and methodical. Closed collaboration innovation takes known hazards (such as terrorism or climate change) as the starting point and attempts to mitigate the risks through innovation, but it seems to lack the adaptability to address future outcomes. Open-source innovation presupposes that the Creators are responsible for their own actions. This assumption links risk mitigation to the individual, and thus to each Creator’s capacity to foresee the outcomes of the technology she or he creates. These risk mitigation and adaptation approaches are not the same as the four models of innovation, but the connections were strongly consistent throughout the scenario development process. Innovation policy needs to treat risk mitigation not as slowing down progress, but as a means to allow faster development if proper brakes are in place to halt dangerous developments.

Distribution: Pathways to realize innovation benefits. The benefits of innovation vary from personal consumer products (well suited to market pull with high levels of competition) to universal goods such as water that are delivered through large-scale infrastructure (well suited to closed collaboration). Social entrepreneurship innovation delivers nanotechnologies to address societal challenges that lend themselves to a technological solution. Closed collaboration innovation is primarily organized to integrate nanotechnology into large systems, especially if the technology increases system control and efficiency. Thus, public infrastructures such as traffic sensors, electricity monitoring and distribution networks, and large public health data systems would be amenable to a closed collaboration approach. Open-source innovation provides benefits personalized to the needs of the Creator. Programmable machines that print 3D structures and functional objects could make nanotechnology ubiquitous for the Creator class. The public interest is well served by a diversity of delivery mechanisms for different products and services. An overreliance on a single mechanism, such as open-source innovation, would prove ineffective in delivering goods and services to society.

Integrated foresight

Albert Einstein’s oft-quoted aphorism, “We can’t solve problems by using the same kind of thinking we used when we created them,” calls out the need for alternative innovation models. Each scenario depicts a range of outcomes that reflect a connection between the mode of innovation and society’s ability to address its urban sustainability challenges. The market-pull scenario explores the implications of focusing singularly on economic development. This seems to perpetuate negative externalities, including the continued segregation of socioeconomic classes and dependence on carbon-intensive transportation and energy systems. Social entrepreneurship innovation takes sustainability challenges as its starting point and solves problems collaboratively, albeit slowly. It relies on social and behavioral changes as well as technological solutions. Closed collaboration innovation addresses urban sustainability challenges through the centralized management of infrastructure. Open-source innovation addresses certain urban sustainability challenges through the collective efforts of skilled individuals, while other challenges remain unaddressed or worsen. As a set, the four scenarios allow decisionmakers to appreciate the benefits and challenges associated with each innovation approach—and the need for diverse strategies to apply emerging technologies to the design of our cities.

Our integrated approach to foresight, with its strong connections to places and people, suggests changes in science, technology, and innovation policy. Can the scenarios trigger any of those changes? We have presented them in a variety of settings from high school and university classrooms to academic conferences. The film has been used in deliberation among professionals and policymakers. To date, however, there is no evidence that the scenarios are leading to constructive strategy-building exercises that shape science, technology, and innovation policies toward a sustainable future for Phoenix. Nevertheless, our efforts have led to reflections among stakeholders and afforded them the opportunity to consider value-laden questions such as: What future does our society want to create? This project was not commissioned directly by policy or business stakeholders. Therefore, the primary outcomes may well rest in the newly developed capacities of the design students, stakeholder partners, and faculty to consider the complex yet often invisible interconnections between our technological future and the choices that we make at every level of society. Our hope is that such insights will influence the way the project participants pursue their professional efforts and careers, and with this contribute to innovation processes that yield sustainable outcomes for cities around the world.

Recommended readings

R. W. Foley and A. Wiek, “Patterns of Nanotechnology Innovation and Governance within a Metropolitan Area,” Technology in Society 35, no. 4 (2014): 233–247.

A. Wiek and R. W. Foley, “The Shiny City and Its Dark Secrets: Nanotechnology and Urban Development,” Curb Magazine 4, no. 3 (2013): 26–27.

A. Wiek, R. W. Foley, and D. H. Guston, “Nanotechnology for Sustainability: What Does Nanotechnology Offer to Address Complex Sustainability Problems?” Journal of Nanoparticle Research 14 (2012): 1093.

A. Wiek, D. H. Guston, S. van der Leeuw, C. Selin, and P. Shapira, “Nanotechnology in the City: Sustainability Challenges and Anticipatory Governance,” Journal of Urban Technology 20, no. 2 (2013): 45–62.

Rider W. Foley (rwf6v@virginia.edu) is an assistant professor in the Engineering and Society Department of the School of Engineering and Applied Science at the University of Virginia and is affiliated with the Center for Nanotechnology in Society, Consortium for Science, Policy, and Outcomes at Arizona State University. Darren Petrucci is a professor at the School of Design at Arizona State University. Arnim Wiek is an associate professor at the School of Sustainability and is affiliated with the Center for Nanotechnology in Society, Consortium for Science, Policy, and Outcomes at Arizona State University.

Exposing Fracking to Sunlight

ANDREW A. ROSENBERG
PALLAVI PHARTIYAL
GRETCHEN GOLDMAN
LEWIS M. BRANSCOMB

The public needs access to reliable information about the effects of unconventional oil and gas development if it is to trust that local communities’ concerns won’t be ignored in favor of national and global interests.

The recent expansion of oil and natural gas extraction from shale and other tight geological formations—so-called unconventional oil and gas resources—has marked one of the most significant changes to the U.S. and global economy so far in the 21st century. In the past decade, U.S. production of natural gas from shale has increased more than 10-fold and production of “tight oil” from shale has grown 16-fold. As a result, natural gas wholesale prices have declined, making gas-fired power plants far more competitive than other fuel sources such as coal and nuclear power.

Oil and gas extraction enabled by hydraulic fracturing has contributed to a switch away from coal to natural gas in the U.S. power sector. Although that switch has been an important driver for reducing U.S. carbon emissions during combustion for electricity generation and industrial processes, carbon emissions from natural gas do contribute substantially to global warming. Thus, from a climate standpoint, natural gas is less attractive than lower- and zero-carbon alternatives, such as greater energy efficiency and switching to renewable energy. In addition, the drilling, extraction, and transportation through pipelines of oil and natural gas result in the leakage of methane, a potent greenhouse gas that is 25 times stronger than carbon dioxide.

Domestic energy demand and supply changes are also beginning to shift U.S. geopolitical dynamics with large fossil fuel producers such as Russia and the Middle Eastern states. Although much of the rhetoric—including a significant industry advertising campaign by U.S. gas producers—focuses on the benefits to the nation of a domestic supply of energy, natural gas and oil produced in the United States are part of a global marketplace. For example, just a few years ago, terminals were being built both onshore and offshore in U.S. waters to import liquefied natural gas (LNG) for energy in the New England region. Now a major public policy debate is under way about whether the United States should export natural gas. As a consequence, some of these same terminals are being dismantled and others may be redeveloped for the export of LNG.

Meanwhile, competing desires for less-expensive energy and for the chemical raw materials used in manufacturing plastics, iron, and steel products in the United States have created political pushback against allowing exports. But with uncertainty about Russian supply to major European markets due to political turmoil, and with rapidly growing energy needs in China, India, and elsewhere, upward price pressure on natural gas as well as oil is almost certain to follow, keeping the geopolitical debate alive in the years to come.

What is certain, though, is that a consistent supply of domestic energy, and of the derived chemicals that serve as raw-material feedstocks for manufacturing, will support a 20th-century–style economy with fossil fuels as its base. But what does that mean for the development of renewable energy sources, or of alternatives to plastics, industrial chemicals, and natural resources, in the United States? And what does large-scale investment in these resources mean for our mitigation of carbon emissions and adaptation to climate change impacts?

While much of the attention is focused on these national and global implications, it is easy to forget that considerable uncertainty persists about the local effects of fracking on communities and the environment. The larger-scale global questions may be harder to answer, but the proper application of federal, state, and local laws, along with better public information, can go a long way toward answering critical questions at the local level.

Examining production

Despite the rapid pace of development of unconventional oil and gas resources enabled by fracking across the United States, and its influence on domestic and international energy markets, there is remarkably little independent information available to the public on the effects, both positive and negative, of such an undertaking. And because fuller analysis to answer these questions is not available, the American people and their elected representatives have not had a chance to make informed choices about whether and how unconventional oil and gas development occurs.

This is, in part, due to the lack of comprehensive regulation of unconventional oil and gas development at the federal level. Because the oil and gas industry secured many exceptions to our major environmental laws, oversight of this new, fast-paced development has fallen primarily to the jurisdiction of the states, which often lack the resources to require and enforce data collection and sharing. So while discussion of risks and concerns associated with unconventional oil and gas development has taken place in the press, in academic literature, at federal agencies, and among various special interest and advocacy groups, such conversations have occurred largely outside of any clear, overarching policy framework.

At the same time, concerted industry actions to limit regulation and disclosure have left citizens, communities, and policymakers without access to information on the full range of consequences of shale resource development, information they need in order to make fact-based decisions. Compounding this problem, much of the scientific discourse on the technical dimensions of unconventional oil and gas development, including the engineering of fuel extraction, production, transportation, refining, and waste disposal, not to mention the economic, environmental, and social impacts, has failed to adequately inform the public conversation.

In the absence of comprehensive and credible information, readily available to the public, conversations and decisions on unconventional oil and gas development in the United States have been marred by an extremely polarized debate over the risks, benefits, and costs of development. Development has expanded in many communities with little clear requirement for state and local jurisdictions to collect the information needed to inform the public, adequately regulate the industry, and ensure public health and safety. Worse still, most sites have been developed without baseline studies of environmental conditions before drilling and without any ongoing monitoring of changes to air and water quality during and after development, perpetuating the cycle of insufficient data collection.

Science needs to be part of the choices we make in a democratic society. In order to reach decisions with the direct involvement of the citizenry, scientific information that is independent, credible, and timely must be accessible to the public and play an important role in informing decisions.

Hydraulic fracturing involves risks that are both similar to and different from those of conventional oil and gas development (Table 1). Risks that are qualitatively different include the volume, composition, use, and disposal of water, sand, and chemicals in the hydraulic fracturing process; the size of well pads; and the scale of fracking-related development. Importantly, the advent of hydraulic fracturing and horizontal drilling has brought development to new and more-populated areas, increasing development’s intersection with communities. These factors can contribute to rapid social disruption as well as environmental damage, particularly to regions that have not previously been exposed to the oil and gas industry.

TABLE 1

Unfortunately, the social costs of unconventional oil and gas development have not been analyzed in nearly the same detail as the geopolitics of energy. These social costs include public health and environmental effects of fossil fuel production and the manufacturing of products enabled by this boom (Table 1). And these social costs range from local effects on communities to implications for global warming. In addition, environmental and socioeconomic concerns around oil and gas development can be different for different communities. For example, western states and localities tend to be more concerned about effects on water availability, whereas eastern states and localities tend to focus more on the impact on water quality. Communities with existing oil and gas facilities may worry about expanded development, whereas those that have not previously hosted the industry are often concerned about potential new environmental and socioeconomic effects, such as strain on public services, new pipelines, and heavy truck traffic.

Because the data on these effects are either lacking or incomplete, at least some states (e.g., Maryland, New York, and California) and localities have responded by enacting moratoriums or outright bans on development. Fixed-duration moratoriums are usually intended to allow time for either the assessment of environmental and public health impacts or for the formulation of an adequate regulatory structure for development. To mitigate many of the risks associated with unconventional oil and gas development, there is a fundamental need for comprehensive baseline analysis followed by monitoring of effects. The resultant information must be publicly available to the greatest extent practicable, so that citizens and elected officials have open access to the scientific information in order to decide if and how to regulate development in their communities.

The government role

Given the dramatic impact of unconventional oil and gas development on the U.S. economy, energy future, and industrialization of rural landscapes, it is more than a little surprising that there is no comprehensive governance system in place to safeguard the public trust and to facilitate information collection and sharing. As development has proceeded, there has been a concerted push by industry to reduce the federal government’s role in management and relegate any regulatory oversight to the state level. This push has resulted in a long list of special exemptions for the oil and gas industry from existing major environmental federal laws (Table 2).

TABLE 2

Despite this failure to manage the impacts of unconventional oil and gas production, agencies like the Environmental Protection Agency (EPA) have in the past been effective at environmental regulation. Federal environmental laws and the accompanying regulatory systems for most types of industrial development are well articulated. They are largely implemented by the states with federal support and oversight, and most importantly have resulted in major improvements in the quality of air and water, toxic waste cleanups, and public health over the past half century. In addition to setting national standards for many industrial activities, the U.S. system of environmental laws provides extensive opportunity for informing the public and seeking their input to the policymaking process. This open process certainly requires time and effort and entails some cost, but citizens in a democratic society have a right to be informed and to voice their views. And the government, as well as industry, has an obligation to listen and be as responsive as possible.

Although states have environmental protection statutes that often parallel the federal mandates, there is substantial inconsistency in their application and often limited capacity at the state level to assess, monitor, and enforce requirements. State regulation often relegates public input to notice and comment on permit applications. Public meetings may or may not be required. There is no clear requirement that alternatives be considered, nor that a broader analysis of public health or environmental effects be conducted, as there would be under federal authority. Exemptions from key federal statutes such as the Clean Air Act, the Clean Water Act, and CERCLA (Superfund) for oil and gas development are therefore a major concern. They result in inconsistent standards and management, a lack of coordination with federal agencies, and the loss of basic protections for the public, including the opportunity for greater input. Together, these legal exemptions limit the gathering of critical scientific information on the effects of fracking on air and water quality and consequently undermine public trust.

Earning public trust

In July 2013, the Center for Science and Democracy at the Union of Concerned Scientists held a forum in Los Angeles on Science, Democracy, and Community Decisions on Fracking. The forum brought together a diverse collection of stakeholders, including scientists, policy specialists, industry, local government officials, and community groups. One of the oft-repeated points during the forum was the importance of communities developing trust in both industry and government. Community stakeholders who participated expressed the need to be included in the process, for their voices and concerns to be heard, and for their health and well-being to be considered a priority.

Open access to scientific information can help earn the public trust. Unfortunately, efforts to manipulate or otherwise impede the flow of information to the public and the scientific community have significantly undermined the public’s confidence that risks are being minimized and competently managed. These efforts include the failure to fully disclose the chemicals used in fracking, the blocking of independent scientists’ access to drilling sites, the nondisclosure of industry involvement in academic studies of fracking, and legal settlements that prevent the release of industry-collected data. Too many incidents of pollution or other problems have been met by concerted industry efforts to quickly contain the information, block access to well sites, and impose legal confidentiality requirements as part of compensation for losses. The resulting lack of access to information makes it more difficult to document cases of air and water contamination and to develop risk reduction strategies, further diminishing public trust in industry and government.

An integrated system of data collection, baseline testing, monitoring, and reporting is needed so that scientists and decisionmakers can better understand and manage risks. Equally important is coordinating and providing such comprehensive data in a format that is readily accessible to health care and emergency workers as well as to the affected communities.

Importantly, public trust is not just a concern for politicians or affected communities but must also be earned by industry. Greater trust benefits companies by building better relationships with the communities where they operate. An open and responsive company has the potential to gain greater public support and to mitigate future risks to its business. Instead of pushing back against regulatory controls, the oil and gas industry can gain greater consistency and certainty by allowing the already well-developed system of federal laws to fulfill its charge of protecting public health and the environment. Part of the value of these laws is that they level the playing field so that all businesses work to the same standards. Working with, rather than against, the system of governance will make the industry itself more sustainable and help guard against the risk that a single accident or bad actor provokes a public and regulatory backlash against the entire industry.

To overcome the gridlock and suspicion in public conversations on fracking, decisionmakers should immediately enact federal policies requiring states to implement comprehensive baseline analysis and monitoring programs for air and water at all well sites. The collected information must be made publicly available and accessible to provide communities with trustworthy information about environmental quality and potential impacts on public health. This need is so fundamental that any delay will only add to the ill will toward and distrust of corporate actors. Moreover, the costs of such programs are modest compared with either societal costs or industry profits.

In the important discussion of the national and global political, economic, and climate implications of fracking, we should not forget the need to understand and address its local impacts. Given the potential costs and benefits of unconventional oil and gas development for the United States and the world, debates over the proper course for energy development will certainly continue. But comprehensive and independent collection of air and water quality data before, during, and after fracking, made publicly accessible, along with a governance structure for monitoring, enforcement, and risk management, will go a long way toward informing the debate, building public trust, and securing better outcomes for industry and our democratic system.

Recommended reading

Energy Information Administration (EIA), Annual Energy Outlook 2013 with Projections to 2040 (Washington, DC: U.S. Department of Energy, 2013); available online at www.eia.gov/forecasts/aeo/pdf/0383%282013%29.pdf

IHS, America’s New Energy Future: The Unconventional Oil and Gas Revolution and the U.S. Economy, Vol. 3: A Manufacturing Renaissance—Executive Summary (Englewood, CO: IHS, 2013).

M. Levi, The Power Surge: Energy, Opportunity, and the Battle for America’s Energy Future (Oxford, UK: Oxford University Press, 2013).

R.V. Percival, C. H. Schroeder, A. S. Miller, and J. P. Leape, Environmental Regulation: Law, Science and Policy (New York, NY: Aspen Publishers, 2003).

Resources for the Future, State of State Shale Gas Regulation (Washington, DC: Resources for the Future, 2013); available online at www.rff.org/rff/documents/RFF-Rpt-StateofStateRegs_Report.pdf

Union of Concerned Scientists, Toward An Evidence-based Fracking Debate: Science, Democracy, and Community Right to Know in Unconventional Oil and Gas Development (Cambridge, MA: UCS, 2013); available online at www.ucsusa.org/assets/documents/center-for-science-and-democracy/fracking-report-full.pdf

Union of Concerned Scientists, Gas Ceiling: Assessing the Climate Risks of An Overreliance on Natural Gas (Cambridge, MA: UCS, 2013); available online at www.ucsusa.org/assets/documents/clean_energy/climate-risks-natural-gas.pdf

Andrew A. Rosenberg (arosenberg@ucsusa.org) is director, Pallavi Phartiyal is program manager and senior analyst, and Gretchen Goldman is lead analyst at the Center for Science and Democracy, Union of Concerned Scientists, Cambridge, MA. Lewis M. Branscomb is professor emeritus of public policy and corporate management at Harvard University’s Kennedy School of Government, and adjunct professor at the University of California, San Diego, in the School of International Relations and Pacific Studies.

Imagining Deep Time

An exhibit at the National Academy of Sciences Building, Washington, DC

“Geohistory is the immensely long and complex history of the earth, including the life on its surface (biohistory), as distinct from the extremely brief recent history that can be based on human records.”

—Martin J.S. Rudwick, science historian

From a human perspective, mountain ranges seem unchanging and permanent; yet, in the context of geological time, such landscapes are merely fleeting. Their change occurs on a scale far beyond human experience. Whereas we measure time in terms of years, days, and minutes, geological change occurs within the scale of deep time, the gradual movement of evolutionary change.

The concept of deep time was introduced in the 18th century, but it wasn’t until the 1980s that John McPhee coined the term “deep time” in his book Basin and Range. The Imagining Deep Time exhibition, which contains 18 works by 15 artists, looks at the human implications of deep time through the lens of artists who bring together rational and intuitive thinking. The featured artists use a wide range of styles and media, including sound, photography, painting, printmaking, and sculptures made of everyday materials such as mirrors, LED lights, motors, and gears. The exhibition explores the role of the artist in helping us imagine a concept outside the realm of human experience.

Artists featured are Chul Hyun Ahn, Alfredo Arreguín, Diane Burko, Alison Carey, Terry Falke, Arthur Ganson, Sharon Harper, Mark Klett, Rosalie Lang, David Maisel, the artistic team Semiconductor, Rachel Sussman, Jonathon Wells, and Byron Wolfe.

Imagining Deep Time is on exhibit at the National Academy of Sciences Building, 2101 Constitution Ave., N.W., Washington, DC, August 28, 2014, through January 15, 2015.

—Alana Quinn and JD Talasek

Chul Hyun Ahn Void. Cast acrylic, LED lights, hardware, mirrors. 90 x 71½ x 12¼ inches. 2011

Baltimore-based artist Chul Hyun Ahn uses repetition to evoke infinite cycles. His work recalls that of minimalists such as Dan Flavin and Donald Judd, who used everyday materials to create experiential spaces.

Terry Falke Cyclists Inspecting Ancient Petroglyphs, Utah. Digital chromogenic print. 30 x 40 inches. 1998

A cyclist points at marks left on the earth by humans: petroglyphs of human figures (whose heads resemble the helmets worn by the cyclists) and bullet holes. The line of the road suggests an added human-made stratum, reminding us that, although humans’ presence in the continuum of deep time is small, we have left our mark.

Alfredo Arreguín The Age of Reptiles. Oil on canvas. 60 x 48 inches. 2012

The patterns in Alfredo Arreguín’s paintings are based on pre-Aztec images, Mexican tiles, and geometric patterns. His brilliantly colored canvas combines childhood memories of Mexican culture and landscapes with imagery inspired by animals known to have existed only through scientific research. Arreguín invites us to imagine how our environment has evolved and how we are influenced by cultural experience.

David Maisel Black Maps (Bingham Canyon, UT I). Archival pigment print. 29 x 29 inches. 1988

According to the artist, the title Black Maps, which comes from a poem by Mark Strand, refers to the notion that although these images document the facts of these sites, they are essentially unreadable, much as a map that is black would be. As Strand writes, “Nothing will tell you where you are/Each moment is a place you’ve never been.” David Maisel considers these images not only documents of blighted sites, but also poetic renderings that reflect the human psyche that made them.

Sharon Harper
Left: Sun/Moon (Trying to See through a Telescope) 2010 May 27 10:48:35 AM.2010 May 27 11:08:34 AM
Right: Sun/Moon (Trying to See through a Telescope) 2010 May 27 10:48:35 AM.2010 May 27 11:08:34 AM 2010 Jun 19 8:16:30 PM.2010 Jun 19 8:23:40 PM, No. 2 2010
Ultrachrome print on Canson Rag Photographique paper. 58⅛ x 17 inches each. 2010

One way we experience time is through cycles, such as those of the sun and the moon, or through the passing of seasons. Yet the act of representing these cycles is different from actually experiencing them. Sharon Harper’s Sun/Moon series considers the act of “seeing” as mediated through both a telescope and a digital camera.

Rosalie Lang Inner Life. Oil on canvas. 20 x 22 inches. 2009

Rosalie Lang’s paintings draw inspiration from the aesthetic of rocks photographed along the Pescadero, CA coast. She writes, “I don’t know a rock until I paint it,” underscoring the importance of mark-making in observation and learning. The realism of Lang’s paintings of rock surfaces, combined with the absence of horizon or other cues to scale, may cause viewers’ perception to oscillate between reality and abstraction.

Jonathon Wells Boston Basin. Digital inkjet print. 28½ x 78 inches. Photographed 2004, composited 2005

This work depicts a 16-mile-wide by four-and-a-half-mile-deep view of Boston Basin looking west toward downtown as if the viewer were positioned in the harbor. The large city seems miniscule in comparison to what lies beneath. The Geologic Map of Massachusetts (1983) by National Academy of Sciences member E-an Zen and others provided the basis for constructing the image.

Diane Burko Columbia Triptych II Vertical Aerial 1981–1999, A, B, C after Austin Post and Tad Pfeffer. Oil on canvas. 76 x 36 inches, each canvas. 2010

Diane Burko’s inspiration for these paintings is a montage of five aerial photographs of the lower reach of the Columbia Glacier, Alaska, taken on October 2, 1998. Superimposed on the montage are plots of selected terminus positions from 1981 to 1999. Combining the aesthetics of photography, scientific notations, and landscape painting, Burko asks us to consider the role of art in communicating about climate change.

Alison Carey Stethacanthus, Pennsylvanian Period, 280–310 mya. Silver gelatin on black glass. 9 x 23 inches. 2005
Criptolithus & Eumorphocystis, Ordovician Period, 440–500 mya. Silver gelatin on black glass. 9 x 23 inches. 2005
Crinoids, Mississippian Period, 310–350 mya. Silver gelatin on black glass. 9 x 23 inches. 2005

These photographs come from Alison Carey’s series Organic Remains of a Former World, representing ancient marine environments from each period of the Paleozoic era. Carey built clay models of extinct vertebrates and invertebrates, submerged them in multiple aquariums, and photographed the constructed tableaus. She used scientific data and illustrations of fossils to inform her ideas.

Rachel Sussman Dead Huon Pine adjacent to living population segment #1211-3509 (10,500 years old, Mount Read, Tasmania). Archival pigment print. 44 x 54 inches. 2011

This photograph depicts Lagarostrobos franklinii, a conifer species native to Tasmania, killed by fire. The scene is bordered by living trees with an extraordinary legacy: the age of the stand was established by carbon-dating pollen from Lake Johnston that matches the genetic make-up of the living trees, putting it at 10,500 years old. The photograph is from Sussman’s series The Oldest Living Things in the World, the result of her collaboration with scientists to identify and photograph organisms that are at least 2,000 years old.

21st Century Inequality: The Declining Significance of Discrimination

ROLAND FRYER

Unconventional but effective strategies for public education can provide significant advances in student achievement nationwide.

TODAY I want to talk about inequality in the 21st century, in particular the decline in the significance of discrimination and the increase in the significance of human capital.

Let me start with some basic facts about the achievement gap in America. If you listen to NPR or tune into 60 Minutes, you probably get a sense that the United States is lagging behind other countries in student achievement and that there is a disturbing difference in the performance of racial groups.

For example, on average 44% of all students, regardless of race, are proficient in math or reading in 8th grade. That’s disheartening, but far from the worst news. In Detroit, 3% of black 8th graders are considered proficient in math—that’s 3%. In some places, such as Cleveland, the achievement gap between white and black students is relatively small, but the reason is that the white students are not doing well either. In the District of Columbia, roughly 80% of white 8th graders, but only 8% of their black classmates, are proficient in math.

Many people will object that test scores do not measure the whole child. That’s true, but I will argue that they are important.

My early training and research in economics was not linked to education, but I was asked in 2003 to explore the reasons for social inequality in the United States. I began by looking at the National Longitudinal Survey of Youth, focusing on people who were then 40 years old. Compared with their white contemporaries, blacks earned 28% less, were 27% less likely to have attended college, were 190% more likely to be unemployed, and were 141% more likely to have been on public assistance. These grim statistics are well known and are often used to illustrate the power of racial bias in U.S. society.

I decided to trace back through the lives of this cohort to try to identify the source of these disparities. One obvious place to look was educational achievement. I went back to the test scores of this cohort when they were in 8th grade and did some calculations. If one compared blacks and whites who had the same test scores in 8th grade, the picture at age 40 was dramatically different. The difference in wages was 0.6%, the difference in unemployment was 90%, the difference in public assistance was 33%, and blacks were actually 137% more likely to have attended college.

That was easy. In two weeks I reported back that achievement gaps that were evident at an early age correlated with many of the social disparities that appeared later in life. I thought I was done. But the logical follow-up question was how to explain the achievement gap that was apparent in 8th grade. I’ve been working on that question for the past 10 years.

I am certainly not going to tell you that discrimination has been purged from U.S. culture, but I do believe that these data suggest that differences in student achievement are a critical factor in explaining many of the black-white disparities in our society. It is no longer news that the United States is a lackluster performer on international comparisons of student achievement, ranking about 20th in the world. But the position of U.S. black students is truly alarming. If they were considered a country of their own, they would rank just below Mexico, in last place among all Organization for Economic Cooperation and Development countries.

How did it get this way? When do U.S. black students start falling behind? It turns out that developmental psychologists can begin assessing the cognitive capacity of children when they are only nine months old, using the Bayley Scale of Infant Development. We examined data that had been collected on a representative sample of 11,000 children and could find no difference in the performance of racial groups. But by age two, one can detect a gap opening, and it becomes larger with each passing year. By age five, black children trail their white peers by eight months in cognitive performance, and by eighth grade the gap has widened to twelve months.

Remember, Horace Mann told us that public education was going to be the great equalizer; it was going to compensate for the inequality caused by differences in income across zip codes. That was the dream.

Unfortunately, what happens is that the inequality that exists when children begin school becomes even greater during schooling. The gap grows not only across schools, but within the same school, even with the same teacher. This means that even for children from the same neighborhood, the same school, and the same teachers, academic performance diverges each year in school.

I spent two or three years trying to figure out what factors could explain this predicament. I looked at whether or not teachers were biased against some kids or groups. I looked at whether or not kids lost ground during the summer. I looked at various measures of school quality. I looked at the results of numerous different types of standardized tests. None of these could explain why certain groups, blacks in particular, were losing ground to their peers.

When I was presenting this finding at a meeting, a woman challenged me to stop focusing on our failures and to let audiences know what works. I said “OK, but what works?” She said more education for teachers, increased funding, smaller class size. I recognized this as the conventional wisdom, but I thought I better examine the data that demonstrate that these strategies are effective.

I discovered that we have actually implemented this approach for many decades. The percentage of teachers with a master’s degree increased from 23% in 1961 to 62% in 2006. The average class size has declined from 22 to 16 students since 1970. Per pupil annual spending grew from $5,000 in 1970 to $12,000 in 2008 in constant dollars. In spite of applying this apparently sound advice, overall student academic achievement has remained essentially flat. Clearly, we need to try something else.

As befits an arrogant economist, my first thought was that this would be easy: We just have to change the incentives. Let’s apply a rational agent model and examine the calculation we are asking students to make. Society is telling them that they will be rewarded for their efforts in school in 15 years when they enter the labor market. As an economist I know that no one has a discount rate that would justify waiting 15 years for a payoff. My solution was to propose that we pay them incentives now to reward good school performance.

Oh my gosh, I wish someone had warned me. No one told me this was going to be so incredibly unpopular. People were picketing outside my house, saying I would destroy students’ love of learning, that I was the worst thing for black people since the Tuskegee experiments. Really? Experimenting with incentives when nothing else seems to work is the equivalent of injecting people with syphilis without informing them?

We decided to try the experiment and raised about $10 million. We provided incentives in Dallas, Houston, Washington, DC, New York, and Chicago. We also, just for fun, added a large experiment with teacher incentives to cover all our bases, to make sure that we had paid everybody for everything.

The question for us was, first of all, could incentives increase achievement? Second, what should we pay for and how should we structure the incentives? The conventional economic theory is that we should pay for outputs. It follows from that—don’t laugh—that kids should borrow money based on their expected future earnings to pay for tutors or make other investments in their learning to improve their performance. We took a more direct approach, conducting randomized trials that primarily paid for inputs.

In Dallas we paid kids $2 for each book they read. They had to take a test to verify that they actually did the reading. In Houston we paid kids to do their math homework. In Washington we paid kids to attend school, complete homework, score well on tests, and avoid activities such as fighting. We also tried incentives for outputs. In New York we paid kids for good test scores so that the emphasis was completely on outputs. In Chicago we paid ninth graders half the money for attendance and the second half for graduation. The amounts were generous for poor kids. A Washington middle schooler could earn as much as $2,000 per year. In New York, fourth graders could make up to $250 and seventh graders up to $500.

Throughout the experiment we were bombarded with complaints from adults, particularly those who did not have children in the experiment. We never had a kid complain. Well, once we did. I came to one Washington school to participate in a ceremony at which checks were distributed. Before the event started, one kid came up to me and said, “Professor, I don’t think we should be paid to come to school. I think we should pay to come to school because school is such a valuable resource. You should not pay us. We should pay you.”

I was blown away by this. I thought this kid really gets it. About 20 minutes later I was distributing checks in the cafeteria. Kids’ names were called, and they ran or danced to the front of the room. I called the kid’s name, and he came up. I put his check in my pocket. He said, “What are you doing?” I told him that just 20 minutes earlier he had told me that he should pay me for the privilege of coming to school. He looked at me in a way that only an 11-year-old can and said, “I never said that.”

We found that incentives, if designed correctly, can have a positive return on investment. However, they are not going to close the big gaps that exist between blacks and whites. We did learn that it is more effective to provide kids with incentives for inputs rather than outputs. This contradicts what I learned in my economics training, but it was very clear when I actually talked to the kids. I asked one kid in Chicago, where they were paid for outputs, Did you stay after school and ask your teacher for extra time? No. Did you borrow against your expected income and hire a tutor? No. What did you do? Basically, I came. I tried harder. School was still hard. At some point, I gave up.

The reality is that most of these kids do not know how to get from point A to point B. The assumption that economists make when designing incentives is that people know how to produce the desired output, that they know the “production function.” When they don’t know that, designing incentives is incredibly difficult.

What we learned through this $10 million, a lot of negative press, and some angry citizens is that kids will respond to incentives; teacher incentives, by contrast, did not have a significant effect on student achievement. Kids will do exactly what you want them to do. By the way, they don’t do anything extra either. I had this idea that they were going to discover that school is great and try harder in all of their subjects, even those that did not provide incentives. No. You offer $2 to read a book, and they read a book. They are going to do exactly what you want them to do. That showed me the power, and the limitations, of incentives for kids. I saw that if you really squinted and designed them perfectly, incentives would have a high return on investment because they are so cheap, but they were never going to close the gap.

Something new and different

At the same time I was writing up my incentives paper, I started doing the analysis of Geoffrey Canada’s work in the Harlem Children’s Zone. This changed my entire research trajectory.

With the help of large philanthropic contributions, Canada had developed a creative and ambitious approach to education. A group of Harlem students were randomly selected to attend Canada’s charter schools beginning in 6th grade. A couple of things are important here. One, the lottery winners and losers were, if anything, slightly below the New York City average. This is significant because the students that enroll in charter schools are often above-average achievers from the start.

The evidence of improvement can be seen in the first year, and the gains are even better in the second year. By year three, these students have essentially reached the level of the average white New York student.

Now, I haven’t controlled for anything. If I were to include factors such as eligibility for free lunch, the black students would be slightly outperforming the white students. Their performance in reading improved but not nearly as much as it did in math. I would summarize the results in these simple terms: After three years in Canada’s Promise Academy Charter Schools, the students were able to erase the achievement gap in math and to cut it by a third in reading.

I had never seen results that came close to this. When I first saw the numbers, I thought my research assistant had made a coding error. This was a reason to get excited about the possibility of making a big difference in children’s lives.

Further research into public charter schools enabled me to see that this is not just about the Harlem Children’s Zone. Although the average charter school is statistically no better than the average regular public school, there are a number of charter schools achieving the type of results we found in the Harlem Children’s Zone. The research challenge is to identify what they are doing that works.

Let me stop for a story. My grandmother makes a fabulous coconut cake, so I asked her for the recipe. She told me what she does with a fingerful of this and a palmful of that. When I tried it, the result was a cement block, so I decided that the only way to learn the recipe was to watch her make it. When she grabbed a palmful of coconut flakes, I made her put it in a measuring cup. For your future reference, a grandmother’s palmful is equal to a quarter cup. It took a long time and annoyed my grandmother, but now I have a recipe I can use and pass down to my children.

If you ask Geoffrey Canada what’s in his secret education sauce, he will say a little bit of this, a little bit of that. You will be moved by his powerfully inspirational speeches, but you will not learn how to build a better school. You’ll just wish that you were also a genius.

To help the rest of us who are not geniuses, we assembled a research team that spent two years examining in detail what was happening at charter schools, some good and some not so good. We hung around. We used video cameras. We interviewed the kids. We interviewed the teachers. We interviewed the principals. We spent hours in these schools trying to figure out what the good ones did and what the not-so-good ones didn’t do.

We found a number of practices that were clearly correlated with better student performance. The first is that teachers receive reliable feedback on their classroom performance; the second is that they rigorously apply what they learn from assessments of their students to what they do in the curriculum and the classroom.

Even low-performing schools know that data are important. When I visited a middling school, the staff would be eager to show me their data room. What I typically found was wall charts with an array of green, yellow, and red stickers representing high-, mid-, and low-performing students, respectively. And when I asked what the charts had led them to do for the red kids, they would say that they hadn’t reached that step yet, but at least they knew how many there were.

When I asked the same question in the data rooms of high-performing schools, they would say that they had their teaching calibrated for the three color-coded groups. They would not only identify which students were trailing behind, but would pinpoint the pattern of specific deficiencies and then provide remediation on the problem areas for two or three days. They would also note the need to approach these areas more diligently in future editions of the course.

The third effective practice was what I call tutoring, but which those in the know call small learning communities. It is tutoring. Basically, what they do is work with kids in groups of six or fewer at least four days per week.

The fourth ingredient was instructional time. Simple. Effective schools just spent more time on tasks. I think of it as the basic physics of education. If your students are falling behind, you have two choices: spend more time in school or convince the high-performing schools to give their kids four-day weekends. The key is to change the ratio.

The icing on the cake was that effective schools had very, very high expectations for achievement, regardless of students’ social or economic background. My father went to prison when I was a kid. I didn’t meet my mother until I was in my twenties. Fortunately, I had a grandmother who didn’t know the meaning of the word excuse. A high school counselor who was aware of my situation tried to help me by saying that I could be part of a special program that would require only a half day of school and reduce my workload. I knew my grandmother wouldn’t buy that, so I refused.

The essential finding is that kids will live up or down to our expectations. Of course they are dealing with poverty. Of course 90% of the kids come from households headed by single mothers. They all have that. That wasn’t news. The question is: how are we going to educate them?

We met incredible educators who not only understood the big picture but sweated all the details. One principal had developed a very clever and efficient method for distributing worksheets, exams, and other handouts in class. I’ve never worried about that, so I asked what was the point. She said that every teacher does this in every class many times a day. If we can save 30 seconds each time, we will add several days of productive class time over the course of a year, and these kids need every minute we can give them.

Testing the thesis

I believe there is real value in analyzing the data that provides the evidence that these five strategies work, but there is nothing very surprising or counterintuitive in the findings. The question is why so few schools are implementing these practices.

We set out to discover if there was any reason that public schools could not implement these practices and achieve the expected results. We approached a number of school districts to ask if we could conduct an experiment applying these techniques in some of their schools. I won’t belabor all the reasons we heard for why it was impossible, but suffice it to say that we were not welcomed with open arms. Apparently, it is not practical to increase time in school, provide tutoring, give teachers regular feedback and guidance, use data to inform instructional practice, and increase expectations.

We did eventually find a willing partner in the Houston school district, where the superintendent and the school board were willing to give it a try. We began to work in 20 schools, including four high schools, with a total of 16,000 students. These are traditional public schools. There is no waiting list. There is no sign up. There is no Superman. Nothing complicated. These are just ordinary neighborhood public schools.

All of the schools were performing below expectations and were in line to be taken over by the state. They qualified for the federal dollars for turning schools around. As part of that program, all of the principals and about half the teachers were replaced.

We increased the school day by one hour. We lengthened the school year by two weeks. We also cut down on some curious non-instructional activities. We discovered, for example, that 20 minutes is set aside each day for bathroom breaks. For no additional cost you can increase instructional time just by making kids pee more quickly. How cool is that?

Second, we added small-group tutoring. We hired more than 400 full-time tutors. They worked with ten kids a day, two at a time, during five of the day’s six periods. We offered a $20,000 salary even though we were told that no one would do the job for that amount. In five weeks, we had 1,200 applications. Some were young Teach for America types. Others were retirees from the Johnson Space Center. We decided to focus on math tutoring in what we had found were the critical fourth, sixth, and ninth grades.

For data-driven instruction, we worked with the existing requirements for the Houston schools. For example, Houston sets 212 objectives that fifth graders are expected to achieve. We designed a schedule that would make it possible to reach all the objectives while also including remediation for students and professional development for teachers. A feedback system was designed that resulted in teachers receiving ten times as much feedback as teachers in other Houston schools.

To reinforce high expectations, we aimed to create an environment that reflected seriousness. We eliminated graffiti and removed the barbed wire that surrounded some of the schools. We regularly repeated the goals that we expected students to achieve.

The experiment had a couple of potential fault lines. One, we were taking best practices out of charter schools and trying to implement them in traditional public schools. It could be that those best practices work only with a set of highly motivated teachers and parents. We weren’t sure about that. Second, we had to face all the political realities of a traditional public school. During the three-year experiment, I aged about 24 years. I will never be the same.

But the results made it worth the effort. When we began, the black/white achievement gap in the elementary schools was about 0.4 standard deviations, which is equivalent to about 5 months. Over the three years, our elementary schools essentially eliminated the gap in math and made some progress in reading. In secondary schools, math scores rose at a rate that would close the gap in roughly four to five years, but there was no improvement in reading. One other significant result was that 100% of the high school graduates were accepted to a two- or four-year college.

Let me put it in context for you. The improvement in student achievement in the Houston schools where we worked was roughly equivalent to the results in the Harlem Children’s Zone and in the average KIPP charter school. But we did this with 16,000 kids in traditional public schools. We are now repeating the experiment in Denver, Colorado, and Springfield, Massachusetts. We actually do know what to do, especially for math. The question is whether or not we have the courage to do it.

The last thing I will show you is a return on investment calculation for a variety of interventions. We calculated what a given level of improvement in achievement would mean for a student’s lifetime earnings and what that would mean for government income tax revenue. Reducing class size costs about $3,500 per kid and results in an ROI of about 6.2%, which is better than the long-term stock market return of about 5%. Expanded early childhood education has an ROI of 7.6%, an even better investment.

“No excuses” charter schools cost about $2,500 per kid and have an ROI of 18.5%. Using the same methodology, we calculated that the investment in our Houston schools had an ROI of 13.4% in the secondary schools and 26.7% in the elementary schools. But that was based on the implementation cost, which I raised from private sources. Houston did not spend anything more per student, so its ROI was infinite.
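To make the arithmetic behind these figures concrete, here is a minimal sketch of a return-on-investment calculation of the kind described above. The lecture does not spell out the exact method, so the per-student cost, the added annual income tax revenue, and the 40-year working-life horizon used below are purely hypothetical assumptions; the point is only to show how a one-time cost and a stream of future tax revenue translate into an annualized return.

```python
# Illustrative only: the lecture does not give the exact formula, so the cost,
# the added annual tax revenue, and the 40-year horizon are hypothetical
# assumptions chosen to show the shape of the calculation.

def roi(cost, annual_tax_gain, years=40):
    """Internal rate of return r solving cost = sum_t annual_tax_gain / (1 + r)**t."""
    def present_value(r):
        return sum(annual_tax_gain / (1 + r) ** t for t in range(1, years + 1))

    lo, hi = 0.0, 1.0  # brackets the root: PV(0) > cost > PV(1) for these inputs
    for _ in range(100):  # bisection search for the break-even rate
        mid = (lo + hi) / 2
        if present_value(mid) > cost:
            lo = mid  # present value still too high, so the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# A hypothetical program costing $2,500 per student that adds about $470 per year
# in income tax revenue over a 40-year career clears an ROI of roughly 18-19%.
print(round(roi(2500, 470) * 100, 1))
```

On this reading, a program whose incremental public cost is zero, as in the Houston case, has an unbounded return, which is why the ROI there comes out as infinite.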

My journey into education has been similar to that of many other people. I was frustrated with the data, frustrated that we didn’t know which of the scores of innovations were most effective. We took the simple approach of looking closely at the schools that were producing the results we all want to see.

We found five actions that explain roughly 50% of the variation among charter schools. We then conducted an experiment to see if those same five actions would have the same result in a typical urban public school system. The results are truly encouraging. In three years these public school students made remarkable progress in math achievement and some improvement in reading. That’s not everything, but it is far more than what was achieved in decades with the conventional wisdom of smaller classes, more teacher certification, and increased spending.

It is not rocket science. It is not magic. There is nothing special about it. When the film Waiting for Superman came out, people complained that the nation is undersupplied with supermen. But an ordinary nerd like me was able to uncover a simple and readily repeatable recipe for progress. Anyone can do this stuff.

One last story. During the experiment in Houston, an education commissioner from another state came to tour Robinson elementary school, one of the toughest in the city. He knew Houston and was familiar with Robinson. At the end of the tour, he pulled me aside. He had one question: “Where did you move the kids who used to go to school here?” I said that these are all the same kids, but they behave a lot differently when we do our jobs properly. They are listening. They are learning. They will live up to the expectations that we have for them.

I was a kid who went to broken schools. Thanks to my grandmother and some good luck, I beat the odds. But one success story is not what we want. What we want are rigorously evaluated, replicable, systematic educational practices that will change the odds.

Roland Fryer is Robert M. Beren Professor of Economics and faculty director of the Education Innovation Laboratory at Harvard University. This article is adapted from the Henry and Bryna David Lecture, which he delivered at the National Academy of Sciences on April 29, 2014.

Perspectives: Retire to Boost Research Productivity!

ALAN L. PORTER

University leaders confront multiple challenges with an aging faculty. Writing in Inside Higher Ed in 2011, longtime education reporter Dan Berrett spotlighted the “Gray Wave” of a growing number of faculty members 60 years of age or older (think baby boomers and increasing lifespans) holding tightly onto their positions, shielded by the lack of mandatory retirement. Many of them have the ability and desire to continue their scholarly work, and they fear multiple losses attendant to retirement. But as they hang on, younger people may be kept off the academic ladder. Might there be “win-win” semi-retirement options to enable faculty to remain productive and engaged, while opening opportunities for new generations?

The answer may be yes, based on one case study—my own. I retired as active faculty in December 2001, at the (rather young) age of 56. I had been jointly appointed as a professor in industrial and systems engineering and in public policy at the Georgia Institute of Technology. Upon my retirement, Georgia Tech indicated that I needed to pick one school to reduce administrative overhead for an emeritus faculty member, so I’m now an emeritus professor, and part-time researcher, in public policy.

Since retirement, my research productivity has escalated. Amused colleagues have kidded that the secret to boosting research output is to retire and that I ought to share this tale. So, I offer this “N = 1 case study” to stimulate thinking about retiree research and to raise some intriguing faculty policy issues.

What retirement did to my research publication activity is captured in Table 1. It compares two five-year post-retirement periods with corresponding pre-retirement periods. The data resulted from searching in Web of Science, skipping my first year of retirement (2002) as ambiguous and leaving out one year in the middle of the overall period, just to facilitate comparison. I also left out four papers published after retirement that reflect research conducted at a small company I joined, to make this a tighter academic “before versus after.”

The data show a sharp increase in research publication. The same phenomenon appeared when I examined only my journal articles (the table includes all publications). For this subgroup, there are 14 for 1991–2001 versus 42 for 2003–2013. Aha—retire and publication productivity triples!

One alternative hypothesis to explain this increased productivity is that I’m a slow learner and that my research has been trending upward throughout the pre- and post-retirement periods. The data don’t conflict with that. (So maybe the elixir is simply aging?)

Citations accrued by the papers (also gathered from Web of Science) provide an additional, if again imperfect, measure of research value. The tally of cites to the 1991–2001 papers is 254 versus 727 for the 2003–2013 papers: another tripling. And cites per year, based on the average number of publications per year in each period, show a jump from 14 before retirement to 121 after.
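For readers who want to run the same kind of before-and-after tally on their own record, the bookkeeping is easy to script. The sketch below assumes the publication records have already been exported (for example, from a Web of Science search) into a simple list of (year, times-cited) pairs; the sample values are invented for illustration and are not the actual data behind Table 1.

```python
# Minimal sketch of a before/after publication tally. The records below are
# invented for illustration; a real analysis would load an exported
# Web of Science file instead.
records = [(1995, 12), (1999, 30), (2005, 48), (2009, 85), (2012, 40)]  # (year, times cited)

def tally(recs, start, end):
    """Count publications and sum citations for papers published in [start, end]."""
    window = [(y, c) for y, c in recs if start <= y <= end]
    return len(window), sum(c for _, c in window)

pre = tally(records, 1991, 2001)   # pre-retirement window
post = tally(records, 2003, 2013)  # post-retirement window
print("pre :", pre)   # (publications, total citations) before retirement
print("post:", post)  # (publications, total citations) after retirement
```

Choosing matched pre- and post-retirement windows, as in Table 1, then amounts to nothing more than picking the two year ranges.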

Behind the numbers

We can argue over which statistics are most meaningful, but what they all show is that my research productivity has gone up. But why? And so what?

Several factors seem to have contributed to the rise. Although cute, and the stimulus for this reflection, “retire to boost productivity” does not convey enough information to account well for the gain. Let’s scan some additional factors worthy of consideration.

To begin, my teaching load before retirement was moderate, averaging two courses per semester, or four per year. Since shortly after retirement, and with the end of teaching, I’ve reduced my workday by roughly 20%. I now spend roughly half of my work time at Georgia Tech, with essentially no teaching duties and much-reduced administrative chores. But the other half of my work time is now devoted to my role as director of R&D for Search Technology Inc., based in Norcross, Georgia. So more time than before is devoted to my role in the business. My colleagues at the company provide invaluable technical support for the text analyses that underlie most of my research, in which I use VantagePoint software to analyze sets of R&D abstract records. Balancing it all out, I’d guesstimate that under the current arrangements, my weekly hours devoted to research increased post-retirement, but not drastically, from 15 before to 20 after.

TABLE 1

The disparity between detailed policies and procedures for the active faculty and the dearth thereof for retired faculty warrants protest and action.

How about university roles in supporting retiree research? Georgia Tech allows me to continue to conduct research and provides essential research infrastructure. Post-retirement, I continued to advise two Ph.D. students through graduation. I cannot advise new ones, although I do serve on Ph.D. dissertation committees and support research assistants from project funding.

I am a technology watcher. My research focuses on science, technology, and innovation intelligence, forecasting, and assessment, so I don’t need laboratory facilities. Shared workspace for graduate students and visiting researchers is a requisite, I’d say. I am usually on campus once weekly for meetings but don’t much use a shared workspace myself. Onsite and remote-access library resources (especially databases such as Web of Science) are essential for my bibliometric analyses.

Georgia Tech provides an institutional base for me to be principal investigator (PI) or participant on funded research (paid on an hourly basis up to a halftime threshold). It also provides regular administrative support for management of my funded research (and charges projects the regular overhead rates, but my fringe benefit rate is very low, as a retiree).

My research gains enormously from ongoing collaboration in Georgia Tech research activities, including through the Program for Science, Technology & Innovation Policy, where I participate in weekly meetings, and through ties with the Technology Policy & Assessment Center. Such access to intellectual stimulation, interchange of ideas, and energetic graduate students eager to do the heavy lifting is, in my view, the major driver of my observed research productivity gains. (My 14 pre-retirement articles included in this analysis averaged 3.2 authors; the 42 post-retirement ones averaged 3.8.) These arrangements counter the potential isolation of retirement.

Tellingly, the National Science Foundation (NSF) accepts proposals from me as PI or participant, with Georgia Tech or Search Technology providing institutional bases. An NSF Center for Nanotechnology in Society award to Arizona State University has supported Georgia Tech through a subcontract to generate and maintain a substantial “nano” publications and patents data set. This has provided key data resources for a series of analyses and resulting papers—at least 17 since 2008—and has been a major factor in my productivity.

NSF also made a Science of Science & Innovation Policy award to Georgia Tech, with me as PI. Ultimately, some of the work proposed under the award did not take place, but NSF allowed us to reallocate the funds to make small targeted sub-awards intended to generate project-related research in critical areas. I am convinced that this flexible support helped boost research collaboration.

There is also an international component to our work. Building on a 20-year collaboration with Donghua Zhu, a professor of management science and engineering at Beijing Institute of Technology, a string of Ph.D. students from his lab, with funding from China, has spent a year each at Georgia Tech. I believe both sides gain as the students work on our projects and learn our approaches to science, technology, and innovation analyses, initiating research that points toward their dissertations. In 2008–2009, two such students, Ying Guo and Lu Huang, became models of productive research collaboration, thanks to their initiative, English skills, solid analytical backgrounds, and research interests that meshed very well with those of colleagues at our Program for Science, Technology & Innovation Policy. I have continued to collaborate with Ying and Lu since they returned to their Beijing institution and moved into faculty positions, and these efforts have resulted in nine coauthored papers published between 2011 and 2013—more than with any other colleagues in that period. Active collaboration also continues with their successors who have visited Georgia Tech. This international exchange has thus been a huge post-retirement boost to my research collaboration and productivity.

Beyond N = 1

What about evidence beyond N = 1? A modest contingent of scholars studies retirees, devoting attention to many facets, such as work, leisure, health, university access, and research activity. I’ll borrow a bit from several of them—with great, if indirectly acknowledged, thanks—in considering the various factors that contribute to research productivity and policy issues.

How many retirees continue their scholarly research? I made a casual sampling of five retired faculty members from each of five organizations: the MIT Sloan School and Department of Mechanical Engineering, the Georgia Tech Departments of Chemical & BioEngineering and Physics, and the Stanford University School of Engineering. A search in the Web of Science for a recent 1.5-year period turned up publications by 20% of them. More broadly, estimates in the literature suggest that up to about half of recently retired faculty remain active in research, teaching, or both. Perhaps not surprisingly, retired faculty tend to be more engaged in academic activities for the first 10 years or so after retirement, tailing off after that.

Here are factors that I believe affect retiree research opportunities:

  • Retirement age, early retirement options, and phased retirement possibilities. Mandatory retirement in higher education has been banned in the United States since 1994, and retirement experiences vary widely across the nation’s campuses.
  • Availability of facilities, such as whether retired faculty are allowed to maintain office space or lab access.
  • Insurance coverage that enables retirees to continue lab work.
  • Infrastructure and administrative support, including computing and Internet access, software and licensed database access, remote library access, and grant administration services.
  • Allowability of pay for retiree research, and limits on such pay.
  • Direct encouragement of retiree research. This may take such forms as providing university grants to retirees for travel or student research assistantships, developing a website presence and recognition for emeritus faculty, and fostering retirees’ ongoing engagement by recruiting their participation in research center or departmental brown-bag seminar series. Help in this area may be available through cooperation with the Association of Retirement Organizations in Higher Education, whose membership includes some 50 major U.S. universities.

Areas for exploration

So what policy options do my case of post-retirement research boosterism and a reading of the literature raise for university administrators? Here I identify five areas for further exploration:

1. The word “retirement” conveys the idea of ceasing one’s prior work activity. Should universities allow retirees to continue research? If so, how so? Can they advise Ph.D. students, serve as PIs on grants, maintain lab facilities? I think the answers should generally be “yes.” But some institutions still favor “clear out your desk” retirement.

2. University administrators should consider formally and clearly establishing policies for supporting retiree research. Appointing faculty committees to examine the issues may prove valuable here. Among the questions to be considered: Does the university provide an institutional base for ongoing retiree research? If so, what is provided across the university and what through individual units? With what conditions and restrictions, and for whom? And for how long?

3. Central administration and accounting units should address issues associated with the attendant costs and benefits of having retirees continue to conduct research. For example, who will pay for a retiree’s computer support, and who will accrue overhead on grants received? On the flip side, my case suggests that facilitating retiree research can provide highly favorable benefit/cost ratios. Universities would be well advised to crunch the numbers thoroughly and pay heed to the results.

4. There may be a critical divide between retired faculty who will need physical facilities, such as lab space and equipment, and those who don’t. But even as universities may find it easier to accommodate the needs of faculty who don’t need lots of infrastructure, at least as policies begin to unfold, they can continually look for ways to provide support elements—I prefer not to consider them “privileges”—to make it easier for both camps to remain engaged.

5. Whatever retirement research policies are determined, universities really need to communicate them to everyone, including those faculty considering retirement (early or otherwise).

A key mission of universities is to generate new knowledge. Enabling a great human capital resource—retired faculty and staff—to contribute to that mission seems wise. Not doing so strikes me as wrongheaded. And other faculty appear to agree, because surveys find a significant number of retired faculty lamenting restrictions on their access to university resources needed to continue their scholarship. Rising life expectancies may only amplify the interest and the payoffs for universities, for society, and fundamentally for retirees who still find great life fulfillment through continuing their scholarly pursuits.

Aiding “good luck”

Given the potential rewards from retiree research, what support should universities provide? Returning to my personal experiences, fostering ongoing collegial interaction seems paramount, especially staying connected with potential collaborators. My case touches on several means to enhance collaboration, including having international graduate students visit for a year during the course of their studies. Georgia Tech has been supportive of that by moving to establish policies on background and language proficiency checks and providing support in obtaining visas, among other helpful measures. Ongoing interaction with grad students can benefit both them and the retired professor.

Academic research relies on funding. In my case, NSF is the main supporter, so I’m very appreciative that competition for funding is open to retired faculty. Policy options run the gamut. One possibility to consider would be set-asides to support retired faculty, perhaps small grants within programs for conference presentations, travel, and the like. Or special funding could be designated to facilitate collaboration between retired and active faculty at different universities. A variant would be to support emeritus faculty who mentor or collaborate (or both) with junior faculty. Drawing on my experiences, providing modest support to encourage visiting Ph.D. students to spend a year with a retired faculty member as mentor can pay off nicely for both. At the opposite extreme, funders could preclude retired faculty from acting as PIs (but I hope they don’t).

Beyond specific actions, an overarching message is that the future should not be left to chance. My tale contains a happy confluence of factors that has brought me much satisfaction, enabled active research, and returned value to my university (and to the taxpayers who ultimately provided the federal funding dollars). I lucked out; my choices (especially early retirement) were made pretty casually, without careful consideration of ongoing research means and ends. Better for universities to spell out options so that faculty can plan wisely, and I think those options should be weighted to encourage “active retirement.”

More attention should also be paid to faculty and staff retirement issues writ broadly, reaching beyond the research environment. The literature on faculty retirement finds a lamentable lack of information for, fairness toward, and sensitivity to faculty retirees who want to stay involved. Much could be offered to make retirement more attractive at modest cost. The disparity between detailed policies and procedures for active faculty and the dearth thereof for retired faculty warrants protest and action.

A major concern for universities and the research enterprise more broadly is to expand opportunities for young Ph.D.s for research and full faculty positions. Although, as I’ve suggested, the issue certainly requires more exploration and discussion, one obvious way to create those opportunities is to make semi-retirement attractive and rewarding for the graying faculty. Encourage us to retire! Our productivity may even go up, as we take advantage of greater flexibility in pursuing not just our research but life satisfaction, while making more room for faculty positions for younger generations.

Alan L. Porter (alan.porter@isye.gatech.edu) is professor emeritus, industrial and systems engineering, and public policy, at Georgia Tech.

Perspectives: The True Grand Challenge for Engineering: Self-Knowledge

CARL MITCHAM

In 2003, the National Academy of Engineering (NAE) published A Century of Innovation celebrating “20 engineering achievements that transformed our lives” across the 20th century, from automobiles to the Internet. Five years later, it followed up with 14 Grand Challenges for engineering in the 21st century, including making solar energy affordable, providing energy from fusion, securing cyberspace, and enhancing virtual reality. But only the most cursory mention was made of the greatest challenge of all: cultivating deeper and more critical thinking, among engineers and nonengineers alike, about the ways engineering is transforming how and why we live.

What Percy Bysshe Shelley said about poets two centuries ago applies even more to engineers today: They are the unacknowledged legislators of the world. By designing and constructing new structures, processes, and products, they are influencing how we live as much as any laws enacted by politicians. Would we ever think it appropriate for legislators to pass laws that could transform our lives without critically reflecting on and assessing those laws? Yet neither engineers nor politicians deliberate seriously on the role of engineering in transforming our world. Instead, they limit themselves to celebratory clichés about economic benefit, national defense, and innovation.

Where might we begin to promote more critical reflection in our engineered lives? One natural site would be engineering education. In this respect, it is again revealing to note the role of the NAE Grand Challenges. Not just in the United States, but globally as well, the technical community is concerned about the image of engineering in the public sphere and its limited attractiveness to students. The 2010 United Nations Educational, Scientific and Cultural Organization study Engineering: Issues, Challenges and Opportunities for Development lamented that despite a “growing need for multi-talented engineers, the interest in engineering among young people is waning in so many countries.” The Grand Challenges have thus been deployed in the Grand Challenges Scholars Program as a way to attract more students to the innovative life. But to adapt the title of Vannevar Bush’s Science Is Not Enough, a cultivated enthusiasm for engineering is insufficient. More pointedly, to paraphrase Socrates, “The unexamined engineering life is not worth living.” More than once in dialogue with Greek fellow citizens who boasted of their prowess in meeting challenges, Socrates referenced the words inscribed on the Temple of Apollo at Delphi: Know thyself. It is a motto that engineers—and all of us whose lives are informed by engineering—could well apply to ourselves.

An axial age

In a critical reflection on world history, the German philosopher Karl Jaspers observed how in the first millennium BCE, human cultures in Asia and Europe independently underwent a profound transformation that he named the Axial Age. Thinkers as diverse as Confucius, Laozi, Buddha, Socrates, and the Hebrew prophets began to ask what it means to be human. Humans no longer simply accepted whatever ways of life they were born into; they began to subject their cultures to critical assessment. Today we are entering a new Axial Age, one in which we no longer simply accept the physical world into which we are born. But engineering makes almost no effort to give engineers—or any of the rest of us—the tools to reflect on themselves and their world-transforming enterprise.

Engineering programs like to promote innovation in product creation, and to some extent in pedagogy, yet almost never in critical thinking about what it means to be an engineer. Surely the time has come for engineering schools to become more than glorified trade schools whose graduates can make more money than the hapless English majors whom Garrison Keillor lampoons on A Prairie Home Companion. How about engineers who can think holistically and critically about their own role in making our world and assist their nonengineering fellow citizens as well in thinking that goes beyond superficial promotions of the new? And where might engineers acquire some tools with which to cultivate such abilities? One place to start would be through engagement with the traditions of thought and critical self-reflection that emerged from the original Axial Age: what we now call the humanities.

Two cultures recidivus

To mention engineering and the humanities in the same sentence immediately calls to mind C. P. Snow’s famous criticism of those “natural Luddites” who do not have the foggiest notion about such technical basics as the second law of thermodynamics. Do historians, literary scholars, and philosophers really know anything that can benefit engineers?

Snow’s “two cultures” argument, as well as many discussions since, conflates science and engineering. The powers often attributed to science, such as the ability to overcome poverty through increased production of goods or to send people to the Moon through the construction of spacecraft, belong more to engineering. As a result, there are actually two two-culture issues. The tension between two forms of knowledge production (the sciences and the humanities) is arguably less significant than the tension between designing and constructing the world and reflecting on what it means (engineering and the humanities).

Indeed, although there is certainly room for improvement on the humanities side, I venture that a majority of humanities teachers in engineering schools today could pass the test Snow proposed to the literary intellectuals he skewered. Yet in my experience relatively few engineers, when invited to reflect on their professions, can do much more than echo libertarian appeals to the need for unfettered innovation to fuel endless growth. Even the more sophisticated commentators on engineering such as Samuel Florman (The Existential Pleasures of Engineering), Henry Petroski (To Engineer Is Human), and Billy Vaughn Koen (Discussion of the Method: Conducting the Engineer’s Approach to Problem Solving) are largely absent from engineering curricula.

The two-cultures problem in engineering schools is distinctive. It concerns how to infuse into engineering curricula the progressive humanities and qualitative social sciences, as pursued by literary intellectuals who strive to make common cause with that minority of engineers who are themselves critical of the cultural captivity of techno-education. There are, for instance, increasing efforts to develop programs in humanitarian engineering, service learning, and social justice. Nevertheless, having taught in three engineering schools, I—like many humanities scholars who teach engineering students—experience a continuing tension between engineering and the humanities. Such is especially the case today, in an increasingly corporatized environment at an institution oriented toward the efficient throughput of students who can serve as handmaids of an expanding energy industry.

On the one side, engineering faculty (administrators even more so) have a tendency to look on humanities courses as justified only insofar as they provide communication skills. They want to know the cash value of humanities courses for professional success. The engineering curriculum is so full that they feel compelled to limit humanities and social science requirements, commonly to little more than a semester’s worth, spread over an eight-semester degree program crammed with science and engineering.

Unlike professional degrees in medicine or law, which typically require a bachelor’s degree of some sort before professional focus, entry into engineering is via the B.S. degree alone. This has undoubtedly been one feature attracting many students who are the first members of their families to attend college. It is an upward-mobility degree, even if there is not quite the demand for engineers that the engineering community often proclaims.

Why humanities?

On the other side, humanities faculty (there are seldom humanities administrators with any influence in engineering schools) struggle to justify their courses. These justifications are of three unequal types: an instrumental approach, an enhanced instrumental approach, and an intrinsic-value approach.

The first, default appeal is to the instrumental value of communication skills. Engineers who cannot write or otherwise communicate their work are at a disadvantage, not only in abilities to garner respect from people outside the engineering community but even within technical work teams. The humanities role in teaching critical thinking is an expanded version of this appeal. All engineers need to be critical thinkers when analyzing and proposing design solutions to technical problems. But why no critical thinking about the continuous push for innovation itself? Too often, the humanities are simply marshalled to provide rhetorical skills for jumping aboard the more-is-better innovation bandwagon—or criticized for failing to do so.

A second, enhanced instrumental appeal stresses how humanities knowledge, broadly construed to include the qualitative social sciences, can help engineers manage alleged irrational resistance to technological innovation from the nonengineering world. This enhanced instrumental appeal argues that courses in history, political science, sociology, anthropology, psychology, and geography—perhaps even in literature, philosophy, and religion—can locate engineering work in its broader social context. Increasingly engineers recognize that their work takes place in diverse sociocultural situations that need to be negotiated if engineering projects are to succeed.

In similar ways, engineering practice can itself be conceived as a techno-culture all its own. The interdisciplinary field of science, technology, and society (STS) studies receives special recognition here. Many interdisciplinary STS programs arose inside engineering schools, and even after their transformation to disciplinary science and technology studies, some departments have remained closely connected to engineering faculties.

The enhanced instrumental appeal further satisfies the requirements of ABET (formerly the Accreditation Board for Engineering and Technology, now known by its acronym alone). In order to be ABET-accredited, engineering programs must be structured around 11 student outcomes. Central to these outcomes are appropriate mastery of technical knowledge in mathematics and the sciences, including the engineering sciences, and the practices of engineering design, including abilities “to identify, formulate, and solve engineering problems” and “to function on multidisciplinary teams.” Engineers further need to learn how to design products, processes, and systems “to meet desired needs within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability” and possess “the broad education necessary to understand the impact of engineering solutions in a global, economic, environmental, and societal context.” Finally, engineering students should be taught “an ability to communicate effectively” and “professional and ethical responsibility.” Clearly the humanities need to be enrolled in delivering the fuzzier of these outcomes.

The challenge of professional ethical responsibility deserves highlighting. It is remarkable how, although professional engineering codes of ethics identify the promotion of public safety, health, and welfare as primary obligations, the engineering curriculum shortchanges these key concepts. There exists a field termed safety engineering but none called health or welfare engineering. And even if there were, because the promotion of these values is an obligation for all engineers, their examination would need to be infused across the curriculum. Physicians, who also have a professional commitment to the promotion of health, have to deal with the meaning of this concept in virtually every course they take in medical school.

The 2004 NAE report The Engineer of 2020: Visions of Engineering in the New Century emphasized that engineering education needs to cultivate not just analytic skills and technical creativity but also communication skills, management leadership, and ethical professionalism. Meeting almost any of the subsequent NAE list of Grand Challenges, many engineers admit, will require extensive social-context knowledge from the humanities and social sciences. The humanities are accepted as providing legitimate if subordinate service to engineering professionalism even as they are regularly shortchanged in engineering schools.

But it is a third, less instrumental justification for the humanities in engineering education that will be most important for successfully engaging the ultimate Grand Challenge of self-knowledge, that is, of thinking reflectively and critically about the kind of world we wish to design, construct, and inhabit in and through our technologies. The existential pleasures of engineering, not to mention its economic benefits, are limited. Human beings are not only geeks and consumers. They are also poets, artists, religious believers, citizens, friends, and lovers in various degrees all at the same time. The engineering curriculum should be more than an intensified vocational program that assumes students either are, or should become, one-dimensional in their lives. Engineers, like all of us, should be able to think about what it means to be human. Indeed, critical reflection on the meaning of life in a progressively engineered world is a new form of humanism appropriate to our time—a humanities activity in which engineers could lead the way.

Re-envisioning engineering

Primarily aware of requirements for graduation, engineering students are seldom allowed or encouraged to pursue in any depth the kind of humanities that could assist them, and all of us, in thinking about the relationship between engineering and the good life. They sign up for humanities classes on the basis of what fits their schedules, but then sometimes discover classes that not only provide relief from the forced march of technical work but that broaden their sense of themselves and stimulate reflection on what they really want to do with their lives. A few months ago a student in an introduction to philosophy class told me he was tired of engineering physics courses that always had to solve practical problems. He wanted to think about the nature of reality.

If he drops out of engineering, as some of my students have done, the humanities are likely to be blamed, rather than credited with expanding a sense of the world and life. The cost/benefit assessment model in colleges today is progressively coarsening the purpose of higher education. As Clark University psychologist Jeffrey Arnett argues, emerging adulthood is a period of self-discovery during which students can explore different paths in love and work. It took me seven years and three universities to earn my own B.A., years that were in no way cost/benefit-negative. Bernie Machen, president of the University of Florida, has been quoted (in the Chronicle of Higher Education) as telling students that their “time in college remains the single-best opportunity … to explore who you are and your purpose in life.” Engineering programs, because of their rigorous technical requirements, tend to be the worst offenders at cutting intellectual exploration short. This situation needs to be reversed, in the service of both engineering education and of our engineered world. If they really practiced what they preached about innovation, engineering schools would lead the way with expanded curricula and even B.A. degrees in engineering.

In physicist Mark Levinson’s insightful documentary film Particle Fever, the divide between experimentalists and theorists mirrors that between engineering and the humanities. But in the case of the Large Hadron Collider search for the Higgs boson chronicled in the film, the experimentalists and theorists work together, insofar as theorists provide the guidance for experimentation. Ultimately, something similar has to be the case for engineering. Engineering does not provide its own justification for transforming the world, except at the unthinking bottom-line level, or much guidance for what kind of world we should design and construct. We wouldn’t think of allowing our legislators to make laws without our involvement and consent; why are we so complacent about the arguably much more powerful process of technical legislation?

As mentioned, what Jaspers in the mid-20th century identified as an Axial Age in human history—one in which humans began to think about what it means to be human—exists today in a new form: thinking about what it means to live in an engineered world. In this second Axial Age, we are beginning to think about not just the human condition but what has aptly been called the techno-human condition: our responsibility for a world, including ourselves, in which the boundaries dissolve between the natural and the artificial, between the human and the technological. And just as a feature of the original Axial Age was learning to affirm limits to human action—not to murder, not to steal—so we can expect to learn not simply to affirm engineering prowess but to limit and steer our technological actions.

Amid the Grand Challenges articulated by the NAE there must thus be another: The challenge of thinking about what we are doing as we turn the world into an artifact and the appropriate limitations of this engineering power. Such reflection need not be feared; it would add to the nobility of engineering in ways that little else could. It is also an innovation within engineering in which others are leading the way. The Netherlands, for instance (not surprisingly, as the country that, given its dependence on the Deltawerken, comes closest to being an engineered artifact), has the strongest community of philosophers of engineering and technology in the world, based largely at the three technological universities of Delft, Eindhoven, and Twente and associated with the 3TU Centre for Ethics and Technology. China, which is undergoing the most rapid engineering transformation in world history, is also a pioneer in this field. The recent 20th-anniversary celebration of the Chinese Academy of Engineering included extended sessions on the philosophy of engineering and technology. Is it not time for the leaders of the engineering community in the United States, instead of fear-mongering about the production of engineers in China, to learn from China—and to insist on a deepening of our own reflections? The NAE Center for Engineering, Ethics, and Society is a commendable start, but one too little appreciated in the U.S. engineering education world, and its mandate deserves broadening and deepening beyond ethical and social issues.

The true Grand Challenge of engineering is not simply to transform the world. It is to do so with critical reflection on what it means to be an engineer. In the words of the great Spanish philosopher José Ortega y Gasset, in the first philosophical meditation on technology, to be an engineer and only an engineer is to be potentially everything and actually nothing. Our increasing engineering prowess calls upon us all, engineers and nonengineers alike, to reflect more deeply about who we are and what we really want to become.

Carl Mitcham (cmitcham@mines.edu) is professor of Liberal Arts and International Studies at the Colorado School of Mines and a member of the adjunct faculty at the European Graduate School in Saas-Fee, Switzerland.

Editor’s Journal: Science: Too Big for Its Britches?

KEVIN FINNERAN

Science ain’t what it used to be, except perhaps in the systems we have for managing it. The changes taking place are widely recognized. The enterprise is becoming larger and more international, research projects are becoming more complex and research teams larger, university-industry collaboration is increasing, the number of scientific journals and research papers published is growing steadily, computer modeling and statistical analysis are playing a growing role in many fields, interdisciplinary teams are becoming more numerous and more heterogeneous, and the competition for finite resources and for prime research jobs is intensifying.

Many of these trends are the inevitable result of scientific progress, and many of them are actually very desirable. We want to see more research done around the world, larger and more challenging problems studied, more science-enabled innovation, more sharing of scientific knowledge, more interaction among disciplines, better use of computers, and enough competition to motivate scientists to work hard. But this growth and diversification of activities is straining the existing management systems and institutional mechanisms responsible for maintaining the quality and social responsiveness of the research enterprise. One undesirable trend has been the growth of attention in the popular press to falsified research results, abuse of human and animal research subjects, conflict of interest, the appearance of irresponsible journals, and complaints about overbuilt research infrastructure and unemployed PhDs. One factor that might link these diverse developments is the failure of the management system to keep pace with the changes and growth in the enterprise.

The pioneering open access journal PLOS ONE announced in June 2014 that after seven and a half years of operation it had published 100,000 articles. There are now tens of thousands of scientific journals, and more than 1 million scientific papers will be published in 2014. Maintaining a rigorous review system and finding qualified scientists to serve as reviewers is an obvious challenge, particularly when senior researchers are spending more time writing proposals because constrained government spending has caused rates of successful funding to plummet in the United States.

Craig Mundie, the former chief research and strategy officer at Microsoft and a member of the President’s Council of Advisors on Science and Technology (PCAST), has voiced his concern that the current review system is not designed to meet the demands of today’s data-intensive science. Reviewers are selected on the basis of their disciplinary expertise in particle physics or molecular biology, when the quality of the research actually hinges on the design and use of the computer models. He says that we cannot expect scholars in those areas to have the requisite computer science and statistics expertise to judge the quality of the data analysis.

Data-intensive research introduces questions about transparency and the need to publish results of every experiment. Is it necessary to publish all the code of the software used to conduct a big data search and analysis? If a software program makes it possible to quickly conduct thousands of runs with different variables, is it necessary to make the results of each run available? Who is responsible for maintaining archives of all data generated in modeling experiments? Many scientists are aware of these issues and have been meeting to address them, but they are still playing catch-up with fast-moving developments.

In the past several decades the federal government’s share of total research funding fell from roughly two-thirds to one-third, and industry now provides about two-thirds. In this environment it is not surprising that university researchers seek industry support. It is well understood that researchers working in industry do not publish most of their work because it has proprietary value to the company, but the ethos of university researchers is based on openness. In working with industry funders, university researchers and administrators need the knowledge and capacity to negotiate agreements that preserve this principle.

About one-third of the articles published by U.S. scientists have a coauthor from another country, which raises questions about inconsistencies in research and publishing procedures. Countries differ in practices such as citing references in proposals, attributing paraphrased text to its original source, and listing lab directors as authors whether or not they participated in the research. Failure to understand these differences can lead to inadequate review and oversight. Similar differences in practice exist across disciplines, which can lead to problems in interdisciplinary research.

Globalization is also evident in the movement of students. The fastest-growing segment of the postdoctoral population comprises people who earned their PhDs in other countries. Although they now make up more than half of all postdocs, the National Science Foundation tracks the career progress only of people who earned their PhDs in the United States. We thus know little about the career trajectories of the majority of postdocs. It would be very useful to know why they come to the United States, how they evaluate their postdoctoral experience, and what role they ultimately play in research. This could help us answer the pressing question of whether the postdoctoral appointment is serving as a useful career-development step or whether its primary function is to provide low-cost research help to principal investigators.

The scientific community has fought long and hard to preserve the power to manage its own affairs. It wants scientists to decide which proposals deserve to be funded, what the rules for transparency and authorship should be in publishing, what behavior constitutes scientific misconduct and how it should be punished, and who should be hired and promoted. In general it has used this power wisely and effectively. Public trust is higher in science than in almost any other profession. Although science funding has suffered in the recent period of federal budget constraint, it has fared better than most areas of discretionary spending.

Still, there are signs of concern. The October 19, 2013, Economist carried a cover story on “How science goes wrong,” identifying a range of problems with the current scientific enterprise. Scientists themselves have published articles that question the reproducibility of much research and that note worrisome trends in the number of articles that are retracted. A much-discussed article in the Proceedings of the National Academy of Sciences by scientific superstars Harold Varmus, Shirley Tilghman, Bruce Alberts, and Marc Kirschner highlighted serious problems in biomedical research and worried about overproduction of PhDs. Members of Congress are making a concerted effort to influence NSF funding of the social sciences, and climate change deniers would jump at the opportunity to influence that portfolio. And PCAST held a hearing at the National Academies to further explore problems of scientific reproducibility.

Because its management structure and systems have served science well for so long, the community is understandably reluctant to make dramatic changes. But we have to recognize that these systems were designed for a smaller, simpler, and less competitive research enterprise. We should not be surprised if they struggle to meet the demands of a very different and more challenging environment. For research to thrive, it requires public trust. Maintaining that trust will require that the scale and nature of management match the scale and nature of operations.

We all take pride in the increasingly prominent place that science holds in society, but that prominence also brings closer scrutiny and responsibility. The Internet has vastly expanded our capacity to disseminate scientific knowledge, and that has led many people to know more about how research is done and how decisions are made. In rethinking how science is managed and how its quality is preserved, the goal is not to isolate science from society. We build trust by letting people see how rigorously the system operates and by listening to their ideas about what they want and expect from science. The challenge is to craft a management system that is adequate to deal with the complexities of the evolving research enterprise and also sufficiently transparent and responsive to build public trust.

Saturday Night Live once did a mock commercial for a product called Shimmer. The wife exclaimed, “It’s a floor wax.” The husband bristled, “No, it’s a dessert topping.” After a couple of rounds, the announcer interceded: “You’re both right. It’s a floor wax and a dessert topping.” Fortunately, the combination of scientific rigor and social responsiveness is not such an unlikely merger.

From the Hill

Budget discussions inch forward

Congress returned to Washington in September to do a little business before heading home to campaign. As usual at this time of year, there’s still quite a bit of work to do to complete the budget process for fiscal year (FY) 2015, which begins October 1. Senate Appropriations Chair Barbara Mikulski (D-MD) remains interested in a September omnibus bill that would package all or several bills into one, but the odds seem to favor the House Republicans’ preference for a continuing resolution until the new Congress takes office.

With all this uncertainty, it’s hard to say when appropriations will be finalized and what they will be. Nevertheless, enough discussion and preliminary action have taken place to provide a general picture of congressional preferences for R&D funding in FY 2015. The Senate committees have prepared budgets for the six largest R&D spending bills, which account for 97% of all federal R&D, but none of these budgets have cleared the full Senate. House committees have prepared budgets for all the major categories except the Labor, Health and Human Services (HHS), and Education bill [which includes the National Institutes of Health (NIH)]. The full House has approved the Defense (DOD); Energy and Water; and Commerce, Justice, and Science [which includes the National Science Foundation (NSF), National Aeronautics and Space Administration, National Institute of Standards and Technology, and National Oceanic and Atmospheric Administration] appropriations bills.

So far, according to AAAS estimates, current House R&D appropriations, which do not include NIH, would result in a 0.8% increase from FY 2014 in nominal dollars; current Senate appropriations for the same agencies would result in just a 0.1% increase. With the Labor-HHS bill included, the Senate appropriation would result in a 0.7% increase. All of these figures would be reductions in constant dollars.

Most R&D spending has followed essentially the same trajectory in recent years. After a sharp decline with sequestration in FY 2013, budgets experienced at least a partial recovery in FY 2014 and seem likely to have a small inflation-adjusted decline in FY 2015. There has been some notable variation. Funding for health, environmental, and education research has made less progress in returning to pre-sequester levels. Defense science and technology (S&T) spending neared pre-sequester levels in FY 2014 but seems likely to fall short of that mark in FY 2015. Downstream technology development funding at DOD would remain well below FY 2012 levels.

In the aggregate, FY 2015 R&D appropriations are not terribly far apart in the House and Senate. This is a departure from what happened in developing the FY 2014 budget, when the House and Senate differed on overall discretionary spending levels. This difference led to large discrepancies in R&D appropriations. The conflict over discretionary spending was resolved in last December’s Bipartisan Budget Act, and this agreement has led to the relatively similar R&D appropriations being produced by each chamber for FY 2015.

This is consistent with the idea that the primary determinant of the R&D budget is the size of the overall discretionary budget. However, it is also worth noting that the very modest nominal increase in aggregate R&D spending would still be larger than the 0.2% nominal growth projected for the total discretionary budget. Indeed, R&D in the five major nondefense bills listed above would generally beat this pace by a clear margin in both chambers, suggesting that appropriators with limited fiscal flexibility have prioritized science and innovation to some extent.

Under current appropriations, federal R&D would continue to stagnate as a share of the economy, as it would under the president’s original budget request (excluding the proposed but largely ignored Opportunity, Growth, and Security Initiative). Federal R&D, which represented 1.04% of gross domestic product (GDP) in FY 2003 at the end of the NIH budget doubling, is now below 0.8%. Both current appropriations and the president’s request would place it at about 0.75% of GDP in FY 2015. Research alone, excluding development, has declined from 0.47% of GDP in FY 2003 to 0.39% today, and current proposals would take it a bit lower, to about 0.37%.

Even though final decisions for FY 2015 appropriations are still some months away, agencies are already at work on their budget proposals for FY 2016. The administration released a set of memos outlining S&T priorities for the FY 2016 budget, due in February. Priorities include: advanced manufacturing; clean energy; earth observations; global climate change; information technology and high-performance computing; innovation in life sciences, biology, and neuroscience; national and homeland security; and R&D for informed policymaking and management.

Congress tackles administrative burden

In response to a March 2014 National Science Board (NSB) report on how some federal rules and regulations were placing an unnecessary burden on research institutions, the House Science, Space, and Technology Committee’s oversight and research panels held a joint hearing on June 12 on Reducing Administrative Workload for Federally Funded Research. The witnesses, including Arthur Bienenstock, the chairman of the NSB’s Task Force on Administrative Burdens; Susan Wyatt Sedwick, the president of the Federal Demonstration Partnership (FDP) Foundation; Gina Lee-Glauser, the vice president of research at Syracuse University; and Allison Lerner, the inspector general of NSF, represented stakeholders affected by changes in the oversight of federally funded research.

Concern over investigators’ administrative burdens began in 2005 when an FDP report revealed that federally funded investigators spend an average of 42% of their time on administrative tasks, dealing with a panoply of regulations in areas such as conflict of interest, research integrity, human subjects protections, animal care and use, and disposal of hazardous wastes. Despite federal reform efforts, in 2012 the FDP found that the average time spent on “meeting requirements rather than conducting research” remained at 42%. In response, the NSB convened a task force charged with investigating this issue and developing recommendations for reform.

On March 29, 2013, the task force issued a request for information (RFI) in the Federal Register, inviting “principal investigators with Federal research funding … to identify Federal agency and university requirements that contribute most to their administrative workload and to offer recommendations for reducing that workload.” The task force used responses from the RFI and information collected at three roundtables with investigators and administrators to write its report.

During the June hearing, the witnesses discussed the report’s recommendations. The four main recommendations were for policymakers to focus on the science, eliminate or modify ineffective regulations, harmonize and streamline requirements, and increase university efficiency and effectiveness.

Bienenstock of the NSB spoke about the report’s tangible suggestions, which include changing NSF’s proposal guidelines to require in the initial submission only the information necessary to determine whether a research project merits funding, deferring ancillary materials not critical to merit review; adopting a system like the FDP’s pilot project in payroll certification to replace time-consuming and outdated effort reporting; and establishing a permanent high-level interagency committee to address obsolete regulations and discuss new ones.

Sedwick echoed the usefulness of the FDP’s payroll-certification pilot and noted that the FDP is a perfect forum for testing new reporting mechanisms that could lead to a more efficient research enterprise. In her testimony, Lee-Glauser addressed how the ever more competitive funding environment is taking investigators away from their research for increasing periods of time to write grants, and noted that the current framework for regulating research on human subjects is too stringent for the low-risk social and behavioral research being performed at Syracuse University. Inspector General Lerner, championing the auditing process, spoke about the importance of using labor-effort reports to prevent fraud and noted that the Office of Management and Budget is in the process of auditing the FDP’s payroll-certification pilot project to determine its effectiveness and scalability. She also mentioned that even though requiring receipts only for large purchases made with grant money would be less time-consuming, it would not prevent investigators from committing fraud by making many small purchases. Lerner closed by reminding the room that “acceptance of public money brings with it a responsibility to uphold the public’s trust.”

In addition to the payroll-certification pilot, a few other changes are in the works that could implement some of the recommendations in the NSB report. Currently, the NSF Division of Integrative Organismal Systems and Division of Environmental Biology are piloting a pre-proposal program that requires only a one-page summary and five-page project description for review.

On July 8, the House addressed the issue with its passage of the Research and Development Efficiency Act (H.R. 5056), a bipartisan bill introduced by Rep. Larry Bucshon (R-IN), which would establish a working group through the administration’s National Science and Technology Council to make recommendations on streamlining federal regulations affecting research.

- Keizra Mecklai

In brief

The House passed several S&T bills in July. These include the Department of Energy Laboratory Modernization and Technology Transfer Act (H.R. 5120), which would establish a pilot program for commercializing technology; a two-year reauthorization (H.R. 5035) for the National Institute of Standards and Technology (NIST), which would authorize funding for NIST at $856 million for FY 2015; the International Science and Technology Cooperation Act (H.R. 5029), which would establish a body under the National Science and Technology Council to coordinate international science and technology cooperative research and training activities and partnerships; the STEM Education Act (H.R. 5031), which would support existing science, technology, engineering, and mathematics (STEM) education programs at NSF and define STEM to include computer science; and the National Windstorm Impact Reduction Act (H.R. 1786) to reauthorize the National Windstorm Impact Reduction Program. The House rejected a modified version of the Securing Energy Critical Elements and American Jobs Act of 2014 (H.R. 1022), which would authorize $25 million annually from FY 2015 to FY 2019 to support a Department of Energy R&D program for energy-critical elements.

Members of the Senate, led by Sen. John D. Rockefeller (D-WV), chair of the Senate Commerce, Science, and Transportation Committee, have released their own America COMPETES reauthorization bill. The bill would authorize significant multiyear funding increases for NSF and NIST, while avoiding the changes to NSF peer review and the cuts to social science funding proposed by the House Science Committee in the Frontiers in Innovation, Research, Science, and Technology (FIRST) Act. With the short legislative calendar, progress on the bill is unlikely in the near term.

On July 25, the House Science, Space, and Technology Committee approved the Revitalize American Manufacturing and Innovation Act (H.R. 2996), which would establish a network of public/private institutes focusing on innovation in advanced manufacturing, involving both industry and academia. The creation of such a network has long been a goal of the administration, and a handful of pilot institutes have already been established. A companion bill (S. 1468) awaits action in the Senate.

On July 16, eight Senators, including Environment and Public Works Committee Ranking Member John Barrasso (R-WY), introduced a companion bill (S. 2613) to the House Secret Science Reform Act (H.R. 4012), which passed the House Science, Space, and Technology Committee along party lines on June 24. The bill would prohibit the Environmental Protection Agency (EPA) from proposing, finalizing, or disseminating regulations or assessments unless all underlying data were reproducible and made publicly available.

On June 26, Sens. Kirsten Gillibrand (D-NY) and Daniel Coats (R-IN) introduced the Technology and Research Accelerating National Security and Future Economic Resiliency (TRANSFER) Act (S. 2551). The legislation, a companion to a House bill (H.R. 2981) originally introduced last year by Reps. Chris Collins (R-NY) and Derek Kilmer (D-WA), would create a funding program within the Small Business Technology Transfer program, “to accelerate the commercialization of federally-funded research.” The grants would support efforts such as proof of concept of translational research, prototype construction, and market research.

The Department of Energy Research and Development Act (H.R. 4869), introduced in the House by Rep. Cynthia Lummis (R-WY) on June 13, would authorize a 5.1% budget increase over the FY 2014 level for the Office of Science and a 14.3% cut in the Advanced Research Projects Agency–Energy budget. In the subcommittee’s summary, Section 115 “directs the Director to carry out a program on biological systems science prioritizing fundamental research on biological systems and genomics science and requires the Government Accountability Office (GAO) to identify duplicative climate science initiatives across the federal government. Section 115 limits the Director from approving new climate science-related initiatives unless the Director makes a determination that such work is unique and not duplicative of work by other federal agencies. This section also requires the Director to cease all climate science-related initiatives identified as duplicative in the GAO assessment unless the Director determines such work to be critical to achieving American energy independence.”

Executive actions support Obama’s science agenda

In an effort to circumvent a deadlocked Congress, President Obama has issued a number of executive actions to advance his science policy goals. After the DREAM Act immigration bill stalled in Congress, the President in 2012 announced the Deferred Action for Childhood Arrivals policy, which allows undocumented individuals in the United States to become eligible for employment authorization (though not permanent residency) if they were under age 31 on June 15, 2012; arrived in the United States before turning 16 years of age; have lived in the United States since June 15, 2007; and are currently in school or hold a GED or higher degree, among other requirements. This step toward immigration reform may allow undocumented residents with STEM degrees or careers to stay in the country and continue to support the American STEM workforce. Then, in 2013, once again in response to failed legislation and the tragic shooting in Newtown, CT, Obama took action on gun control. Among other things, he lifted what amounted to a ban on federally funded research about the causes of gun violence.

Most recently, the EPA released a proposed rule to reduce carbon emissions by 30% below 2005 levels by 2030, as directed by the President’s executive actions contained in his Climate Action Plan. The rule would allow each state to implement a plan that works best for its economy and energy mix, and has been a source of controversy on Capitol Hill; members of Congress and other stakeholders are already engaged in a heated debate as to whether the EPA has authority (through the Clean Air Act) to regulate greenhouse gas emissions.

Agency updates

On July 23, U.S. Department of Agriculture (USDA) Secretary Tom Vilsack announced the creation of the Foundation for Food and Agricultural Research (FFAR) to facilitate the support of agriculture research through both public and private funding. FFAR, authorized in the 2014 Farm Bill, will be funded at $200 million and must receive matching funds from nonfederal sources when making awards for research.

NIH is teaming up with NSF to launch I-Corps at NIH, a pilot program based on NSF’s Innovation Corps. The program will allow researchers with Small Business Innovation Research and Small Business Technology Transfer (SBIR/STTR) Phase 1 awards, which establish feasibility or proof of concept for technologies that could be commercialized, to enroll in a training program that helps them explore potential markets for their innovations.

In response to the June 16 National Academies report on the National Children’s Study, a plan by NIH to study the health of 100,000 U.S. babies up to age 21, NIH Director Francis Collins decided to put the ambitious study, which has already faced more than a decade of costly delays, on hold. The Academies panel indicated that the study’s hypotheses should be more scientifically robust and that the study would benefit from more scientific expertise and management. It also recommended changes to the subject recruitment process.

“From the Hill” is adapted from the newsletter Science and Technology in Congress, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

Military Innovation and the Prospects for Defense-Led Energy Innovation

EUGENE GHOLZ

Although the Department of Defense has long been the global innovation leader in military hardware, that capability is not easily applied to energy technology

Almost all plans to address climate change depend on innovation, because the alternatives by themselves—reducing greenhouse gas emissions via the more efficient use of current technologies or by simply consuming less of everything—are either insufficient, intolerable, or both. Americans are especially proud of their history of technology leadership, but in most sectors of the economy, they assume that private companies, often led by entrepreneurs and venture capitalists, will furnish the new products and processes. Unfortunately, energy innovation poses exceptionally severe collective action problems that limit the private sector’s promise. Everyone contributes emissions, but no one contributes sufficient emissions that a conscious effort to reduce them will make a material difference in climate change, so few people try hard. Without a carbon tax or emissions cap, most companies have little or no economic incentive to reduce emissions except as a fortuitous byproduct of other investments. And the system of production, distribution, and use of energy creates interdependencies across companies and countries that limit the ability of any one actor to unilaterally make substantial changes.

In principle, governments can overcome these problems through policies to coordinate and coerce, but politicians are ever sensitive to imposing costs on their constituents. They avoid imposing taxes and costly regulations whenever possible. Innovation presents the great hope to solve problems at reduced cost. In the case of climate change, because of the collective action problems, government will have to lead the innovative investment.

Fortunately, the U.S. government has a track record of success with developing technologies to address another public good. Innovation is a hallmark of the U.S. military. The technology that U.S. soldiers, sailors, and airmen bring to war far outclasses adversaries’. Even as Americans complain about the challenges of deploying new military equipment, always wishing that technical solutions could do more and would arrive in the field faster, they also take justifiable pride in the U.S. defense industry’s routine exploitation of technological opportunities. Perhaps that industry’s technology savvy could be harnessed to develop low-emissions technologies. And perhaps the Defense Department’s hefty purse could purchase enough to drive the innovations down the learning curve, so that they could then compete in commercial markets as low-cost solutions, too.

That potential has attracted considerable interest in defense-led energy innovation. In fact, in 2008, one of the first prominent proposals to use defense acquisition to reduce energy demand came from the Defense Science Board (DSB), a group of expert advisors to the Department of Defense (DOD) itself. The DSB reported, “By addressing its own fuel demand, DoD can serve as a stimulus for new energy efficiency technologies…. If DoD were to invest in technologies that improved efficiency at a level commensurate with the value of those technologies to its forces and warfighting capability, it would probably become a technology incubator and provide mature technologies to the market place for industry to adopt for commercial purposes.” Various think tanks took up the call from there, ranging from the CNA Corporation (which includes the Center for Naval Analyses) to the Pew Charitable Trusts’ Project on National Security, Energy and Climate. Ultimately, the then–Deputy Assistant to the President for Energy and Climate Change, Heather Zichal, proclaimed her hope for defense-led energy innovation on the White House blog in 2013.

These advocates hope not only to use the model of successful military innovation to stimulate innovation for green technologies but to actually use the machinery of defense acquisition to implement their plan. They particularly hope that the DOD will use its substantial procurement budget to pull the development of new energy technologies. Even when the defense budget faces cuts as the government tries to address its debt problem, other kinds of government discretionary investment are even more threatened, making defense ever more attractive to people who hope for new energy technologies.

The U.S. government has in part adopted this agenda. The DOD and Congress have created a series of high-profile positions that include an Assistant Secretary of Defense for Operational Energy Plans and Programs within the Pentagon’s acquisition component. No one in the DOD’s leadership wants to see DOD investment diverted from its primary purpose of providing for American national security, but the opportunity to address two important policy issues at the same time is very appealing.

The appeal of successful military innovation is seductive, but the military’s mixed experience with high-tech investment should restrain some of the exuberance about prospects for energy innovation. We know enough about why some large-scale military innovation has worked, while some has not, to predict which parts of the effort to encourage defense-led energy innovation are likely to be successful; enough to refine our expectations and target our investment strategies. This article carefully reviews the defense innovation process and its implications for major defense-led energy innovation.

Defense innovation works because of a particular relationship between the DOD and the defense industry that channels investment toward specific technology trajectories. Successes on “nice-to-have” trajectories, from DOD’s perspective, are rare, because the leadership’s real interest focuses on national security. Civilians are well aware of the national security and domestic political risks of even the appearance of distraction from core warfighting missions. When it is time to make hard choices, DOD leadership will emphasize performance parameters directly related to the military’s critical warfighting tasks, as essentially everyone agrees it should. Even in the relatively few cases in which investment to solve the challenges of the energy sector might directly contribute to the military component of the U.S. national security strategy, advocates will struggle to harness the defense acquisition apparatus. But a focused understanding of how that apparatus works will make their efforts more likely to succeed.

Jamey Stillings #26, 15 October 2010. Fine art archival print. Aerial view over the future site of the Ivanpah Solar Electric Generating System prior to full commencement of construction, Mojave Desert, CA, USA.

Jamey Stillings

Photographer Jamey Stillings’ fascination with the human-altered landscape and his concerns for environmental sustainability led him to document the development of the Ivanpah Solar Power Facility. Stillings took 18 helicopter flights to photograph the plant, from its groundbreaking in October 2010 through its official opening in February 2014. Located in the Mojave Desert of California, Ivanpah Solar is the world’s largest concentrated solar thermal power plant. It spans nearly 4,000 acres of public land and deploys 173,500 heliostats (347,000 mirrors) to focus the sun’s energy on three towers, generating 392 megawatts of electricity, enough to power 140,000 homes.

The photographs in this series formed the basis for Stillings’ current project, Changing Perspectives on Renewable Energy Development, an aerial and ground-based photographic examination of large-scale renewable energy initiatives in the American West and beyond.

Stillings’ three-decade career spans documentary, fine art, and commissioned projects. Based in Santa Fe, New Mexico, he holds an MFA in photography from Rochester Institute of Technology, New York. His work is in the collections of the Library of Congress, Washington, DC; the Museum of Fine Arts, Houston; and the Nevada Museum of Art, Reno, among others, and has been published in The New York Times Magazine, Smithsonian, and fotoMagazin. His second monograph, The Evolution of Ivanpah Solar, will be published in 2015 by Steidl.

—Alana Quinn

Jamey Stillings #4546, 28 July 2011. Fine art archival print. Aerial overview of Solar Field 1 before heliostat construction, looking northeast toward Primm, NV.

How weapons innovation has succeeded

Defense acquisition is organized by programs, the largest and most important of which are almost always focused on developing a weapons system, although sometimes the key innovations that lead to improved weapons performance come in a particular component. For example, a new aircraft may depend on a better jet engine or avionics suite, but the investment is usually organized as a project to develop a fighter rather than one or more key components. Sometimes the DOD buys major items of infrastructure such as a constellation of navigation satellites, but those systems’ performance metrics are usually closely tied to weapons’ performance; for example, navigation improves missile accuracy, essential for modern warfare’s emphasis on precision strike. Similarly, a major improvement in radar can come as part of a weapons system program built around that new technology, as the Navy’s Aegis battle management system incorporated the SPY-1 phased array radar on a new class of ships. To incorporate energy innovation into defense acquisition, the DOD and the military services would similarly add energy-related performance parameters to their programs, most of which are weapons system programs. The military’s focus links technology to missions. Each project relies on a system of complex interactions of military judgment, congressional politics, and defense industry technical skill.

Jamey Stillings #8704, 27 October 2012. Fine art archival print. Aerial view showing delineation of future solar fields around an existing geologic formation.

Defense innovation has worked best when customers—DOD and the military services—understand the technology trajectory that they are hoping to pull and when progress along that technology trajectory is important to the customer organization’s core mission. Under those circumstances, the customer protects the research effort, provides useful feedback during the process, adequately (or generously) funds the development, and happily buys the end product, often helping the developer appeal to elected leaders for funding. The alliance between the military customer and private firms selling the innovation can overcome the tendency to free ride that plagues investment in public goods such as defense and energy security.

Demand pull to develop major weapons systems is not the only way in which the United States has innovated for defense, but it is the principal route to substantial change. At best, other innovation dynamics, especially technology-push efforts that range from measured investments to support manufacturing scale-up to the Defense Advanced Research Projects Agency’s drive for leap-ahead inventions, tend to yield small improvements in the performance of deployed systems in the military’s inventory. More often, because technological improvement itself is rarely sufficient to create demand, inventions derived from technology-push R&D struggle to find a home on a weapons system: Program offices, which actually buy products and thereby create the demand that justifies building production-scale factories, tend to feel that they would have funded the R&D themselves, if the invention were really needed to meet their performance requirements. Bolting on a new technology developed outside the program also can add technological risk—what if the integration does not work smoothly?—and program managers shun unnecessary risk. The partial exceptions are inventions such as stealth, where the military quickly connected the new technology to high-priority mission performance.

But most technology-push projects that succeed yield small-scale innovations that can matter a great deal at the level of local organizations but do not attract sufficient resources and political attention to change overall national capabilities. In energy innovation, an equivalent example would be a project to develop a small solar panel to contribute to electricity generation at a remote forward operating base, the sort of boon to warfighters that has attracted some attention during the Afghanistan War but that contributes to a relatively low-profile acquisition program (power generation as opposed to, say, a new expeditionary fighting vehicle) and will not even command the highest priority for that project’s program manager (who must remain focused on baseload power generation rather than solar augmentation).

In the more important cases of customer-driven military innovations, military customers are used to making investment decisions based on interests other than the pure profit motive. Defense acquisition requirements derive from leaders’ military judgment about the strategic situation, and the military gets the funding for needed research, development, and procurement from political leaders rather than profit-hungry investors. This process, along with the military’s relatively large purse as compared to even the biggest commercial customers, is precisely what attracts the interest of advocates of defense-led energy innovation: Because of the familiar externalities and collective action problems in the energy system, potential energy innovations often do not promise a rate of return sufficient to justify the financial risk of private R&D spending, but the people who make defense investments do not usually calculate financial rates of return anyway.

A few examples demonstrate the importance of customer preferences in military innovation. When the Navy first started its Fleet Ballistic Missile program, its Special Projects Office had concepts to give the Navy a role in the nuclear deterrence mission but not much money initially to develop and build the Polaris missiles. Lockheed understood that responsiveness was a key trait in the defense industry, so the company used its own funds initially to support development to the customer’s specifications. As a result, Lockheed won a franchise for the Navy’s strategic systems that continues today in Sunnyvale, California, more than 50 years later.

In contrast, at roughly the same time as Lockheed’s decision to emphasize responsiveness, the Curtiss-Wright Corporation, then a huge military aircraft company, attempted to use political channels and promises of great performance to sell its preferred jet engine design. However, Air Force buyers preferred the products of companies that followed the customer’s lead, and Curtiss-Wright fell from the ranks of leading contractors even in a time of robust defense spending. Today, after great effort and years in the wilderness, the company has rebuilt to the stature of a mid-tier defense supplier with a name recognized by most (but not all) defense industry insiders.

When it is time to make hard choices, DOD leadership will emphasize performance parameters directly related to the military’s critical warfighting tasks, as essentially everyone agrees it should.

Jamey Stillings #9712, 21 March 2013. Fine art archival print. Aerial view of installed heliostats.

The contrasting experiences of Lockheed and Curtiss-Wright show the crucial importance of following the customer’s lead in the U.S. defense market. Entrepreneurs can bring what they think are great ideas to the DOD, including ideas for great new energy technologies, but the department tends to put its money where it wants to, based on its own military judgment.

Although the U.S. military can be a difficult customer if the acquisition executives lose faith in a supplier’s responsiveness, the military can also be a forgiving customer if firms’ good-faith efforts do not yield products that live up to all of the initial hype, at least for programs that are important to the Services’ core missions. A technology occasionally underperforms to such an extent that a program is cancelled (for example, the ill-fated Sergeant York self-propelled antiaircraft gun of the 1980s), but in many cases, the military accepts equipment that does not meet its contractual performance specifications. The Services then either nurture the technology through years of improvements and upgrades or discover that the system is actually terrific despite failing to meet the “required” specs. The B-52 bomber is perhaps the paradigm case: It did not meet its key performance specifications for range, speed, or payload, but it turned out to be such a successful aircraft that it is still in use more than 50 years after its introduction and is expected to stay in the force for decades to come. The Army similarly stuck with the Bradley Infantry Fighting Vehicle through a difficult development history. Trying hard and staying friendly with the customer is the way to succeed as a defense supplier, and because the military is committed to seeking technological solutions to strategic problems, major defense contractors have many opportunities to innovate.

This pattern stands in marked contrast to private and municipal government investment in energy infrastructure, where underperformance in the short term can sour investors on an idea for decades. The investors may complete the pilot project, because municipal governments are not good at cutting their losses after the first phase of costs is sunk (though corporations may be more ruthless, for example in GM’s telling of the story of the EV1 electric car). But almost no one else wants to risk repeating the experience, even if project managers can make a reasonable case that the follow-on project would perform better as a result of learning from the first effort.

And it’s the government—so politicians play a role

Of course, military desire for a new technology is not sufficient by itself to get a program funded in the United States. Strong political support from key legislators has also been a prerequisite for technological innovation. Over the years, the military and the defense contractors have learned to combine performance specifications with political logic. The best way to attract political support is to promise heroic feats of technological progress, because the new system should substantially outperform the equipment in the current American arsenal, even if that previous generation of equipment was only recently purchased at great expense. The political logic simply compounds the military’s tendency for technological optimism, creating tremendous technology pull.

In fact, Congress would not spend our tax dollars on the military without some political payoff, because national security poses a classic collective action problem. All citizens benefit from spending on national defense whether they help pay the cost or not, so the government spends tax dollars rather than inviting people to voluntarily contribute. But taxes are not popular, and raising money to provide public goods is a poor choice for a politician unless he can find a specific political benefit from the spending in addition to furthering the diffuse general interest.

Military innovations’ political appeal prevents the United States from underinvesting in technological opportunities. Sometimes that appeal comes from ideology, such as the “religion” that supports missile defense. Sometimes the appeal comes from an idiosyncratic vision: for example, a few politicians like Sen. John Warner contributed to keeping unmanned aerial vehicle (UAV) programs alive before 9/11, before the War on Terror made drone strikes popular. And sometimes the appeal comes from the ability to feed defense dollars to companies in a legislator’s district. In the UAV case, Rep. Norm Dicks, who had many Boeing employees in his Washington State district, led political efforts to continue funding UAV programs after the end of the Cold War.

Jamey Stillings #7626, 4 June 2012. Fine art archival print. Workers install a heliostat on a pylon. Background shows assembled heliostats in “safe” or horizontal mode. Mirrors reflect the nearby mountains.

This need for political appeal presents a major challenge to advocates of defense-led energy innovation, because the political consensus for energy innovation is much weaker than the one for military innovation. Some prominent political leaders, notably Sen. John McCain, have very publicly questioned whether it is appropriate for the DOD to pay attention to energy innovation, which they view as a distraction from the DOD’s primary interest in improved warfighting performance. McCain wrote a letter to the Secretary of the Navy, Ray Mabus, in July 2012, criticizing the Navy’s biofuels initiative by pointedly reminding Secretary Mabus, “You are the Secretary of the Navy, not the Secretary of Energy.” Moreover, although almost all Americans agree that the extreme performance of innovative weapons systems is a good thing (Americans expect to fight with the very best equipment), government support for energy innovation, especially energy innovation intended to reduce greenhouse gas emissions, faces strong political headwinds. In some quarters, ideological opposition to policies intended to reduce climate change is as strong as the historically important ideological support for military investment in areas like missile defense.

Jamey Stillings #10995, 4 September 2013. Fine art archival print. Solar flux testing, Solar Field 1.

The defense industry also provides a key link in assembling the political support for military innovation that may be hard to replicate for defense-led energy innovation. The prime contractors take charge of directly organizing district-level political support for the defense acquisition budget. To be funded, a major defense acquisition project needs to fit into a contractor-led political strategy. The prime contractors, as part of their standard responsiveness to their military customers, almost instantly develop a new set of briefing slides to tout how their products will play an essential role in executing whatever new strategic concept or buzzword comes from the Pentagon. And their lobbyists will make sure that all of the right congressional members and staffers see those slides. But those trusted relationships are built on understanding defense technology and on connections to politicians interested in defense rather than in energy. There may be limits to the defense lobbyists’ ability to redeploy as supporters of energy innovation.

Jamey Stillings #7738, 4 June 2012. Fine art archival print. View of construction of the dry cooling system of Solar Field 1.

Other unusual features of the defense market reinforce the especially strong and insular relationship between military customers and established suppliers. Their relationship is freighted with strategic jargon and security classification. Military suppliers are able to translate the language in which the military describes its vision of future combat into technical requirements for systems engineering, and the military trusts them to temper optimistic hopes with technological realism without undercutting the military’s key objectives. Military leaders feel relatively comfortable informally discussing their half-baked ideas about the future of warfare with established firms, ideas that can flower into viable innovations as the military officers go back and forth with company technologists and financial officers. That iterative process has given the U.S. military the best equipment in the world in the past, but it tends to limit the pool of companies to the usual prime contractors: Lockheed Martin, Boeing, Northrop Grumman, Raytheon, General Dynamics, and BAE Systems. Those companies’ core competency is in dealing with the unique features of the military customer.

Jargon and trust are not the only important features of that customer-supplier relationship. Acquisition regulations also specify high levels of domestic content in defense products, regardless of the cost; that a certain fraction of each product will be built by small businesses and minority- and women-owned companies, regardless of their ability to win subcontracts in fair and open competition; and that defense contractors will comply with an extremely intrusive and costly set of audit procedures to address the threat of perceived or very occasionally real malfeasance. These features of the defense market cannot be wished away by reformers intent on reducing costs: Each part of the acquisition system has its defenders, who think that the social goal or protection from scandal is worth the cost. The defense market differs from the broader commercial market in the United States on purpose, not by chance. Majorities believe that the differences exist for good reasons.

The implication is that the military has to work with companies that are comfortable with the terms and conditions of working for the government. That constraint limits the pool of potential defense-led energy innovators. It would also hamper the ability to transfer any defense-led energy innovations to the commercial market, because successful military innovations have special design features and extra costs built into their value chain.

In addition to their core competency in understanding the military customer, defense firms, like most other companies, also have technological core competencies. In the 1990s and 2000s, it was fashionable in some circles to call the prime contractors’ core competency “systems integration,” as if that task could be performed entirely independently from a particular domain of technological expertise. In one of the more extreme examples, Raytheon won the contract as systems integrator for the LPD-17 class of amphibious ships, despite its lack of experience as a shipbuilder. Although Raytheon had for years led programs to develop highly sophisticated shipboard electronics systems, the company’s efforts to lead the team building the entire ship contributed to an extremely troubled program. In this example, company and customer both got carried away with their technological optimism and their emphasis on contractor responsiveness. In reality, the customer-supplier relationship works best when it calls for the company to develop innovative products that follow an established trajectory of technological performance, where the supplier has experience and core technical capability. Defense companies are likely to struggle if they try to contribute to technological trajectories related to energy efficiency or reduced greenhouse gas emissions, trajectories that have not previously been important in defense acquisition.

Jamey Stillings #11060, 4 September 2013. Fine art archival print. View north of Solar Fields 2 and 3.

That is not to say that the military cannot introduce new technological trajectories into its acquisition plans. In fact, the military’s emphasis on its technological edge has explicitly called for disruptive innovation from time to time, and the defense industry has responded. For example, the electronics revolution involved huge changes in technology, shifting from mechanical to electrical devices and from analog to digital logic, requiring support from companies with very different technical core competencies. Startup companies defined by their intellectual property, though, had little insight into (or desire to figure out) the complex world of defense contracting—the military jargon, the trusted relationships, the bureaucratic red tape, and the political byways—so they partnered with established prime contractors. Disruptive innovators became subcontractors, formed joint ventures, or sold themselves to the primes. The trick is for established primes to serve as interfaces and brokers, linking the military’s demand pull with entrepreneurial companies that have the skills and processes for the new performance metrics. Recently, some traditional aerospace prime contractors, led by Boeing and Northrop Grumman, have used this approach to compete in the market for unmanned aerial vehicles. Perhaps they could do the same in the area of energy innovation.

What the military customer wants

Given the pattern of customer-driven innovation in defense, the task confronting advocates of defense-driven energy innovation seems relatively simple: Inject energy concerns into the military requirements process. If they succeed, then the military innovation route might directly address key barriers that hamper the normal commercial process of developing energy technologies. With the military’s interest, energy innovations might find markets that promise a high enough rate of return to justify the investment, and energy companies might convince financiers to stick with projects through many lean years and false starts before they reach technological maturity, commercial acceptance, and sufficient scale to earn profits.

The first step is to understand the customers’ priorities. From the perspective of firms that actually develop and sell new defense technologies, potential customers include the military services with their various components, each with a somewhat different level of interest in energy innovation.

Military organizations decide the emphasis in the acquisition budget. They make the case, ideally based on military professional judgment, for the kinds of equipment the military needs most. They also determine the systems’ more detailed requirements, such as the speed needed by a front-line fighter aircraft and the type(s) of fuel that aircraft should use. They may, of course, turn out to be wrong: Strategic threats may suddenly change, some technological advantages may not have the operational benefits that military leaders expected, or other problems could emerge in their forecasts or judgments. Nevertheless, these judgments are extremely influential in defining acquisition requirements. Admitting uncertainty about requirements often delays a program: Projects that address a “known” strategic need get higher priority from military leaders and justify congressional spending more easily.

Not surprisingly, military buyers almost always want a lot of things. When they set the initial requirements, before the budget and technological constraints of actual program execution, the list of specifications can grow very long. Even though the process in principle recognizes the need for tradeoffs, there is little to force hard choices early in the development of a new military technology. Adding an energy-related requirement would not dramatically change the length of the list. But when the real spending starts and programs come up for evaluation milestones, the Services inevitably need to drop some of the features that they genuinely desired. Relevance to the organizations’ critical tasks ultimately determines the emphasis placed on different performance standards during those difficult decisions. Even performance parameters that formally cannot be waived, like those specified in statute, may face informal pressure for weak enforcement. Programs can sometimes get a “Gentleman’s C” that allows them to proceed, subordinating a goal that the buying organization thinks is less important.

Energy technology policy advocates looking for a wealthy investor to transform the global economy probably ask too much of the DOD.

For example, concerns about affordability and interoperability with allies’ systems have traditionally received much more rhetorical emphasis early in programs’ lives than actual emphasis in program execution. When faced with the question of whether to put the marginal dollar into making the F-22 stealthy and fast or into giving the F-22 extensive capability to communicate, especially with allies, the program office not surprisingly emphasized the former key performance parameters rather than the latter nice feature.

Given that military leaders naturally emphasize performance that responds directly to strategic threats, and that they are simultaneously being encouraged by budget austerity to raise the relative importance of affordability in defense acquisition decisions, energy performance seems more likely to end up like interoperability than like stealth in the coming tradeoff deliberations. In a few cases, the energy-related improvements will directly improve combat performance or affordability, too, but those true “win-win” solutions are not very common. If they were, there would be no appeals for special priority for energy innovation.

The recent case of the ADVENT jet engine program shows the difficulty. As the military begins procurement of the F-35 fighter for the Air Force, Navy, and Marine Corps as well as for international sales, everyone agrees that having two options for the engine would be nice. If Pratt & Whitney’s F135 engine runs into unexpected production or operational problems, a second engine would be available as a backup, and competition between the two engines would presumably help control costs and might stimulate further product improvement. However, the military decided that the fixed cost of paying GE to develop and manufacture a second engine would be too high to be justified even for a market as enormous as the F-35 program. The unofficial political compromise was to start a public-private partnership with GE and Rolls-Royce called ADVENT, which would develop the next generation of fighter engine that might compete to get onto F-35 deliveries after 2020. ADVENT’s headline target for performance improvement is a 25% reduction in specific fuel consumption, which would reduce operating costs and, more important, would increase the F-35’s range and its ability to loiter over targets, directly contributing to its warfighting capabilities, especially in the Pacific theater, where distances between bases and potential targets are long. Although this increase in capability seems particularly sensible, given the announced U.S. strategy of “rebalancing” our military toward Asia, the Air Force has struggled to come up with its share of funding for the public-private partnership and has hesitated to prepare for a post-2020 competition between the new engine and the now-established F135. The Air Force may have enough to worry about trying to get the first engine through test and evaluation, and paying the fixed costs of a future competitor still seems like a luxury in a time of budget constraint. Countless potential energy innovations have much weaker strategic logic than the ADVENT engine, and if ADVENT has trouble finding a receptive buyer, the others are likely to have much more trouble.
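To see why a 25% cut in specific fuel consumption matters so much for range and loiter, consider a back-of-the-envelope sketch based on the standard Breguet range equation; the figures here are illustrative assumptions, not program data. Holding cruise speed, aerodynamics, and fuel fraction constant, range scales inversely with specific fuel consumption:

R = \frac{V}{c}\,\frac{L}{D}\,\ln\frac{W_{\mathrm{initial}}}{W_{\mathrm{final}}},
\qquad
\frac{R_{\mathrm{new}}}{R_{\mathrm{old}}} = \frac{c_{\mathrm{old}}}{c_{\mathrm{new}}} = \frac{1}{1-0.25} \approx 1.33

On those assumptions, a 25% reduction in fuel burned per unit of thrust translates into roughly a third more range, or the same mission flown on about a quarter less fuel, which is why the case for ADVENT is framed in warfighting terms as much as in energy terms.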

Of course, military culture also offers some hopeful points for the energy innovation agenda. For example, even if energy innovation adds complexity to military logistics in managing a mix of biofuels, or generating and storing distributed power rather than using standardized large-capacity diesel generators, the military is actually good at dealing with complexity. The Army has always moved tons of consumables and countless spare parts to the front to feed a vast organization of many different communities (infantry, armor, artillery, aviation, etc.). The Navy’s power projection capability is built on a combination of careful planning of what ships need to take with them, flexible purchasing overseas, and underway replenishment. The old saw that the Army would rather plan than fight may be an exaggeration, but it holds more than a grain of truth, because the Army is genuinely good at planning. More than most organizations, the U.S. military is well prepared to deal with the complexity that energy innovation and field experimentation will inject into its routines. Even if the logistics system seems Byzantine and inefficient, the military’s organizational culture does not have antibodies against the complexity that energy innovation might bring.

Jamey Stillings #11590, 5 September 2013. Fine art archival print. Solar flux testing, Solar Field 3.

Who will support military-led innovation?

The potential for linking energy innovation to the DOD’s core mission seems especially important and exciting right now, because of the recent experience at war, and even more than that, because the recent wars happen to have involved a type of fighting with troops deployed to isolated outposts far from their home bases, in an extreme geography that stressed the logistics system. But as the U.S. effort in Afghanistan draws down, energy consumption in operations will account for a smaller share of total energy consumption, meaning that operational energy innovations will have less effect on energy security. More important, operational energy innovations will be of less interest to the military customers, who according to the 2012 Strategic Guidance are not planning for a repeat of such an extreme situation as the war in Afghanistan. Even if reality belies their expectations (after all, they did not expect to deploy to Afghanistan in 2001, either), acquisition investments follow the ex ante plans, not the ex post reality.

Specific military organizations that have an interest in preparing to fight with a light footprint in austere conditions may well continue the operational energy emphasis of the past few years. The good news for advocates of military demand pull for energy innovation is that special operations forces are viewed as the heroes of the recent wars, making them politically popular. They also have their own budget lines that are less likely to be swallowed by more prosaic needs such as paying for infrastructure at a time of declining defense budgets. While the conventional military’s attention moves to preparation against a rising near-peer competitor in China (a possible future, if not the only one, for U.S. strategic planning), special operations may still want lightweight, powerful batteries and solar panels to bring power far off the grid. Even if a lot of special operations procurement buys custom jobs for highly unusual missions, the underlying research to make special operations equipment may also contribute to wider commercial uses such as electric cars and distributed electricity generation, if not to other challenges like infrastructure-scale energy storage and grid integration of small-scale generators.

Jamey Stillings #9395, 21 March 2013. Fine art archival print. Sunrise, view to the southeast of Solar Fields 3, 2, and 1.

Working with industry for defense-led energy innovation requires treading a fine line. Advocates need to understand the critical tasks facing specific military organizations, meaning that they have to live in the world of military jargon, strategic thinking, and budget politics. At the same time, the advocates need to be able to reach nontraditional suppliers who have no interest in military culture but are developing technologies that follow performance trajectories totally different from those of the established military systems. More likely, it will not be the advocates who will develop the knowledge to bridge the two groups, their understandings of their critical tasks, and the ways they communicate and contract. It will be the DOD’s prime contractors, if their military customers want them to respond to a demand for energy innovation.

Defense really does need some new energy technologies, ranging from fuel-efficient jet engines to easily rechargeable lightweight batteries, and the DOD is likely to find some money for particular technologies. Those technologies may also make a difference for the broader energy economy. But energy technology policy advocates looking for a wealthy investor to transform the global economy probably ask too much of the DOD. Military innovations that turn out to have huge commercial implications—innovations such as the Internet and the Global Positioning System—do not come along very often, and it takes decades before their civilian relatives are well understood and widely available. The military develops these products because of its own internal needs, driven by military judgment, congressional budget politics, and the core competencies of defense-oriented industry.

In a 2014 report, the Pew Project on National Security, Energy and Climate Change blithely discussed the need to “chang[e] the [military] culture surrounding how energy is generated and used….” Trying to change the way the military works drives into the teeth of military and political resistance to defense-led energy innovation. Changing the culture might also undermine the DOD’s ability to innovate; after all, one of the key reasons why Pew and others are interested in using the defense acquisition apparatus for energy innovation is that mission-focused technology development at the DOD has been so successful in the past. Better to focus defense-led energy innovation efforts on projects that actually align with military missions rather than stretching the boundaries of the concept and weakening the overall effort.

Recommended reading

Thomas P. Erhard, Air Force UAVs: The Secret History (Arlington, VA: Mitchell Institute for Airpower Studies, July 2010).

Eugene Gholz, “Eisenhower versus the Spinoff Story: Did the Rise of the Military-Industrial Complex Hurt or Help America’s Commercial Competitiveness?” Enterprise and Society 12, no. 1 (March 2011).

Dwight R. Lee, “Public Goods, Politics, and Two Cheers for the Military-Industrial Complex,” in Robert Higgs, ed., Arms, Politics, and the Economy: Historical and Contemporary Perspectives (New York, NY: Holmes & Meier, 1990), pp. 22–36.

Thomas L. McNaugher, New Weapons, Old Politics: America’s Military Procurement Muddle (Washington, DC: Brookings Institution, 1989).

David C. Mowery, “Defense-related R&D as a model for ‘Grand Challenges’ technology policies,” Research Policy 41, no. 10 (December 2012).

Report of the Defense Science Board Task Force on DoD Energy Strategy: “More Fight–Less Fuel” (Washington, DC: Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics, February 2008).

Harvey M. Sapolsky, Eugene Gholz, and Caitlin Talmadge, US Defense Politics: The Origins of Security Policy (London, UK: Routledge, Revised and Expanded 2nd edition, 2013).

Eugene Gholz (egholz@alum.mit.edu) is an associate professor at the LBJ School of Public Affairs of The University of Texas at Austin.

No Time for Pessimism about Electric Cars

JOHN D. GRAHAM
JOSHUA CISNEY
SANYA CARLEY
JOHN RUPP

The national push to adopt electric cars should be sustained until at least 2017, when a review of federal auto policies is scheduled.

A distinctive feature of U.S. energy and environmental policy is a strong push to commercialize electric vehicles (EVs). The push began in the 1990s with California’s Zero Emission Vehicle (ZEV) program, but in 2008 Congress took the push nationwide through the creation of a $7,500 consumer tax credit for qualified EVs. In 2009 the Obama administration transformed a presidential campaign pledge into an official national goal: putting one million plug-in electric vehicles on the road by 2015.

A variety of efforts has promoted commercialization of EVs. Through a joint rulemaking, the Department of Transportation and the Environmental Protection Agency are compelling automakers to surpass a fleet-wide average of 54 miles per gallon for new passenger cars and light trucks by model year 2025. Individual manufacturers, which are considered unlikely to meet the standards without EV offerings, are allowed to count each qualified EV as two vehicles instead of one in near-term compliance calculations.

The U.S. Department of Energy (DOE) is actively funding research, development, and demonstration programs to improve EV-related systems. Loan guarantees and grants are also being used to support the production of battery packs, electric drive-trains, chargers, and the start-up of new plug-in vehicle assembly plants. The absence of a viable business model has slowed the growth of recharging infrastructure, but governments and companies are subsidizing a growing number of public recharging stations in key urban locations and along some major interstate highways. Some states and cities have gone further by offering EV owners additional cash incentives, HOV-lane access, and low-cost city parking.

Private industry has responded to the national EV agenda. Automakers are offering a growing number of plug-in EV models (three in 2010; seventeen in 2014), some that are fueled entirely by electricity (battery electric vehicles, or BEVs) and others that are fueled partly by electricity and partly by a back-up gasoline engine (plug-in hybrids, or PHEVs). Coalitions of automakers, car dealers, electric utilities, and local governments are working together in some cities to make it easy for consumers to purchase or lease an EV, to access recharging infrastructure at home, at the office, or in their community, and to obtain proper service of their vehicle when problems occur. Government and corporate fleet purchasers are considering EVs, while cities as diverse as Indianapolis and San Diego are looking into EV sharing programs for daily vehicle use. Among city planners and utilities, EVs are now seen as playing a central role in “smart” transportation and grid systems.

The recent push for EVs is hardly the market-oriented approach to innovation that would have thrilled Milton Friedman. It somewhat resembles the bold industrial policies that achieved significant successes in post-World War II South Korea, Japan, and China. Although the U.S. is a market-oriented economy, it is difficult to imagine that the U.S. successes in aerospace, information technology, nuclear power, or even shale gas would have occurred without a supportive hand from government. In this article, we make a pragmatic case for stability in federal EV policies until 2017, when a large body of real-world experience will have been generated and when a midterm review of federal auto policies is scheduled.

Laurence Gartel and Tesla Motors

Digital artist Laurence Gartel collaborated with Tesla Motors to transform the electric Tesla Roadster into a work of art by wrapping the car’s body panels in bold colorful vinyl designed by the artist. The Roadster was displayed and toured around Miami Beach during Miami’s annual Art Basel festival in 2010.

Gartel, an artist who has experimented with digital art since the 1970s, was a logical collaborator with Tesla given his creative uses of technology. He graduated from the School of Visual Arts, New York, in 1977, and has pursued a graphic style of digital art ever since. His experiments with computers, starting in 1975, involved the use of some of the earliest special effects synthesizers and early video paint programs. Since then, his work has been exhibited at the Museum of Modern Art; Long Beach Museum of Art; Princeton University Art Museum; MoMA PS 1, New York City; and the Norton Museum of Art, West Palm Beach, Florida. His work is in the collections of the Smithsonian Institution’s National Museum of American History and the Bibliotheque Nationale, Paris.

Image courtesy of the artist.

Governmental interest in EVs

The federal government’s interest in electric transportation technology is rooted in two key advantages that EVs have over the gasoline- or diesel-powered internal combustion engine. Since the advantages are backed by an extensive literature, we summarize them only briefly here.

First, electrification of transport enhances U.S. energy security by replacing dependence on petroleum with a flexible mixture of electricity sources that can be generated within the United States (e.g., natural gas, coal, nuclear power, and renewables). The U.S. is making rapid progress as an oil producer, which enhances security, but electrification can further advance energy security by curbing the nation’s high rate of consumption in the world oil market. The result: less dependence on oil from OPEC producers, unstable regimes in the Middle East, and Russia.

Second, electrification of transport is more sustainable on a life-cycle basis because it causes a net reduction in local air pollution and greenhouse gas emissions, an advantage that is expected to grow over time as the U.S. electricity mix shifts toward more climate-friendly sources such as gas, nuclear, and renewables. Contrary to popular belief, an electric car that is powered by coal-fired electricity is still modestly cleaner from a greenhouse gas perspective than a typical gasoline-powered car. And EVs are much cleaner if the coal plant is equipped with modern pollution controls for localized pollutants and if carbon capture and storage (CCS) technology is applied to reduce carbon dioxide emissions. Since the EPA is already moving to require CCS and other environmental controls on coal-fired power plants, the environmental case for plug-in vehicles will only become stronger over time.
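A rough, illustrative calculation shows how the coal-versus-gasoline comparison can come out; the emission factors and efficiencies below are round-number assumptions chosen for this sketch, not figures drawn from the life-cycle studies themselves.

# Illustrative CO2-per-mile sketch; all inputs are assumptions, not cited data
COAL_KG_CO2_PER_KWH = 1.0         # assumed emissions at the coal plant, per kWh generated
GRID_AND_CHARGING_LOSS = 0.15     # assumed combined transmission and charging losses
EV_KWH_PER_MILE = 0.30            # assumed wall-to-wheels use of a midsize EV
GASOLINE_KG_CO2_PER_GALLON = 8.9  # standard combustion factor for a gallon of gasoline
GASOLINE_MPG = 22.0               # assumed real-world average for a typical gasoline car

ev_kg_per_mile = EV_KWH_PER_MILE / (1 - GRID_AND_CHARGING_LOSS) * COAL_KG_CO2_PER_KWH
gas_kg_per_mile = GASOLINE_KG_CO2_PER_GALLON / GASOLINE_MPG

print(f"Coal-powered EV: {ev_kg_per_mile:.2f} kg CO2 per mile")   # roughly 0.35
print(f"Gasoline car:    {gas_kg_per_mile:.2f} kg CO2 per mile")  # roughly 0.40

On these assumptions the coal-powered EV comes out modestly ahead; with natural gas, nuclear, or renewable generation in the mix, the gap widens substantially.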

Although the national push to commercialize EVs is less than six years old, there have been widespread claims in the mainstream press, on drive-time radio, and on the Internet that the EV is a commercial failure. Some prominent commentators, including Charles Krauthammer, have suggested that the governmental push for EVs should be reconsidered.

It is true that many mainstream car buyers are unfamiliar with EVs and are not currently inclined to consider them for their next vehicle purchase. Sales of the impressive (and pricey) Tesla Model S have been better than the industry expected, but ambitious early sales goals for the Nissan Leaf (a BEV) and the Chevrolet Volt (a PHEV) have not been met. General Electric backed off an original pledge to purchase 25,000 EVs. Several companies with commercial stakes in batteries, EVs, or chargers have gone bankrupt, despite assistance from the federal government.

“Early adopters” of plug-in vehicles are generally quite enthusiastic about their experiences, but mainstream car buyers remain hesitant. There is much skepticism in the industry about whether EVs will penetrate the mainstream new-vehicle market or simply serve as “compliance cars” for California regulators or become niche products for taxi and urban delivery fleets.

One of the disadvantages of EVs is that they are currently more costly to produce than comparably sized gasoline- and diesel-powered vehicles. The cost premium today is about $10,000-$15,000 per vehicle, primarily due to the high price of lithium-ion battery packs. The cost disadvantage has been declining over time due to cost-saving innovations in battery-pack design and production techniques, but there is disagreement among experts about how much and how fast production costs will decline in the future.

On the favorable side of the affordability equation, EVs have a large advantage in operating costs: electricity is about 65% cheaper than gasoline on an energy-equivalent basis, and most analysts project that the price of gasoline in the United States will rise more rapidly over time than the price of electricity. Additionally, repair and maintenance costs are projected to be significantly smaller for plug-in vehicles than for gasoline vehicles. When all of the private financial factors are taken into account, the total cost of ownership over the lifetime of the EV is comparable to, or even lower than, that of a gasoline vehicle, and that advantage can be expected to grow as EV technology matures.
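A simple per-mile sketch illustrates how the operating-cost advantage can offset the purchase premium over a vehicle's life; the prices and efficiencies below are assumptions chosen for illustration, not figures taken from the analyses cited above.

# Illustrative operating-cost comparison; all inputs are assumptions
GAS_PRICE_PER_GALLON = 3.50   # assumed
GASOLINE_MPG = 30.0           # assumed comparable gasoline compact
ELECTRICITY_PER_KWH = 0.12    # assumed residential rate
EV_KWH_PER_MILE = 0.30        # assumed
ANNUAL_MILES = 12000

gas_cents_per_mile = 100 * GAS_PRICE_PER_GALLON / GASOLINE_MPG   # about 11.7 cents
ev_cents_per_mile = 100 * ELECTRICITY_PER_KWH * EV_KWH_PER_MILE  # about 3.6 cents
annual_savings = ANNUAL_MILES * (gas_cents_per_mile - ev_cents_per_mile) / 100

print(f"Gasoline: {gas_cents_per_mile:.1f} cents/mile; EV: {ev_cents_per_mile:.1f} cents/mile")
print(f"Fuel savings at {ANNUAL_MILES} miles/year: ${annual_savings:,.0f}")  # roughly $1,000

Roughly $1,000 a year in fuel savings, combined with the $7,500 federal tax credit and lower maintenance costs, is how lifetime ownership costs can converge despite a $10,000-$15,000 production-cost premium.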

Trends in EV sales

Despite the financial, environmental, and security advantages of the EV, early sales have not matched initial hopes. Nissan and General Motors led the high-volume manufacturers with EV offerings but have had difficulty generating sales, even though auto sales in the United States were steadily improving from 2010 through 2013, the period when the first EVs were offered. In 2013 EVs accounted for only about 0.2% of the 16 million new passenger vehicles sold in the U.S.

Nissan-Renault has been a leader. At the 2007 Tokyo Motor Show, Nissan shocked the industry with a plan to leapfrog the gasoline-electric hybrid with a new mass-market BEV, called the Fluence in France and the Leaf in the U.S. Nissan’s business plan called for EV sales of 100,000 per year in the U.S. by 2012, and Nissan was awarded a $1.6 billion loan guarantee by DOE to build a new facility in Smyrna, Tennessee, to produce batteries and assemble EVs. The company had plans to sell 1.5 million EVs on a global basis by 2016 but, as of late 2013, had sold only 120,000 and acknowledged that it would fall short of its 2016 global goal by more than 1 million vehicles.

General Motors was more cautious than Nissan, planning production in the U.S. of 10,000 Volts in 2011 and 60,000 in 2012. However, neither target was met. GM did “re-launch” the Volt in early 2012 after addressing a fire-safety concern, obtaining HOV-lane access in California for Volt owners, cutting the base price, and offering a generous leasing arrangement of $350 per month for 36 months of use. Volt sales rose from 7,700 in 2011 to 23,461 in 2012 and held nearly steady at 23,094 in 2013.

The most recent full-year U.S. sales data (2013) reveal that the Volt is now the top-selling plug-in vehicle in the U.S., followed by the Leaf (22,610), the Tesla Model S (18,000), and the Toyota Prius Plug-In (12,088). In the first six months of 2014, EV sales were up 33% over the same period in 2013, led by the Nissan Leaf and an impressive start from the Ford Fusion Energi PHEV. Although sales at Tesla have slowed a bit, the company has announced plans for a new $5 billion battery plant in the southwestern U.S. to supply batteries for up to 500,000 vehicles per year for worldwide distribution.

President Obama, in 2009 and again in his January 2011 State of the Union address, set the ambitious goal of putting one million plug-in vehicles on the road by 2015. Two years after the address, DOE and the administration dropped the national 2015 goal, recognizing that it was overly ambitious and would take longer to achieve. But does this refinement of a federal goal really prove that EVs are a commercial failure? We argue that it does not, pointing to two primary lines of evidence: a historical comparison of EV sales with conventional hybrid sales; and a cross-country comparison of U.S. EV sales with German EV sales.

Comparison with the conventional hybrid

A conventional hybrid, defined as a gasoline-electric vehicle such as the Toyota Prius, is different from a plug-in vehicle. Hybrids vary in their design, but they generally recharge their batteries during the process of braking (“regenerative braking”) or, if the brakes are not in use during highway cruising, from the power of the gasoline engine. Thus, a conventional hybrid cannot be plugged in for charging and does not draw electricity from the grid.

Cars with hybrid engines are also more expensive to produce than gasoline cars, primarily because they have two propulsion systems. For a comparably sized vehicle, the full hybrid costs $4,000 to $7,500 more to produce than a gasoline version. But the hybrid buyer can expect 30% better fuel economy and lower maintenance and repair costs than with a gasoline-only engine. According to recent life-cycle and cost-benefit analyses, conventional hybrids compare favorably to the current generation of EVs.
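As a hedged illustration of that tradeoff, the simple payback arithmetic looks like this; the fuel price, mileage, and premium are assumptions, with only the 30% fuel-economy gain and the $4,000-$7,500 range taken from the text.

# Illustrative hybrid payback sketch; inputs are assumptions except the 30% gain
HYBRID_PREMIUM = 5000.0        # assumed, within the $4,000-$7,500 range above
BASE_MPG = 25.0                # assumed gasoline version
HYBRID_MPG = BASE_MPG * 1.30   # 30% better fuel economy
GAS_PRICE = 3.50               # assumed dollars per gallon
ANNUAL_MILES = 12000

annual_savings = ANNUAL_MILES * GAS_PRICE * (1 / BASE_MPG - 1 / HYBRID_MPG)
payback_years = HYBRID_PREMIUM / annual_savings

print(f"Annual fuel savings: ${annual_savings:.0f}")  # roughly $390
print(f"Simple payback: {payback_years:.0f} years")   # roughly 13 years

A payback measured in a decade or more at these assumed prices helps explain why such comparisons are sensitive to fuel prices, annual mileage, and maintenance assumptions.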

Toyota is currently the top seller of hybrids, offering 22 models globally that feature the gasoline-electric combination. To date, the Prius has sold over 3 million units worldwide and has recently expanded into an entire family of models. In 2013, U.S. sales of the Prius were 234,228, of which 30% were registered in the State of California, where the Prius was the top-selling vehicle line in both 2012 and 2013.

The success of the Prius did not occur immediately after introduction. Toyota and Honda built on more than a decade of engineering research funded by DOE and industry. Honda was actually the first company to offer a conventional hybrid in the U.S.—the Insight Hybrid in 1999—but Toyota soon followed in 2000 with the more successful Prius. Ford followed with the Escape Hybrid SUV. The experience with conventional hybrids underscores the long lead times in the auto industry, the multiyear process of commercialization, and the conservative nature of the mainstream U.S. car purchaser.

Fifteen years ago, critics of conventional hybrids argued that the fuel savings would not be enough to justify the cost premium of two propulsion systems, that the batteries would deteriorate rapidly and require expensive replacement, that resale values for hybrids would be discounted, that the batteries might overheat and create safety problems, and that hybrids were practical only for small, lightweight cars. “Early adopters” of the Prius, which carried a hefty price premium for a small car, were often wealthy, highly educated buyers who were attracted to the latest technology or wanted to make a pro-environment statement with their purchase. The process of expanding hybrid sales from early adopters to mainstream consumers took many years, and it continues today, fifteen years later.

When the EV and the conventional hybrid are compared according to the pace of market penetration in the United States, the EV appears to be more successful (so far). Figure 1 illustrates this comparison by plotting the cumulative number of vehicles sold—conventional hybrids versus EVs—during the first 43 months after market introduction. At month 25, cumulative EV sales were about double cumulative hybrid sales; at month 40 the ratio was about 2.2. The overall size of the new passenger-vehicle market was roughly equal in the two time periods.
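
For readers who want to see how such a cumulative-ratio comparison is built, a minimal Python sketch follows. The monthly sales series are invented placeholders, not the actual data behind Figure 1; only the method (running totals and their ratio at a given month) is illustrated.

```python
# A minimal, hypothetical sketch: cumulative sales and the EV/hybrid ratio at
# selected months. The monthly figures below are invented placeholders, not
# the data plotted in Figure 1.
from itertools import accumulate

hybrid_monthly = [500 + 40 * m for m in range(43)]  # hypothetical hybrid sales, months 1-43
ev_monthly = [900 + 90 * m for m in range(43)]      # hypothetical plug-in EV sales, months 1-43

hybrid_cum = list(accumulate(hybrid_monthly))       # running totals
ev_cum = list(accumulate(ev_monthly))

for month in (25, 40):
    ratio = ev_cum[month - 1] / hybrid_cum[month - 1]
    print(f"Month {month}: cumulative EV/hybrid ratio = {ratio:.2f}")
```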

When comparing the penetration rates of hybrids and EVs, it is useful to highlight some of the differences in the technologies, policies, and economic environments. The plug-in aspect of the EV calls for a much larger change in the routine behavior of motorists (e.g., nighttime and community charging) than does the conventional hybrid. The early installations of 220-volt home charging stations, which reduce recharging time from 12-18 hours to 3-4 hours, were overly expensive, time-consuming to set up with proper permits, and an irritation to early adopters of EVs. Moreover, the EV owner is more dependent on the decisions of other actors (e.g., whether employers or shopping malls supply charging stations and whether the local utility offers low electricity rates for nighttime charging) than is the hybrid owner.

The success of the conventional hybrid helped EVs get started by creating an identifiable population of potential early adopters that marketers of the EV have exploited. Now, one of the significant predictors of EV ownership is prior ownership of a conventional hybrid. Some of the early EV owners immediately gained access to high-occupancy vehicle (HOV) lanes in California, but Prius owners were not granted HOV lane access until 2004, several years after market introduction. California phased out HOV access for hybrids from 2007 to 2011 and now awards the privilege to qualified EV owners.

From a financial perspective, purchases of the conventional hybrid and the EV were not equally subsidized by the government. EV purchasers were enticed by a $7,500 federal tax credit; the tax deduction—and later credit—for conventional hybrid ownership was much smaller, at less than $3,200. Some states (e.g., California and Colorado) supplemented the $7,500 federal tax credit with $1,000 to $2,500 credits (or rebates) of their own for qualified EVs; few conventional hybrid purchasers were offered a state-level purchase incentive. Nominal fuel prices were around $2 per gallon but rising rapidly in 2000-2003, the period when the hybrid was introduced to the U.S. market; fuel prices were volatile and in the $3-$4 per gallon range from 2010 to 2013, when EVs were initially offered. The roughly $2,000 cost of a Level 2 (220-volt) home recharging station (equipment plus labor for installation) was for several years subsidized by some employers, utilities, government grants, or tax credits. Overall, the financial inducements to purchase an EV from 2010 to 2013 were stronger than those for a conventional hybrid from 2000 to 2003, which may help explain why the take-up of EVs has been faster.
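
As a rough illustration of how those purchase incentives stack up, here is a short Python sketch. The dollar amounts are the rounded figures quoted above, and the state-credit and charger-subsidy lines are assumptions about a best-case buyer rather than averages.

```python
# Hypothetical, best-case tally of purchase incentives, using rounded figures
# from the text; actual amounts varied by state, utility, and employer.
ev_incentives = {
    "federal tax credit": 7_500,
    "state credit or rebate (e.g., CA, CO)": 2_500,   # upper end of the $1,000-$2,500 range
    "home charger subsidy": 2_000,                    # assumes a fully subsidized ~$2,000 Level 2 station
}
hybrid_incentives = {
    "federal tax deduction/credit": 3_200,            # upper bound cited in the text
    "state incentive": 0,                             # few states offered one
}

print("EV buyer, 2010-2013:     $", sum(ev_incentives.values()))
print("Hybrid buyer, 2000-2003: $", sum(hybrid_incentives.values()))
```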

Comparison with Germany

Another way to assess the success of EV sales in the United States since 2010 is to compare them with sales in a country where EV policies differ. Germany is an interesting comparator because it is a prosperous country with a strong “pro-environment” tradition, a large and competitive car industry, and relatively high fuel prices of $6-$8 per gallon due to taxation. Electricity prices are also much higher in Germany than in the U.S. because of an aggressive renewables policy.

Like President Barack Obama, German Chancellor Angela Merkel has set a goal of putting one million plug-in vehicles on the road, but the target date in Germany is 2020 rather than 2015. Germany has also made a large public investment in R&D to enhance battery technology and a more modest investment in community-based demonstrations of EV technology and recharging infrastructure.

On the other hand, Germany has decided against instituting a large consumer tax credit similar to the €10,000 “superbonus” for EVs that is available in France. Small breaks for EV purchasers in Germany are offered on vehicle sales taxes and registration fees. Nothing equivalent to HOV-lane access is offered to German EV users yet. Germany also offers few subsidies for production of batteries and electric drivetrains and no loan guarantees for new plants to assemble EVs.

Since the German car manufacturers are leaders in the diesel engine market, the incentive for German companies to explore radical alternatives to the internal combustion engine may be tempered. Also, German engineers appear to be more confident in the long-term promise of the hydrogen fuel cell than in cars powered by lithium ion battery packs. Even the conventional hybrid engine has been slow to penetrate the German market, though there is some recent interest in diesel-electric hybrid technology. Daimler and Volkswagen have recently begun to offer EVs in small volumes but the advanced EV technology in BMW’s “i” line is particularly impressive.

FIGURE 1. Cumulative U.S. sales of conventional hybrids versus plug-in EVs during each technology's first 43 months on the market.

Another key difference between Germany and the U.S. is that Germany has no regulation similar to California’s Zero Emission Vehicle (ZEV) program. The latest version of the ZEV mandate requires each high-volume manufacturer doing business in California to offer at least 15% of its vehicles as EVs or fuel cell vehicles by 2025. Some other states (including New York), which together account for almost a quarter of the auto market, have joined the ZEV program. The ZEV program is a key driver of EV offerings in the U.S. In fact, some global vehicle manufacturers have stated publicly that, were it not for the ZEV program, they might not be offering plug-in vehicles to consumers. Since the EU’s policies are less generous to EVs, some big global manufacturers are focusing their EV marketing on the West Coast of the U.S. and giving less emphasis to Europe.

Overall, from 2010 to 2013 Germany experienced less than half of the market-share growth in EV sales that occurred in the U.S. The difference is consistent with the view that the policy push in the U.S. has made a difference. The countries in Europe where EVs are spreading rapidly (Norway and the Netherlands) have enacted large financial incentives for consumers coupled with coordinated municipal and utility policies that favor EV purchase and use.

Addressing barriers to adoption of EVs

The EV is not a static technology but a rapidly evolving technological system that links cars with utilities and the electrical grid. Automakers and utilities are addressing many of the barriers to more widespread market diffusion, guided by the reactions of early adopters.

Acquisition cost. The price premium for an EV is declining due to savings in production costs and price competition within the industry. Starting in February 2013, Nissan dropped the base price of the Leaf from $35,200 to $28,800 with only modest decrements to base features (e.g., loss of the telematics system). Ford and General Motors responded by dropping the prices of the Focus Electric and the Volt by $4,000 and $5,000, respectively. Toyota chipped in with a $4,620 price cut on the plug-in version of the Prius (now priced under $30,000), though that model is eligible for only a $2,500 federal tax credit. And industry analysts report that transaction prices for EVs are running even lower than the diminished list prices, in part due to dealer incentives and attractive financing deals.

Dealers now emphasize affordable leasing arrangements, with a majority of EVs in the U.S. acquired under leasing deals. Leasing allays consumer concerns that the batteries may not hold up to wear and tear, that resale values of EVs may plummet after purchase (a legitimate concern), and that the next generation of EVs may be vastly improved compared to current offerings. Leasing deals for under $200 per month are available for the Chevy Spark EV, the Fiat 500e, the Leaf, and Daimler’s Smart ForTwo EV; lease rates for the Honda Fit EV, the Volt, and the Ford Focus EV are between $200 and $300 per month. Some car dealers offer better deals than the nationwide leasing plans provided by vehicle manufacturers.

Driving range. Consumer concerns about limited driving range—80-100 miles for most EVs, though the Tesla achieves 200-300 miles per charge—are being addressed in a variety of ways. PHEVs typically have maximum driving ranges equal to (or better than) those of a comparable gasoline car, and a growing body of evidence suggests that PHEVs may attract more retail customers than BEVs. For consumers interested in BEVs, some dealers are also offering free short-term use of gasoline vehicles for long trips when the BEV has insufficient range. The upscale BMW i3 EV is offered with an optional $3,850 gasoline engine that replenishes the battery as it runs low; the effective driving range of the i3 is thus extended from 80-100 miles to 160-180 miles.

Recharging time. Some consumers believe that the 3-4 hour recharging time with a Level 2 charger is too long. Use of super-fast Level 3 chargers can accomplish an 80% charge in about 30 minutes, although inappropriate use of Level 3 chargers can potentially damage the battery. In the crucial West Coast market, where consumer interest in EVs is the highest, Nissan is subsidizing dealers to make Level 3 chargers available for Leaf owners. BMW is also offering an affordable Level 3 charger. State agencies in California, Oregon, and Washington are expanding the number of Level 2 and Level 3 chargers available along interstate highways, especially Interstate 5, which runs from the Canadian to the Mexican borders.

As of 2013, a total of 6,500 Level 2 and 155 Level 3 charging stations were available to the U.S. public. Some station owners require users to be a member of a paid subscription plan. Tesla has installed 103 proprietary “superchargers” for its Model S that allow drivers to travel across the country or up and down both coasts with only modest recharging times. America’s recharging infrastructure is tiny compared to the 170,000 gasoline stations, but charging opportunities are concentrated in areas where EVs are more prevalent, such as southern California, San Francisco, Seattle, Dallas-Fort Worth, Houston, Phoenix, Chicago, Atlanta, Nashville, Chattanooga, and Knoxville.

Advanced battery and grid systems. R&D efforts to find improved battery systems have intensified. DOE recently set a goal of reducing the costs of battery packs and electric drive systems by 75% by 2022, with an associated 50% reduction in the current size and weight of battery packs. Whether DOE’s goals are realistic is questionable. Toyota’s engineers believe that by 2025 improved solid-state and lithium-air batteries will replace lithium ion batteries for EV applications. If they are right, the result would be a three- to five-fold rise in power at a significantly lower production cost, owing to reduced use of expensive rare-earth materials. Lithium-sulfur batteries may also deliver more miles per charge and better longevity than lithium ion batteries.

Researchers are also exploring demand-side management of the electrical grid with “vehicle-to-grid” (V2G) technology. This innovation could enable electric car owners to make money by storing power in their vehicles for later use by utilities on the grid. It might cost an extra $1,500 to fit a V2G-enabled battery and charging system to a vehicle, but the owner might recoup $3,000 per year from a load-balancing contract with the electric utility. It is costly for utilities to add storage capacity, and the motorist already needs the battery for times when the vehicle is in use, so a V2G contract could allow both parties to make better use of the same battery.
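
The simple payback arithmetic implied by those figures can be sketched in a few lines of Python; the cost and revenue numbers are the illustrative values quoted above, not measured results, and real contracts would add battery-wear and availability considerations.

```python
# Simple payback sketch for V2G hardware, using the illustrative figures from
# the text (assumed values, not measured data).
v2g_hardware_cost = 1_500        # assumed extra cost to fit a V2G-enabled battery and charger
annual_contract_revenue = 3_000  # assumed annual payment from a utility load-balancing contract

payback_years = v2g_hardware_cost / annual_contract_revenue
print(f"Simple payback: {payback_years:.1f} years")  # 0.5 years under these assumptions
```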

Low-price electricity and EV sharing. Utilities and state regulators are also experimenting with innovative charging schemes that will favor EV owners who charge their vehicles at times when electricity demand is low. Mandatory time-of-use pricing has triggered adverse public reactions but utilities are making progress with more modest, incentive-based pricing schemes that favor nighttime and weekend charging. Atlanta is rapidly becoming the EV capital of the southern United States, in part because Georgia’s utilities offer ultra-low electricity prices to EV owners.

A French-based company has launched electric-car sharing programs in Paris and Indianapolis. Modeled after bicycle sharing, the programs allow consumers to rent an e-car for several hours or an entire day if they need a vehicle for multiple short trips in the city. The vehicle can be accessed with a credit card and returned at any of multiple points in the city. The commercial success of EV sharing is not yet demonstrated, but sharing schemes may play an important role in raising public awareness of the advancing technology.

The EV’s competitors

The future of the EV would be easier to forecast if the only competitor were the current version of the gasoline engine. History suggests, however, that unexpected competitors can emerge that change the direction of consumer purchases.

The EV is certainly not a new idea. In the early 1900s, the United States was the largest user of electric cars in the world, and at one point more electric cars than gasoline-powered cars were sold. Steam-powered cars were also among the most popular offerings in that era.

EVs and steam-powered cars lost out to the internal combustion engine for a variety of reasons. Discovery of vast oil supplies made gasoline more affordable. Mass-production techniques championed by Henry Ford dropped the price of a gasoline car more rapidly than the price of an electric car. Public investments in new highways connected cities, increased consumer demand for vehicles with long driving range, and thereby reduced the relative appeal of range-limited electric cars, whose value was highest for short trips inside cities. And car engineers devised more convenient ways to start a gasoline-powered vehicle, which made gasoline cars more appealing to female as well as male drivers. By the 1930s, the electric car had lost its place in the market and did not return for many decades.

Looking to the future, it is apparent that EVs will confront intensified competition in the global automotive market. The vehicles described in Table 1 are simply an illustration of the competitive environment.

Vehicle manufacturers are already marketing cleaner gasoline engines (e.g., Ford’s “EcoBoost” engines with turbochargers and direct-fuel injection) that raise fuel economy significantly at a price premium that is much less than the price premium for a conventional hybrid or EV. Clean diesel-powered cars, which have already captured 50% of the new-car market in Europe, are beginning to penetrate the U.S. market for cars and pick-up trucks. Toyota argues that an unforeseen breakthrough in battery technology will be required to enable a plug-in vehicle to match the improving cost-effectiveness of a conventional hybrid.

Meanwhile, the significant reduction in natural gas prices due to the North American shale-gas revolution is causing some automakers to offer vehicles that can run on compressed natural gas or gasoline. Proponents of biofuels are also exploring alternatives to corn-based ethanol that can meet environmental goals at a lower cost than an EV. Making ethanol from natural gas is one of the options under consideration. And some automakers believe that hydrogen fuel cells are the most attractive long-term solution, as the cost of producing fuel cell vehicles is declining rapidly.

TABLE 1. Illustrative examples of vehicles and propulsion technologies competing with the EV.

As attractive as some of the EV’s competitors may be, it is unlikely that regulators in California and other states will lose interest in EVs. (In theory, the ZEV mandate also gives manufacturers credit for cars with hydrogen fuel cells, but the refueling infrastructure for hydrogen is developing even more slowly than it is for EVs.) A coalition of eight states, including California, recently signed a Memorandum of Understanding aimed at putting 3.3 million EVs on the road by 2025. The states, which account for 23% of the national passenger-vehicle market, have agreed to extend California’s ZEV mandate, hopefully in ways that will allow for compliance flexibility as to exactly where EVs are sold.

ZEV requirements do not necessarily reduce pollution or oil consumption in the near term, since they are not coordinated with national mileage and pollution caps. Thus, selling more ZEVs in California and other ZEV states frees automakers to sell more fuel-inefficient and polluting vehicles in non-ZEV states. Without better coordination between state and federal policies, the laudable goals of the ZEV mandate could be frustrated.

All things considered, America’s push toward transport electrification is off to a modestly successful start, even though some of the early goals for market penetration were overly ambitious. Automakers were certainly losing money on their early EV models, but that was true of conventional hybrids as well. The second generation of EVs now arriving in showrooms is likely to be more attractive to consumers, since the vehicles have been refined based on the experiences of early adopters. And as more recharging infrastructure is added, cautious consumers with “range anxiety” may become more willing to consider a BEV, or at least a PHEV.

Vehicle manufacturers and dealers are also beginning to focus on how to market the unique performance characteristics of an EV. Instead of touting primarily fuel savings or environmental virtue, marketers have started to echo a common sentiment of early adopters: EVs are enjoyable to drive because their relatively high torque and quiet yet powerful acceleration make for a unique driving experience.

Now is not the right time to redo national EV policies. EVs and their charging infrastructure have not been available long enough to draw definitive conclusions. Vehicle manufacturers, suppliers, utilities, and local governments have made large EV investments with an understanding that federal auto-related policies will be stable until 2017, when a national mid-term policy review is scheduled.

It is not too early to frame some of the key issues that will need to be considered between now and 2017. First, are adequate public R&D investments being made in the behavioral as well as technological aspects of transport electrification? We believe that DOE needs to reaffirm its commitment to better battery technology while giving more priority to understanding the behavioral obstacles to all forms of green vehicles. Second, we question whether national policy should continue its primary focus on EVs. It may be advisable to stimulate a more diverse array of green vehicle technologies, including cars fueled by natural gas, hydrogen, advanced ethanol, and clean diesel fuel. Third, federal mileage and carbon standards may need to be refined to ensure cost-effectiveness and to provide a level playing field for the different propulsion systems. Fourth, highway-funding schemes need to shift from gasoline taxes to mileage-based road user fees in order to ensure that adequate funds are raised for road repairs and that owners of green vehicles pay their fair share. Fifth, California’s policies need to be better coordinated with federal policies in ways that accomplish environmental and security objectives and allow vehicle manufacturers some sensible compliance flexibility. Finally, on an international basis, policy makers in the European Union, Japan, Korea, China, California, and the United States should work together to achieve greater regulatory cooperation in this field, since manufacturers of batteries, chargers, and vehicles are moving toward global platforms that can efficiently provide affordable technology to consumers around the world.

Coming to a policy consensus in 2017 will not be easy. In light of the fast pace of change and the many unresolved issues, we recommend that universities and think tanks begin to sponsor conferences, workshops, and white papers on these and related policy issues, with the goal of analyzing the available information to create well-grounded recommendations for action come 2017.

John D. Graham (grahamjd@indiana.edu) is dean, Joshua Cisney is a graduate student, Sanya Carley is an associate professor, and John Rupp is a senior research scientist at the School of Public and Environmental Affairs at Indiana University.

Streamlining the Visa and Immigration Systems for Scientists and Engineers

ALBERT H. TEICH

Current visa policies and regulations pose hurdles for the nation’s scientific and education enterprise. This set of proposals may offer an effective, achievable, and secure way forward.

Alena Shkumatava leads a research group at the Curie Institute in Paris studying how an unusual class of genetic material called noncoding RNA affects embryonic development, using zebrafish as a model system. She began this promising line of research as a postdoctoral fellow at the Massachusetts Institute of Technology’s Whitehead Institute. She might still be pursuing it there or at another institution in the United States had it not been for her desire to visit her family in Belarus in late 2008. What should have been a short and routine trip “turned into a three-month nightmare of bureaucratic snafus, lost documents and frustrating encounters with embassy employees,” she told the New York Times. Discouraged by the difficulties she encountered in leaving and reentering the United States, she left MIT at the end of her appointment to take a position at the Curie Institute.

Shkumatava’s experience, along with numerous variations, has become increasingly familiar—and troublesome for the nation. For the past 60 years, the United States has been a magnet for top science and engineering talent from every corner of the world. The contributions of hundreds of thousands of international students and immigrants have helped the country build a uniquely powerful, productive, and creative science and technology enterprise that leads the world in many fields and is responsible for much of the growth of the U.S. economy and the creation of millions of high-value jobs. A few statistics suggest just how important foreign-born talent is to U.S. science and technology:

  • More than 30% of all Nobel laureates who have won their prizes while working in the United States were foreign-born.
  • Between 1995 and 2005, a quarter of all U.S. high-tech startups included an immigrant among their founders.
  • Roughly 40% of Fortune 500 firms—Google, Intel, Yahoo, eBay, and Apple, among them—were started by immigrants or their children.
  • At the 10 U.S. universities that have produced the most patents, more than three out of every four of those patents involved at least one foreign-born inventor.
  • More than five out of six patents in information technology (IT) in the United States in 2010 listed a foreign national among the inventors.

But the world is changing. The United States today is in a worldwide competition for the best scientific and engineering talent. Countries that were minor players in science and technology a few years ago are rapidly entering the major leagues and actively pursuing scientific and technical talent in the global marketplace. The advent of rapid and inexpensive global communication, and of air travel within easy reach of researchers in many countries, has fostered the growth of global networks of collaboration and is changing the way research is done. The U.S. visa and immigration systems need to change, too. Regulations and procedures have failed to keep pace with today’s increasingly globalized science and technology. Rather than facilitating international commerce in talent and ideas, they too often inhibit it, discouraging talented scientific visitors, students, and potential immigrants from coming to and remaining in the United States. They cost the nation the goodwill of friends and allies and the competitive advantage it could gain from their participation in the U.S. research system and from increased international collaboration in cutting-edge research efforts.

It is easy to blame the problems that foreign scientists, engineers, and STEM (science, technology, engineering, and mathematics) students encounter in navigating the U.S. visa and immigration system on the more intense scrutiny imposed on visitors and immigrants in the aftermath of 9/11. Indeed, there is no question that the reaction to the attacks of 9/11 caused serious problems for foreign students and scientific visitors and major disruptions to many universities and other scientific institutions. But many of the security-related issues have been remedied in the past several years. Yet hurdles remain, derived from a more fundamental structural mismatch between current visa and immigration policies and procedures and today’s global patterns of science and engineering education, research, and collaboration. If the United States is going to fix the visa and immigration system for scientists, engineers, and STEM students, it must address these underlying issues as well as those left over from the enhanced security regime of the post-9/11 era.

Many elements of the system need attention. Some of them involve visa categories developed years ago that do not apply easily to today’s researchers. Others derive from obsolescent immigration policies aimed at determining the true intent of foreigners seeking to enter the United States. Still others are tied to concerns about security and terrorism, both pre- and post-9/11. And many arise from the pace at which bureaucracies and legislative bodies adapt to changing circumstances. Here I offer a set of proposals to address these issues. Implementing some of the proposals would necessitate legislative action. Others could be implemented administratively. Most would not require additional resources. All are achievable without compromising U.S. security. Major components of these proposals include:

Simplify complex J-1 exchange visitor visa regulations and remove impediments to bona fide exchange. The J-1 visa is the most widely used type for visitors coming temporarily to the United States to conduct research or teach at U.S. institutions. Their stays may be as brief as a few weeks or as long as five years. The regulations governing the J-1 visa and its various subcategories, however, are complex and often pose significant problems for universities, research laboratories, and the scientific community, as illustrated by the following examples.

A young German researcher, having earned a Ph.D. in civil and environmental engineering in his home country, accepted an invitation to spend 17 months as a postdoctoral associate in J-1 Research Scholar status at a prestigious U.S. research university. He subsequently returned to Germany. A year later, he applied for and was awarded a two-year fellowship from the German government to further his research. Although he had a U.S. university eager to host him for the postdoctoral fellowship, a stipulation in the J-1 exchange visitor regulations that disallows returns within 24 months prevented the university from bringing him back in the Research Scholar category. There was no other visa for such a stay, and the researcher ultimately took his talent and his fellowship elsewhere.

A tenured professor in an Asian country was granted a nine-month sabbatical, which he spent at a U.S. university, facilitated by a J-1 visa in the Professor category. He subsequently returned to his country of residence, his family, and his position. An outstanding scholar, described by a colleague as a future Nobel laureate, he was appointed a permanent visiting professor at the U.S. university the following year. Because of the J-1 regulations, however, unless he comes for periods of six months or less when he visits, he cannot return on the J-1 exchange visitor visa. And if he does return for six months or less multiple times, he must seek a new J-1 program document, be assigned a new ID number in the Student and Exchange Visitor Information System (SEVIS), pay a $180 SEVIS fee, and seek a new entry visa at a U.S. consulate before each individual visit. The current J-1 regulations also stipulate that he must be entering the United States for a new “purpose” each time, which could pose additional problems.

The J-1 is one of three visa categories used by most STEM students and professional visitors in scientific and engineering fields coming to the United States: F-1 (nonimmigrant student), J-1 (cultural or educational exchange visitor), or H-1B (temporary worker in a specialty occupation). B1/B2 visas (visits for business, including conferences, or pleasure or a combination of the two) are also used in some instances. Each of these categories applies to a broad range of applicants. The F-1 visa, for example, is required not just for STEM students but for full-time university and college students in all fields, elementary and secondary school students, seminary students, and students in a conservatory, as well as in a language school (but not a vocational school). Similarly, the J-1 covers exchange visitors ranging from au pairs, corporate trainees, student “interns,” and camp counselors to physicians and teachers as well as professors and research scholars. Another J-1 category is for college and university students who are financed by the United States or their own governments or who are participating in true “exchange” programs. The J-1 exchange visitor visa for research scholars and professors is, however, entangled in a maze of rules and regulations that impede rather than facilitate exchange.

In 2006, the maximum period of participation for J-1 exchange visitors in the Professor and Researcher categories was raised from three years to five years. That regulatory change was welcomed by the research community, where grant funding for a research project or a foreign fellowship may exceed three years; previously, there was no way to extend the researcher’s J-1 visa to cover the longer period.

However, the new regulations simultaneously instituted new prohibitions on repeat exchange visitor program participation. In particular, the regulations prohibit an exchange visitor student who came to the United States to do research toward a Ph.D. (and any member of his family who accompanied him) from going home and then returning to the United States for postdoctoral training or other teaching or research in the Professor or Research Scholar category until 12 months have passed since the end of the previous J program.

A 24-month bar prohibits a former Professor or Researcher (and any member of her family who accompanied her) from engaging in another program in the Professor or Researcher category until 24 months have passed since the end date of the J-1 program. The exception to the bars is for professors or researchers who are hosted by their J program sponsor in the Short-Term Scholar category. This category has a limit of six months with no possibility of extension. The regulations governing this category indicate that such a visitor cannot participate in another stay as a Short-Term Scholar unless it is for a different purpose than the previous visit.

There are valid reasons for rules and regulations intended to prevent exchange visitors from completing one program and immediately applying for another. In other words, the rules should ensure that exchanges are really exchanges and not just a mechanism for the recruitment of temporary or permanent workers. It appears that the regulation was initially conceived to count J-1 program participation toward the five-year maximum in the aggregate. However, as written, the current regulations have had the effect of imposing the 24-month bar on visitors in the Professor and Researcher categories who have spent any period of participation (one month, seven months, or two years), most far shorter than the five-year maximum. Unless such a visitor is brought in under the Short-Term Scholar category (the category exempt from the bars) for six months or less only, the 24-month bar applies. Similarly, spouses of former J-1 exchange visitors in the Professor or Researcher categories who are also researchers in their own right and have spent any period as a J-2 “dependent” while accompanying a J-1 spouse are also barred from returning to the United States to engage in their own J-1 program as a Professor or Researcher until 24 months have passed. This applies whether or not that person worked while in the United States as a J-2. In addition, spouses subject to the two-year home residency requirement (a different, statutory bar based on a reciprocal agreement between the United States and foreign governments) cannot change to J-1 status inside the United States or seek a future J-1 program on their own.
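
To make the interaction of these bars easier to follow, the small Python sketch below encodes the rules as described above. The category labels and the single months-since-last-program input are simplifying assumptions for illustration; this is not an official eligibility tool, and real cases turn on many additional facts.

```python
# Schematic sketch of the 12- and 24-month bars as described in the text, for a
# prospective return in the Professor or Research Scholar category.
# Category labels and the months_since_last_program input are assumptions.
def subject_to_bar(previous_category: str, months_since_last_program: int) -> bool:
    """Return True if, under the simplified reading above, the described bars
    would block a new J-1 Professor/Research Scholar program."""
    if previous_category == "short_term_scholar":
        return False  # the Short-Term Scholar category is exempt from the bars
    if previous_category == "student_researcher":
        return months_since_last_program < 12   # 12-month bar after a student program
    if previous_category in ("professor", "research_scholar", "j2_dependent"):
        return months_since_last_program < 24   # 24-month bar, regardless of prior stay length
    return False

print(subject_to_bar("research_scholar", 12))  # True: the 24-month bar still applies
```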

U.S. universities are increasingly engaging in longer-term international research projects with dedicated resources from foreign governments, private industry, and international consortia, and are helping to build capacity at foreign universities, innovation centers, and tech hubs around the world. International researchers travel to the United States to consult, conduct research, observe, and teach the next generation of STEM students. The concept of “exchange,” born in the shadow of the Cold War, must be expanded to include the contemporary realities of worldwide collaboration and facilitate rather than inhibit frequent and repeat stays for varying periods.

In practice, this means rationalizing and simplifying J-1 exchange visitor regulations. Although an immigration reform bill developed in the Senate (S.744) makes several changes in the J-1 program that are primarily aimed at reducing abuses by employers who bring in international students for summer jobs, it does not address issues affecting research scholars or professors.

It may be possible, however, to make the needed changes by administrative means. In December 2008, the Department of State released a draft of revised regulations governing the J-1 exchange visitor visa with a request for comment. Included in the draft rule were changes to program administration, insurance requirements, SEVIS reporting requirements, and other proposed modifications. Although many comments were submitted, until recently there did not appear to be any movement on the provisions of most concern to the research community. However, the department is reported to have taken up the issue again, and a new version of the regulations is anticipated. This may prove to be a particularly opportune time to craft a regulatory fix to the impediments inherent in the 12- and 24-month bars.

Reconsider the requirement that STEM students demonstrate intent to return home. Under current immigration law, all persons applying for a U.S. visa are presumed to be intending to immigrate. Section 214(b) of the Immigration and Nationality Act, which has survived unchanged since the act was passed in 1952, states, “Every alien shall be presumed to be an immigrant until he establishes to the satisfaction of the consular officer, at the time of application for admission, that he is entitled to a nonimmigrant status…”

In practice, this provision means that a person being interviewed for a nonimmigrant visa, such as a student (F-1) visa, must persuade the consular officer that he or she does not intend to remain permanently in the United States. Simply stating the intent to return home after completion of one’s educational program is not enough. The applicant must present evidence to support that assertion, generally by showing strong ties to the home country. Such evidence may include connections to family members, a bank account, a job or other steady source of income, or a house or other property. For students, especially those from developing nations, this is often not a straightforward matter, and even though U.S. consular officers are instructed to take a realistic view of these young people’s future plans and ties, many visa applicants fail to meet this subjective standard. It is not surprising, therefore, that the vast majority of visa denials, including student visas, are due to 214(b), because of failure to overcome the presumption of immigrant intent.

The Immigration and Nationality Act was written in an era when foreign students in the United States were relatively rare. In 1954–1955, for example, according to the Institute of International Education, there were about 34,000 foreign students studying in higher education institutions in the United States. In contrast, in 2012–2013 there were more than 819,000 international students in U.S. higher education institutions, nearly two-thirds of them at doctorate-granting universities. In the early post–World War II years, the presence of foreign students was regarded as a form of international cultural exchange. Today, especially in STEM fields, foreign graduate students and postdocs make up a large and increasingly essential element of U.S. higher education. According to recent (2010) data from the National Science Foundation, over 70% of full-time graduate students (master’s and Ph.D.) in electrical engineering and 63% in computer science in U.S. universities are international students. In addition, non-U.S. citizens (not including legal permanent residents) make up a majority of graduate students nationwide in chemical, materials, and mechanical engineering.

In the sense that it prevents prospective immigrants from using student visas as a “back door” for entering the United States (that is, if permanent immigrant status is the main, but unstated, purpose of seeking a student visa), it might be argued that 214(b) is serving its intended purpose. The problem, however, is the dilemma it creates for legitimate students who must demonstrate the intent to return home despite a real and understandable uncertainty about their future plans.

Interestingly, despite the obstacles that the U.S. immigration system poses, many students, especially those who complete a Ph.D. in a STEM field, do manage to remain in the country legally after finishing their degrees. This is possible because employment-based visa categories are often available to them and permanent residence, if they qualify, is also a viable option. The regulations allow F-1 visa holders a 60-day grace period after graduation. In addition, graduating students may receive a one-year extension for what is termed Optional Practical Training (OPT), so long as they obtain a job, which may be a paying position or an unpaid internship. Those who receive a bachelor’s, master’s, or doctorate in a STEM field at a U.S. institution may be granted a one-time 17-month extension of their OPT status if they remain employed.
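
Summing those stages gives a rough upper bound on how long a STEM graduate could remain in F-1 status after finishing a degree. The sketch below uses simplified month arithmetic and assumes the student qualifies for, and uses, each stage in sequence; it is an illustration of the figures quoted above, not immigration advice.

```python
# Rough upper bound on post-graduation time in F-1 status, using simplified
# month arithmetic and the figures quoted in the text (assumed best case).
grace_period_months = 2     # 60-day grace period, approximated as two months
standard_opt_months = 12    # Optional Practical Training, contingent on a job or internship
stem_extension_months = 17  # one-time STEM extension while employed

total_months = grace_period_months + standard_opt_months + stem_extension_months
print(f"Roughly {total_months} months ({total_months / 12:.1f} years) after graduation")
```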

While on F-1 OPT status, an individual may change status to an H-1B (temporary worker) visa. Unlike the F-1 visa, the H-1B visa does allow for dual intent. This means that the holder of an H-1B visa may apply for permanent resident status—that is, a green card—if highly qualified. This path from student status to a green card, circuitous though it may be, is evidently a popular one, especially among those who receive doctorates, as is shown by the data on “stay rates” for foreign doctorate recipients from U.S. universities.

Michael G. Finn of the Oak Ridge Institute for Science and Education has long tracked stay rates of foreign citizens who receive STEM doctorates in the United States. His 2009 report (the most recent available) indicates that of 9,223 foreign nationals who received science and engineering doctorates at U.S. universities in 1999, two-thirds were still in the United States 10 years later. Indeed, among those whose degrees were in physical and life sciences, the proportion remaining in the United States was about three-quarters.

Reform of 214(b) poses something of a dilemma. Although State Department officials understandably prefer not to discuss it in these terms, they evidently value the broad discretion it provides consular officers to exclude individuals who they suspect, based on their application or demeanor, pose a serious risk of absconding and/or overstaying their visa, but without having to provide specific reasons. One might argue that it is important to give consular officers such discretion, since they are, in most cases, the only officials from either the federal government or the relevant academic institution who actually meet the applicant face-to-face.

On the other hand, 214(b) may also serve to deter many otherwise well-qualified potential students from applying, especially those from developing nations, who could become valuable assets for the United States or their home countries with a U.S. STEM education.

What is needed is a more flexible policy that provides the opportunity for qualified international students who graduate with bachelor’s, master’s, or Ph.D. STEM degrees to remain in the United States if they choose to do so, without allowing the student visa to become an easy way to subvert regulations on permanent immigration. It makes no sense to try to draw such distinctions by pretending that someone who is applying to study in the United States can be certain about his or her plans four (or more) years later.

Because 214(b) is part of the Immigration and Nationality Act, this problem requires a legislative fix. The immigration reform bill that passed the Senate in June 2013 (S.744) contains a provision that would allow dual intent for nonimmigrant students seeking bachelor’s or graduate degrees. [The provision applies to students in all fields, not just STEM fields. A related bill under consideration in the House of Representatives (H.R.2131) provides dual intent only for STEM students. However, no action has been taken on it to date.] Some version of this approach, which provides for discretion on the part of the consular officer without forcing the student visa applicant to make a choice that he or she is not really capable of making, is a more rational way to deal with this difficult problem.

Speed up the Visas Mantis clearance process and make it more transparent. A major irritant in the visa and immigration system for scientists, engineers, and STEM students over the past decade has been the delays in visa processing for some applicants. A key reason for these delays is the security review process known as Visas Mantis, which the federal government put in place in 1998 and which applies to all categories of nonimmigrant visas. Although reforms over the past several years have eased the situation, additional reforms could further improve the process.

Initially intended to prevent transfers of sensitive technologies to hostile nations or groups, Visas Mantis was used at first in a relatively small number of cases. It gained new prominence, however, in the wake of 9/11 and the heightened concern over terrorism and homeland security that followed. The number of visa applicants in scientific and engineering fields subject to Mantis reviews took a sudden jump in 2002 and 2003, causing a logjam of applications and no end of headaches for the science, engineering, and higher education communities. The number of Mantis reviews leapt from 1,000 cases per year in 2000 to 14,000 in 2002 and an estimated 20,000 in 2003. The State Department and the other federal agencies involved were generally unprepared for the increased workload and were slow to expand their processing capacity. The result was a huge backlog of visa applications and lengthy delays for many foreign students and scientists and engineers seeking to come to the United States. The situation has improved since then, although there have been occasional slowdowns, most likely resulting from variations in workload or staffing issues.

The Mantis process is triggered when a consular officer believes that an applicant might not be eligible for a visa for reasons related to security. If the consular officer determines that security concerns exist, he or she then requests a “security advisory opinion” (SAO), a process coordinated through an office in the State Department in which a number of federal agencies review the application. (The federal government does not provide the names of the agencies involved in an SAO, but the MIT International Scholars Office lists the FBI, CIA, Drug Enforcement Administration, Department of Commerce, Office of Foreign Assets Control, the State Department Bureau of International Security and Nonproliferation, and others, which seems like a plausible list.) Consideration of the application is held up pending approval by all of the agencies. The applicant is not informed of the details of the process, only that the application is undergoing “administrative processing.”

In most cases, the decision to refer an application for an SAO is not mandatory but is a matter of judgment on the part of the consular officer. Because most consular officers do not have scientific or technical training, they generally refer to the State Department’s Technology Alert List (TAL) to determine whether an application raises security concerns. The current TAL is classified, but the 2002 version is believed to be similar and is widely available on the Internet (for example, at http://www.bu.edu/isso/forms/tal.pdf). It contains such obviously sensitive areas as nuclear technology and ballistic missile systems, as well as “dual-use” areas such as fermentation technology and pharmacology, the applications of which are generally regarded as benign but can also raise security concerns. According to the department’s Foreign Affairs Manual, “Officers are not expected to be versed in all the fields on the list. Rather, [they] should shoot for familiarization and listen for key words or phrases from the list in applicants’ answers to interview questions.” It is also suggested that the officers consult with the Defense and Homeland Security attachés at their station. The manual notes that an SAO “is mandatory in all cases of applicants bearing passports of or employed by states designated as state sponsors of terrorism” (currently Cuba, Iran, Sudan, and Syria) engaged in commercial or academic activities in one of the fields included in the TAL. As an aside, it is worth noting that although there are few if any students from Cuba, Sudan, and Syria in the United States, Iran is 15th among countries of origin of international students, ahead of such countries as France, Spain, and Indonesia, and a majority of Iranian students (55%) are majoring in engineering fields.

In the near-term aftermath of 9/11, there were months when the average time to clear a Mantis SAO reached nearly 80 days. Within a year, however, it had declined to less than 21 days, and more recently, despite the fact that the percentage of F-1, J-1, and H-1B applications subject to Mantis SAO processing reached 10% in 2010, according to State Department data, the average processing time is two to three weeks. Nevertheless, cases in which visas are reported to be in “administrative processing” for several months or even longer are not uncommon. In fact, the State Department tells applicants to wait at least 60 days from the date of their interview or submission of supplementary documents to inquire about the status of an application under administrative processing.

In most cases, Mantis clearances for students traveling under F visas are valid for the length of their educational programs up to four years, as long as they do not change programs. However, students from certain countries (e.g., Iran) require new clearances whenever they leave the United States and seek to reenter. Visas Mantis clearances for students and exchange visitors under J visas and temporary workers under H visas are valid for up to two years, unless the nature of their activity in the United States changes. And B visa clearances are good for a year with similar restrictions.

The lack of technical expertise among consular officers is a concern often expressed among scientists who deal with visa and immigration issues. The fact that most such officers are limited in their ability to make independent judgments (for example, on the need for a Mantis review of a researcher applying for a J-1 exchange visitor visa) may well increase the cost of processing the visa as well as lead to unnecessary delays. The National Academy of Sciences report Beyond Fortress America, released in 2009, suggested that the State Department “include expert vouching by qualified U.S. scientists in the non-immigrant visa process for well-known scholars and researchers.” This idea, attractive as it sounds to the science community, seems unlikely to be acceptable to the State Department. Although “qualified U.S. scientists” could attest to the scientific qualifications and reputations of the applicants, they would not be able to make informed judgments on potential security risks and therefore could not substitute for Mantis reviews.

An alternative that might be more acceptable would be to use scientifically trained staff within the State Department—for example, current and former American Association for the Advancement of Science (AAAS) Science and Technology Policy Fellows or Jefferson Science Fellows sponsored by the National Academies—as advisers to consular officers. Since 1980, AAAS has placed over 250 Ph.D. scientists and engineers from a wide range of backgrounds in the State Department as S&T Policy Fellows. Over 100 are still working there. In the 2013–2014 fellowship year, there were 31. In addition, there were 13 Jefferson Science Fellows—tenured senior faculty in science, engineering, or medicine—at the State Department or the Agency for International Development, a number that has grown steadily each year since the program was started in 2004. These highly qualified individuals, a few of whom are already stationed at embassies and consulates, should be available on an occasional basis to augment consular officers’ resources. They, and other Foreign Service Officers with technical backgrounds, would be especially useful in countries that send large numbers of STEM students and visitors to the United States, such as China, India, and South Korea.

Measures that enhance the capacity of the State Department to make technical judgments could be implemented administratively, without the need for legislative action. A policy that would limit the time available for the agencies involved in an SAO to review an application could also be helpful. Improving the transparency of the Mantis process poses a dilemma. If a visa applicant poses a potential security risk, the government can hardly be expected to inform the applicant about the details of the review process. Nevertheless, since the vast majority of Mantis reviews result in clearing the applicant, it might be beneficial to both the applicant and the government to provide periodic updates on the status of the review without providing details, making the process at least seem a little less Kafkaesque.

Allow scientists and scholars to apply to renew their visas in the United States. Many students, scholars, and scientists are in the United States on long-term programs of study, research, or teaching that may keep them in the country beyond the period of validity of their visas. Although U.S. Citizenship and Immigration Services (USCIS) is able to extend immigration status as necessary to cover these programs, approval of a status extension from USCIS is not the same thing as a valid visa that would enable international travel. Often, because of the need to attend international conferences, attend to personal business, or just visit family, students and scholars can find themselves in a situation where they have temporarily departed the United States but are unable to return without extensive delays for processing a visa renewal abroad. Because consular sections may be reluctant to approve visa applications from those outside their home country, it is not uncommon for applicants to be asked to travel from a third country back to their country of origin for visa processing, resulting in even greater expense and delay.

Until June 2004, the Department of State allowed many holders of E, H-1B, L, and O visas to apply for visa renewal by mail. This program was discontinued in the wake of 9/11 because of a mixture of concerns over security, resource availability, and the implementation of the then-new biometric visa program. Now, however, every nonimmigrant visa holder in the United States has already had electronic fingerprints collected as part of their visa record. Security screening measures have been greatly improved in the past decade. In addition, the Omnibus Spending Bill passed in early 2014 included language directing the State Department to implement a pilot program for the use of videoconferencing technology to conduct visa interviews. The time is right not only to reinstitute the practice of allowing applications for visa renewal inside the United States for those categories previously allowed, but also to expand the pool of those eligible for domestic renewal to include F-1 students and J-1 academic exchange visitors.

Reform the H-1B visa to distinguish R&D scientists and engineers from IT outsourcers. Discussion of scientists, engineers, and STEM students has received relatively little attention in the current debate on immigration policy, with one significant exception: the H-1B visa category. This category covers temporary workers in specialty occupations, including scientists and engineers in R&D (as well as, interestingly enough, fashion models of “distinguished merit and ability”). An H-1B visa is valid for three years, extendable for another three. The program is capped at 65,000 visas each fiscal year, but an additional 20,000 foreign nationals with advanced degrees from U.S. universities are exempt from this ceiling, and all H-1B visa holders who work at universities and at university- and government-affiliated nonprofits, including national laboratories, are also exempt.

Controversy has swirled about the H-1B program for the past several years as advocates of the program, citing shortages of domestic talent in several fields, have sought to expand it, while critics, denying the existence of shortages, express concern instead about unemployment and underemployment among domestically trained technical personnel and have fought expansion. Moreover, although the H-1B visa is often discussed as if it were a means of strengthening U.S. innovation by bringing more scientists and engineers to the United States or retaining foreign scientists and engineers who have gained a degree in this country, the program increasingly seems to serve a rather different purpose. Currently, the overwhelming majority of H-1B recipients work in computer programming, software, and IT. In fact, the top H-1B visa job title submitted by U.S. employers in fiscal 2013 was programmer analyst, followed by software engineer, computer programmer, and systems analyst. At least 21 of the top 50 job titles were in the fields of computer programming, software development, and related areas. The top three company sponsors of H-1B visa recipients were IT firms (Infosys Limited, Wipro, and Tata Consultancy Services, all based in India) as were a majority of the top 25. Many of these firms provide outsourcing of IT capabilities to U.S. firms with foreign (mainly Indian) staff working under H-1Bs. This practice has come under increasing scrutiny recently as the largest H-1B sponsor, Infosys, paid a record $34 million to settle claims of visa abuse brought by the federal government. Visa abuse aside, it is difficult to see how these firms and the H-1B recipients they sponsor contribute to strengthening innovation in the United States.

Reform of the H-1B program has been proposed for years, and although little action has been taken so far, this may change soon as the program is under active discussion as part of the current immigration debate. Modifications included in the Senate bill (S.744) would affect several important provisions of the program. The annual cap on H-1B visas would be increased from 65,000 to a minimum of 115,000, which could be raised to 180,000. The exemption for advanced degree graduates would be increased from 20,000 to 25,000 and would be limited to STEM graduates only. Even more important, the bill would create a new merit-based point system for awarding permanent residency permits (green cards). Under it, applicants would receive points for education, the number increasing from bachelor’s to doctoral degrees. Although there would be a quota for these green cards, advanced degree recipients from U.S. universities would be exempt, provided the recipient received his or her degree from an institution with a Carnegie classification of “very high” or “high” research activity, has an employment offer from a U.S. employer, and received the degree no more than five years before applying. This would be tantamount to “stapling a green card to the diploma”—terminology suggested by some advocates—and would bypass the H-1B program entirely.

The Senate bill retains the H-1B cap exemption for visa holders who work at universities and university- and government-affiliated nonprofits. Expanding this exemption to include all Ph.D. scientists and engineers engaged in R&D is also worth considering, although such a provision does not appear in either the Senate or the House bill. This would put Ph.D. researchers and their employers in a separate class from the firms that use the program for outsourcing of IT personnel. It would remove the issues relating to H-1B scientists and engineers from the debate over outsourcing and allow them to be discussed on their own merits—namely, their contribution to strengthening R&D and innovation in the United States.

Expand the Visa Waiver Program to additional countries. The Visa Waiver Program (VWP) allows citizens of a limited number of countries (currently 37) to travel to the United States for certain purposes without visas. Although it does not apply to students and exchange visitors under F and J visas, it does include scientists and engineers attending conferences and conventions who would otherwise travel under a B visa, as well as individuals participating in short-term training (less than 90 days) and consulting with business associates.

There is little doubt that the ability to travel without going through the visa process—application, interview, security check—greatly facilitates a visit to the United States for those eligible. The eligible countries include mainly the European Union nations plus Australia, New Zealand, South Korea, Singapore, and Taiwan. Advocates of reforming visa policy make a convincing argument that expanding the program to other countries would increase U.S. security. Edward Alden and Liam Schwartz of the Council on Foreign Relations suggest just that in a 2012 paper on modernizing the U.S. visa system. They note that travelers under the VWP are still subject to the Electronic System for Travel Authorization (ESTA), a security screening system that vets individuals planning to come to the United States with the same intelligence information that is used in visa screening. Security would be enhanced rather than diminished by expanding the VWP, they argue, because governments of the countries that participate in the program are required to share security and criminal intelligence information with the U.S. government.

Visa-free travel to conferences and for short-term professional visits by scientific and engineering researchers from the 37 countries in the VWP makes collaboration with U.S. colleagues much easier than it would otherwise be. And it would undoubtedly be welcomed by those in countries that are likely candidates for admission to the program. Complicating matters, however, is legislation that requires the Department of Homeland Security (DHS) to implement a biometric exit system (i.e., one based on taking fingerprints of visitors as they leave the country and matching them with those taken on entry) before it can expand the VWP. The federal government currently has a “biographic” system that matches names on outbound manifests provided by the airlines with passport information obtained by U.S. Customs and Border Protection on a person’s entry. A biometric exit system would provide enhanced security, but the several-billion-dollar cost and the logistics of implementing a control system pose formidable barriers. Congress and the Executive Branch have engaged in a tug of war over the planning and development of such a system for over a decade. (The Intelligence Reform and Terrorism Prevention Act of 2004 called for DHS to develop plans for accelerating implementation of such a system, but the department has missed several deadlines and stated in mid-2013 that it was intending to incorporate these plans in its budget for fiscal year 2016.) Should DHS get to the point of actually implementing a biometric exit system, it could pave the way for expanding the VWP. In the meantime, a better solution would be to decouple the two initiatives. S.744 does just that by authorizing the Secretary of Homeland Security to designate any country as a member of the VWP so long as it meets certain conditions. Expansion of the VWP is also included in the House immigration reform bill known as the JOLT Act. These are hopeful signs, although the comprehensive immigration reform logjam continues to block further action.

Action in several other areas can also help to improve the visa process. The federal government, for example, can encourage consulates to use their recently expanded authority to waive personal interviews. In response to an executive order issued by President Obama in January 2012, the State Department initiated a two-year visa interview waiver pilot program. Under the program, visa-processing posts in 28 countries were authorized to waive interviews with certain visa applicants, especially repeat visitors in a number of visa classes. Brazil and China, which have large numbers of visa applicants, were among the initial countries involved in this experimental program. U.S. consulates in India joined the program a few months later. The initiative was welcomed in these countries and regarded as successful by the Departments of State and Homeland Security. The program was made permanent in January 2014. Currently, consular officers can waive interviews for applicants for renewal of any nonimmigrant visa as long as they are applying for a visa in the same classification within 12 months of the expiration of the initial visa (48 months in some visa classes).

Although the interview waiver program was not specifically aimed at scientists, and statistics regarding their participation in the program are not available, it seems likely that they were and will continue to be among the beneficiaries now that the program has been made permanent. The initiative employs a risk-based approach, focusing more attention on individuals who are judged to be high-risk travelers and less on low-risk persons. Since it allows for considerable discretion on the part of the consulate, its ultimate value to the scientific and educational communities will depend on how that discretion is used.

The government can also step up its efforts to increase visa-processing capacity. In response to the 2012 executive order, the State Department and DHS launched an initiative to increase visa-processing capacity in high-demand countries and reduce interview wait times. In a report issued in August 2012 on progress during the first 180 days of activity under the initiative, the two agencies projected that by the end of 2012, “State will have created 50 new visa adjudicator positions in China and 60 in Brazil.” Furthermore, the State Department deployed 220 consular officers to Brazil on temporary duty and 48 to China. The consulates also increased working hours, and in Brazil they remained open on occasional Saturdays and holidays. These moves resulted in sharp decreases in processing time.

These initiatives have been bright spots in an otherwise difficult budget environment for the State Department. That budget environment, exacerbated by sequestration, increases the difficulty of making these gains permanent and extending them to consular posts in other countries with high visa demand. This is a relatively easy area to neglect, but one in which modest investments, especially in personnel and training, could significantly improve the face that the United States presents to the world, including the global scientific, engineering, and educational communities.

Looking at U.S. universities and laboratories today, one might well ask whether there really is a problem with the nation’s visa and immigration policies. After all, the diversity of nationalities among scientists, engineers, and students in U.S. scientific institutions is striking. At the National Institutes of Health, over 60% of the approximately 4,000 postdocs are neither U.S. citizens nor permanent residents. They come from China, India, Korea, and Japan, as well as Europe and many other countries around the world. The Massachusetts Institute of Technology had over 3,100 international students in 2013, about 85% of them graduate students, representing some 90 countries. The numbers are similar at Stanford, Berkeley, and other top research universities.

So how serious are the obstacles for international scientists and students who really want to come to the United States? Does the system really need to be streamlined? How urgent are the fixes that I have proposed here?

The answers to these questions lie not in the present and within the United States, but in the future and in the initiatives of the nations with which we compete and cooperate. Whereas the U.S. system creates barriers, other countries, many with R&D expenditures rising much more rapidly than in the United States, are creating incentives to attract talented scientists to their universities and laboratories. China, India, Korea, and other countries with substantial scientific diasporas have developed programs to encourage engagement with their expatriate scientists and potentially draw them back home.

In the long run, the reputations of U.S. institutions alone will not be sufficient to maintain the nation’s current advantage. The decline in enrollments among international students after 9/11 shows how visa delays and immigration restrictions can affect students and researchers. As long as the United States continues to make international travel difficult for promising young scholars such as Alena Shkumatava, it is handicapping the future of U.S. science and the participation of U.S. researchers in international collaborations. Streamlining visa and immigration policies can make a vital contribution to ensuring the continued preeminence of U.S. science and technology in a globalized world. We should not allow that preeminence to be held hostage to the nation’s inability to enact comprehensive immigration reform.

Albert H. Teich (ateich@gwu.edu) is research professor of science, technology, and international affairs at the Elliott School of International Affairs at George Washington University, Washington, DC. Notes and acknowledgements are available at http://alteich.com/visas/Notes.htm.

Forum

Climate deadlock

In “Breaking the Climate Deadlock” (Issues, Summer 2014), David Garman, Kerry Emanuel, and Bruce Phillips present a thoughtful proposal for greatly expanded public- and private-sector R&D aimed at reducing the costs, increasing the reliability, managing the risks, and expanding the potential to rapidly scale up deployment of a broad suite of low- and zero-carbon energy technologies, from renewables to advanced nuclear reactor technologies to carbon capture and storage. They also encourage dedicated funding of research into potential geoengineering technologies for forced cooling of the climate system. Such an “all-of-the-above” investment strategy, they say, might be accepted across the political spectrum as a pragmatic hedge against uncertain and potentially severe climate risks and hence be not only sensible but feasible to achieve in our nation’s highly polarized climate policy environment.

It is a strong proposal as far as it goes. Even as the costs of wind and solar photovoltaics are declining, and conservative states such as Texas and Kansas are embracing renewable energy technologies and policies, greater investment in research aimed at expanding the portfolio of commercially feasible and socially acceptable low-carbon electricity is needed to accelerate the transition to a fully decarbonized energy economy. And managing the risks of a warming planet requires contingency planning for climate emergencies. As challenging as it may be to contemplate the deployment of most currently proposed geoengineering schemes, our nation has a responsibility to better understand their technical and policy risks and prospects should they ultimately need to be considered.

But it does not go far enough. Garman et al.’s focus on R&D aimed primarily at driving down the “cost premium” of low-carbon energy technologies relative to fossil fuels neglects the practical need and opportunity to also incorporate into the political calculus the substantial economic risks and costs of unmitigated climate change. These risks and costs are becoming increasingly apparent to local civic and political leaders in red and blue states alike as they face more extensive storm surges and coastal flooding, more frequent and severe episodes of extreme summer heat, and other climate-related damages.

The growing state and local experience of this “cost of inaction premium” for continued reliance on fossil fuels is now running in parallel with the experience of economic benefits resulting from renewable electricity standards and energy efficiency standards in several red states. Together, these state and local experiences may do as much as or more than expanding essential investments in low-carbon energy R&D to break the climate deadlock and rebuild bipartisan support for sensible federal climate policies.

PETER C. FRUMHOFF
Director of Science and Policy
Union of Concerned Scientists
Cambridge, Massachusetts
pfrumhoff@ucsusa.org

 

We need a new era of environmentalism to overcome the polarization surrounding climate change issues, one that takes conservative ideas and concerns seriously and ultimately engages ideological conservatives as full partners in efforts to reduce carbon emissions.

Having recently founded a conservative animal and environmental advocacy group called Earth Stewardship Alliance (esalliance.org), I applaud “Breaking the Climate Deadlock.” The authors describe a compelling policy framework for expanding low-carbon technology options in a way that maintains flexibility to manage uncertainties.


The article also demonstrates the most effective approach to begin building conservative support for climate policies in general. The basic elements are to respect conservative concerns about climate science and to promote solutions that are consistent with conservative principles. Although many climate policy advocates see conservatives as a lost cause, relatively little effort has been made to try this approach.

Thoughtful conservatives generally agree that carbon emissions from human activities are increasing global carbon dioxide levels, but they question how serious the effects will be. These conservatives are often criticized for denying the science even though, as noted by “Breaking the Climate Deadlock,” there is considerable scientific uncertainty surrounding the potential effects. This article, however, addresses this legitimate conservative skepticism by describing how a proper risk assessment justifies action to avoid potentially catastrophic impacts even if there is significant uncertainty.

The major climate policies that have been advanced thus far in the United States are also contrary to conservative principles. All of the cap-and-trade bills that Congress seriously considered during the 2000s would have given away emissions allowances, making the legislation equivalent to a tax increase. The rise in prices caused by a cap-and-trade program’s requirement to obtain emissions allowances is comparable to a tax. Giving away the allowances foregoes revenue that could be used to reduce other taxes and thus offset the cap-and-trade tax. Many climate policy advocates wanted the allowances to be auctioned, but that approach could not gain traction in Congress, because the free allowances were needed to secure business support.

After the failure of cap-and-trade, efforts turned to issuing Environmental Protection Agency (EPA) regulations that reduce greenhouse gas emissions. The EPA’s legal authority for the regulations is justified by some very general provisions of the Clean Air Act. Although the courts will probably uphold many of these regulations, the policy decisions involved are too big to be properly made by the administration without more explicit congressional authorization.

Despite the polarization surrounding climate change, there continues to be support in the conservative intelligentsia for carbon policies consistent with their principles: primarily ramping up investment in low-carbon technology research, development, and demonstration and a “revenue-neutral” carbon tax in which the increased revenues are offset by cutting other taxes.

Earth Stewardship Alliance believes the best way to build strong conservative support for these policies is by making the moral case for carbon emissions reductions, emphasizing our obligation to be good stewards. We are hopeful that conservatives will ultimately decide it is the right thing to do.

JIM PRESSWOOD
Executive Director
Earth Stewardship Alliance
Arlington, Virginia
info@esalliance.org

 

David Garman, Kerry Emanuel, and Bruce Phillips lay out a convincing case for the development of real low-carbon technology options. This is not just a theoretical strategy. There are some real opportunities before us right now to do this, ones that may well appeal across the political spectrum:

The newly formed National Enhanced Oil Recovery Initiative (a coalition of environmental groups, utilities, labor, oil companies, coal companies, and environmental and utility regulators) has proposed a way to bring carbon capture and storage projects to scale, spurring in-use innovation and driving costs down. Carbon dioxide captured from power plants has a value—as much as $40 per ton in the Gulf region—because it can be used to recover more oil from existing fields. Capturing carbon dioxide, however, costs about $80 per ton. A tax credit covering that gap could spur a substantial number of innovative projects. Although oil recovery is not the long-term plan for carbon capture, it will pay for much capital investment and the early innovation that follows in its wake. The initiative’s analysis suggests that the net impact on the U.S. Treasury is likely to be neutral, because tax revenue from domestic oil that displaces imports can equal or exceed the cost of the tax credit.

There are dozens of U.S.-originated designs for advanced nuclear power reactors that could dramatically improve safety, lower costs, and make wastes both smaller in volume and less harmful. The costs of pushing these designs forward to demonstration are modest, likely in the range of $1 billion to $2 billion per year, or about half a percent of the nation’s electric bill. The United States remains the world’s center of nuclear innovation, but many companies, frustrated by the lack of U.S. government support, are looking to demonstrate their first-of-a-kind designs in Russia and China. This is a growth-generating industry that the United States can recapture.

The production tax credit for conventional wind power has expired, due in part to criticisms that the tax credit was simply subsidizing current technology that has reached the point of diminishing cost reductions. But we can replace that policy with a focused set of incentives for truly innovative wind energy designs that increase capacity and provide grid support, thus enhancing the value of wind energy and bringing it closer to market parity.

Gridlock over climate science needn’t prevent practical movement forward to hedge our risks. A time-limited set of policies such as those above would drive low-carbon technology closer to parity with conventional coal and gas, not subsidize above-market technologies indefinitely. Garman and his colleagues have offered an important bridge-building concept; it is time for policymakers to take notice and act.

ARMOND COHEN
Executive Director
Clean Air Task Force
Boston, Massachusetts
armond@catf.us

Archives

Twister

To create his self-portrait, Twister, Dan Collins, a professor of intermedia in the Herberger Institute School of Art at Arizona State University (ASU), spun on a turntable while being digitally scanned. The data were recorded in 1995, but he had to wait more than five years before he could find a computer capable of doing what he wanted; he then used a customized machine to generate a model from the data. Collins initially produced a high-density foam prototype of the sculpture, and later created an edition of three bonded marble versions of the work, one of which is in the collection of ASU’s Art Museum.


DAN COLLINS, Twister, 3D laser-scanned figure, Castable bonded marble, 84′′ high, 1995–2012.

Natural Histories

400 Years of Scientific Illustration from the Museum’s Library

In an age of the internet, social media networks, and smartphones, when miraculous devices demand our attention with beeps, buzzes, and spiffy animations, it’s hard to imagine a time when something as quiet and unassuming as a book illustration was considered cutting-edge technology. Yet, since the early 1500s, illustration has been essential to scientists for sharing their work with colleagues and with the public.


Young Hippo This image from the Zoological Society of London provides two views of a young hippo in Egypt before it was transported to the London Zoo. Joseph Wolf (1820–1899) based the image on a sketch made by the British Consul on site in Cairo.


Rhino by Dürer This depiction of a rhino by German artist Albrecht Dürer, which appeared in Historia animalium, inaccurately features ornate armor, scaly skin, and odd protrusions.


Mandrill This mandrill (Mandrillus sphinx), with its delicate hands, cheerful expression, and almost upright posture, seems oddly human. While many images in Johann Christian Daniel von Schreber’s Mammals Illustrated (1774–1846) were quite accurate, those of primates generally were not.


Darwin’s Rhea John Gould drew this image of a Rhea pennata, a flightless bird native to South America, for The zoology of the voyage of H.M.S. Beagle (1839–1843), a five-volume work edited by Charles Darwin. The specimens Darwin collected during his travels on H.M.S. Beagle became a foundation for Darwin’s theory of evolution by natural selection.


A variety of printing techniques, ranging from woodcuts to engraving to lithography, proved highly effective for spreading new knowledge about nature and human culture to a growing audience. Illustrated books allowed the lay public to share in the excitement of discoveries, from Antarctica to the Amazon, from the largest life forms to the microscopic.


Two-toed Sloth Albertus Seba’s (1665–1736) four-volume Thesaurus (after Thesaurus of animal specimens) illustrated the Dutch apothecary’s enormous collection of animal and plant specimens amassed over the years. Using preserved specimens, Seba’s artists could depict anatomy accurately—but not behavior. For example, this two-toed sloth is shown climbing upright, even though in nature, sloths hang upside down.

The early impact of illustration in research, education, and communication arguably formed the foundation for how current illustration and imaging techniques are utilized today. Now scientists have a vast number of imaging tools that are harnessed in a variety of ways: infrared photography, scanning electron microscopes, computed tomography scanners and more. But there is still a role for illustration in making the invisible visible. How else can we depict extinct species such as dinosaurs?


Egg Collection In his major encyclopedia of nature, Allgemeine Naturgeschichte für alle Stände (A general natural history for everyone), German naturalist Lorenz Oken (1779–1851) grouped animals based not on science but on philosophy. Nevertheless, his encyclopedia proved to be a popular and enduring work. Here Oken illustrates variation in egg color and markings found among water birds.


Paper Nautilus Italian naturalist Giuseppe Saverio Poli (1746–1825) is considered the father of malacology—the study of mollusks. In his landmark work Testacea utriusque Siciliae…(Shelled animals of the Two Sicilies…), Poli first categorized mollusks by their internal structure, and not just their shells, as seen in his detailed illustration of a female paper nautilus (Argonauta argo).


Octopus From Conrad Gessner’s Historia animalium (1551–1558), this octopus engraving is a remarkably good likeness—except for its round, rather than slit-shaped, pupils, a detail indicating that the artist did not draw from a live specimen.


Siphonophores German biologist Ernst Haeckel illustrated and described thousands of deep-sea specimens collected during the 1873–1876 H.M.S. Challenger expedition, and used many of those images to create Kunstformen der Natur (Art forms of nature). Haeckel used a microscope to capture the intricate structure of these siphonophores—colonies of tiny, tightly packed and highly specialized organisms—that look (and sting!) like sea jellies.


Angry Puffer Fish and Others Louis Renard’s artists embellished their work to satisfy Europeans’ thirst for the unusual. Some illustrations in Poissons, écrevisses et crabes, de diverses couleurs et figures extraordinaires…, like this one, include fish with imaginative colors and patterns and strange, un-fishlike expressions.

Illustration is also used to clearly represent complex structures, color gradations, and other essential details. The nearly forgotten books stored away quietly in libraries contain the ancestral ideas of current practices and methodologies of illustration.

“In the days before photography and printing, original art was the only way to capture the likeness of organisms, people, and places, and therefore the only way to share this information with others,” said Tom Baione, the Harold Boeschenstein Director of the Department of Library Services at the American Museum of Natural History. “Printed reproductions of art about natural history enabled many who’d never seen an octopus, for example, to try to begin to understand what an octopus looked like and how its unusual features might function.”


Frog Dissection A female green frog (Pelophylax kl. esculentus) with egg masses is shown in dissection above a view of the frog’s skeleton in the book Historia naturalis ranarum nostratium…(Natural history of the native frogs…) from 1758. Shadows and dissecting pins add to the realism.


Pineapple with Caterpillar In Metamorphosis insectorum Surinamensium…(1719), German naturalist and artist Maria Sibylla Merian documented the flora and fauna she encountered during her two-year trip to Surinam, in South America, with her daughter. Here she creatively depicts a pineapple hosting a butterfly and a red-winged insect, both shown in various stages of life.

The impact and appeal of printing technologies are at the heart of the 2012 book edited by Tom Baione, Natural Histories: Extraordinary Rare Book Selections from the American Museum of Natural History Library. Inspired by the book, the current exhibit at the museum, Natural Histories: 400 Years of Scientific Illustration from the Museum’s Library, explores the integral role illustration has played in scientific discovery through 50 large-format reproductions from seminal holdings in the Museum Library’s Rare Book collection. The exhibition is on view through October 12, 2014 at the American Museum of Natural History in New York City. All images © AMNH/D. Finnin.


Tasmanian Tiger English ornithologist and taxidermist John Gould’s images and descriptions for the three-volume work The mammals of Australia (1863) remain an invaluable record of Australian animals that became increasingly rare with European settlement. The “Tasmanian tiger” pictured here was actually a thylacine (Thylacinus cynocephalus), the world’s largest meat-eating marsupial until it went extinct in 1936.


JUSTINE SEREBRIN, Gateway, Digital painting, 40 × 25 inches, 2013.

What Fish Oil Pills Are Hiding

DAVID SCHLEIFER

ALISON FAIRBROTHER

One Woman’s Quest to Save the Chesapeake Bay from the Dietary Supplement Industry

Julie Vanderslice thought fish were disgusting. She didn’t like to look at them. She didn’t like to smell them. Julie lived with her mother, Pat, on Cobb Island, a small Maryland community an hour south and a world away from Washington, D.C. Her neighbors practically lived on their boats in warm weather, fishing for stripers in the Chesapeake Bay or gizzard shad in shallow creeks shaded by sycamores. Julie had grown up on five acres of woodland in Accokeek, Maryland, across the Potomac River from Mount Vernon, George Washington’s plantation home. The Potomac River wetlands in Piscataway Park were a five-minute bike ride away, on land the federal government had kept wild to preserve the view from Washington’s estate. Her four brothers and three sisters kept chickens, guinea pigs, dogs, cats, and a tame raccoon. They went fishing in the Bay as often as they could. But Julie preferred interacting with the natural world from inside, on a comfortable couch in her living room, where she read with the windows open so she could catch the briny smell of the Bay. “No books about anything slimy or smelly, thank you!” she told her family at holidays.

So it was with some playfulness that Pat’s friend Ray showed up on Julie’s doorstep one afternoon in the summer of 2010 to present her with a book called The Most Important Fish in the Sea. Ray was an avid recreational fisherman, who lived ten miles up the coast on one of the countless tiny inlets of the Chesapeake. The Chesapeake Bay has 11,684 miles of shoreline—more than the entire west coast of the United States; the watershed comprises 64,000 square miles.

“It’s about menhaden, small forage fish that grow up in the Chesapeake and migrate along the Atlantic coast. You’ll love it,” he told her, chuckling as he handed over the book. “But seriously, maybe you’ll be moved by it,” he said, his tone changing. “It says that when John Smith came here in the seventeenth century, there were so many menhaden in the Bay that he could catch them with a frying pan.”

Julie shuddered at the image of so many slippery little fish.

“Now the menhaden are vanishing,” Ray said. “I want you to read this book. I want Delegate Murphy to read this book. And I want the two of you to do something about it.”

Julie was the district liaison for Delegate Peter Murphy, a Democrat representing Charles County in the Maryland House of Delegates. She had started working for Murphy in February 2009 as a photographic intern, tasked with documenting his speeches and meetings with constituents. In her early fifties, Julie was older than the average intern. For ten years, she had sold women’s cosmetics and men’s fragrances at a Washington, D.C. branch of Woodward & Lothrop, until the legendary southern department store chain liquidated in 1995. She had moved to Texas to take a job at another department store in Houston, but it hadn’t felt right. Julie was a Marylander. She needed to live by the Chesapeake Bay. Working in local politics reconnected her to her community, and it wasn’t long before Murphy asked her to join his staff full-time. Now, she worked in his office in La Plata, the county seat, and attended events in the delegate’s stead—like the dedication of a new volunteer firehouse on Cobb Island or the La Plata Warriors high school softball games.

Julie picked up the menhaden book one summer afternoon, pretty sure she wouldn’t make it past the first chapter. She examined the cover, which featured a photo of a small silvery fish with a wide, gaping mouth and a distinctive circular mark behind its eye. “This is the most important fish in the sea?” Julie muttered to herself. She settled back into her sofa and sighed. Her mother was out at a church event, probably chattering away with Ray. Connected to the mainland by a narrow steel-girder bridge, Cobb Island was a tiny spit of land less than a mile long where the Potomac River meets the Wicomico. The island’s population was barely over 1,100. What else was there to do? She turned to the first page and began to read.

For the next few days, The Most Important Fish in the Sea followed Julie wherever she went. She read it out on the porch while listening to the gently rolling waters of Neale Sound, which separated Cobb Island from the mainland. She read it in bed, struggling to keep her eyes open so she could fit in just one more chapter. She finished the book one afternoon just as Pat came through the screen door, arms laden with a bag full of groceries. Pat found Julie standing in the middle of the living room, angrily clutching the book. Pat was dumbfounded. “You don’t like to pick crab meat out of a crab!” she said. “You wear water-shoes at the beach! Here you are all worked up over menhaden!”

Menhaden are a critical link in the Atlantic food chain, and the Chesapeake Bay is critical to the fish’s lifecycle. Menhaden eggs hatch year round in the open ocean, and the young fish swim into the Chesapeake to grow in the warm, brackish waters. Also known colloquially as bunker, pogies, or alewives, they are the staple food for many commercially important predator fish, including striped bass, bluefish, and weakfish, which are harvested along the coast in a dozen different states, as well as for sharks, dolphins, and blue whales. Ospreys, loons, and seagulls scoop menhaden from the top of the water column, where the fish ball together in tight rust-colored schools. As schools of menhaden swim, they eat tiny plankton and algae. As a result of their diet, menhaden are full of nutrient-rich oils. They are so oily that when ravaged by a school of bluefish, for example, menhaden will leave a sheen of oil in their wake.

Wayne Levin

Imagine seeing what you think is a coral reef, only to realize that there is movement within the shape and that it is actually a massive school of fish. That is what happened to Wayne Levin as he swam in Hawaii’s Kealakekua Bay on his way to photograph dolphins. The fish he encountered were akule, the Hawaiian name for big-eyed scad. In the years that followed he developed a fascination with the beauty and synchronicity of these schools of akule, and he spent a decade capturing them in thousands of photographs.

Akule have been bountiful in Hawaii for centuries. Easy to see when gathering in the shallows, the dense schools form patterns, like unfurling scrolls, then suddenly contract into a vortex before unfurling again and moving on. In his introduction to Akule (2010, Editions Limited), a collection of Levin’s photos, Thomas Farber describes a photo session: “What transpired was a dance, dialogue, or courtship of and with the akule….Sometimes, for instance, he faced away from them, then slowly turned, and instead of moving away the school would…come towards him. Or, as he advanced, the school would open, forming a tunnel for him. Entering, he’d be engulfed in thousands of fish.”

Wayne Levin has photographed numerous aspects of the underwater world: sea life, surfers, canoe paddlers, divers, swimmers, shipwrecks, seascapes, and aquariums. After a decade of photographing fish schools, he turned from sea to sky, and flocks of birds have been his recent subject. His photographs are in the collections of the Museum of Modern Art, New York; the Museum of Photographic Arts, San Diego; The Honolulu Museum of Art; the Hawaii State Foundation on Culture and the Arts, Honolulu; and the Mariners’ Museum, Newport News, Virginia. His work has been published in Aperture, American Photographer, Camera Arts, Day in the Life of Hawaii, Photo Japan, and most recently LensWork. His books include Through a Liquid Mirror (1997, Editions Limited), and Other Oceans (2001, University of Hawaii Press). Visit his website at waynelevinimages.com.

Alana Quinn


WAYNE LEVIN, Column of Akule, 2000.


WAYNE LEVIN, Filming Akule, 2006.

For hundreds of years, people living along the Atlantic Coast caught menhaden for their oils. Some scholars say the word menhaden likely derives from an Algonquian word for fertilizer. Pre-colonial Native Americans buried whole menhaden in their cornfields to nourish their crops. They may have taught the Pilgrims to do so, too.

The colonists took things a step further. Beginning in the eighteenth century, factories along the East Coast specialized in cooking menhaden in giant vats to separate their nutrient-rich oil from their protein—the former for use as fertilizer and the latter for animal feed. Dozens of menhaden “reduction” factories once dotted the shoreline from Maine to Florida, belching a foul, fishy smell into the air.

Until the middle of the twentieth century, menhaden fishermen hauled thousands of pounds of net by hand from small boats, coordinating their movements with call-and-response songs derived from African-American spirituals. But everything changed in the 1950s with the introduction of hydraulic vacuum pumps, which enabled many millions of menhaden to be sucked out of the ocean each day—so many fish that companies had to purchase carrier ships with giant holds below deck to ferry the menhaden to shore. According to National Oceanic and Atmospheric Administration records, in the past sixty years, the reduction industry has fished 47 billion pounds of menhaden out of the Atlantic and 70 billion pounds out of the Gulf of Mexico.

Reduction factories that couldn’t keep up went out of business, eliminating the factory noises and fishy smells, much to the relief of the growing number of wealthy homeowners purchasing seaside homes. By 2006, every last company had been bought out, consolidated, or pushed out of business—except for a single conglomerate called Omega Protein, which operates a factory in Reedville, a tiny Virginia town halfway up the length of the Chesapeake Bay. A former petroleum company headquartered in Houston and once owned by the Bush family, Omega Protein continues to sell protein-rich fishmeal for aquaculture, animal feed for factory farms, menhaden oil for fertilizer, and purified menhaden oil, which is full of omega-3 fatty acids, as a nutritional supplement. For the majority of the last thirty years, the Reedville port has landed more fish than any other port in the continental United States by volume.

The company also owns two factories on the shores of the Gulf of Mexico, which grind up and process Gulf menhaden, the Atlantic menhaden’s faster-growing cousin. But Hurricane Katrina in 2005, followed by the 2010 Deepwater Horizon oil disaster in the Gulf of Mexico, forced Omega Protein to rely increasingly on Atlantic menhaden to make up for its damaged factories and shortened fishing seasons in the Gulf—much to the dismay of fishermen and residents along the Atlantic coast.

These days, on a normal morning in Reedville, Virginia, a spotter pilot climbs into his plane just after sunrise to scour the Chesapeake and Atlantic coastal waters, searching for reddish-brown splotches of menhaden. When he spots them, the pilot signals to ship captains, who surround the school with a net, draw it close, and vacuum the entire school into the ship’s hold.

Julie Vanderslice had never seen the menhaden boats or spotter planes, but she was horrified by the description of the ocean carnage documented in The Most Important Fish in the Sea. The author, H. Bruce Franklin, is an acclaimed scholar of American history and culture at Rutgers University, who has written treatises on everything from Herman Melville to the Vietnam War. But he is also a former deckhand who fishes several times a week in Raritan Bay, between New Jersey and Staten Island.

Julie was riveted by a passage in which Franklin describes going fishing one day for weakfish in his neighbor’s boat. Weakfish are long, floppy fish that feed lower in the water column than bluefish, which thrash about on top. Franklin’s neighbor angled his boat toward a chaotic flock of gulls screaming and pounding the air with their wings. The birds were diving into the water and fighting off muscular bluefish to be the first to reach a school of menhaden. The two men had a feeling that weakfish would be lurking below the school of menhaden, attempting to pick off fish from the bottom. But before Franklin and his neighbor could reach the school, one of Omega Protein’s ships sped past, set a purse seine around the menhaden, and used a vacuum pump to suck up hundreds of thousands of fish and all the bluefish and weakfish that had been feeding on them. For days afterward, Franklin observed, there were hardly any fish at all in Raritan Bay.

That moment compelled Franklin to uncover the damage Omega Protein was doing up and down the coast. The company’s annual harvest of between a quarter and a half billion pounds of menhaden had effects far beyond depleting the once-plentiful schools of little fish. Scientists and environmental advocates contended that by vacuuming up menhaden for fishmeal and fertilizer, Omega Protein was pulling the linchpin out of the Atlantic ecosystem: starving predator fish, marine mammals, and birds; suffocating sea plants on the ocean floor; and pushing an entire ocean to the brink of collapse. Despite being published by a small environmental press, The Most Important Fish in the Sea was lauded in the Washington Post, The Philadelphia Inquirer, The Baltimore Sun, and the journal Science. The New York Times discussed it on its opinion pages, citing dead zones in the Chesapeake Bay and Long Island Sound where too few menhaden were left to filter algae out of the water.

After finishing the book, Julie couldn’t get menhaden out of her head. She had to get the book into Delegate Murphy’s hands. She bought a second copy, prepared a two-page summary, and plotted her strategy.

Julie didn’t see the delegate every day because she worked in his district office rather than in Annapolis, home of the country’s oldest state capitol in continuous legislative use. But that summer, Delegate Murphy was campaigning for re-election and was often closer to home. He was scheduled to make an appearance at a local farmers market in Waldorf a few weeks after Julie had finished the book. Waldorf was at the northern edge of Murphy’s district, close enough to Washington that the weekly farmers market would be crowded with an evening rush of commuters on their way home from D.C. But in the late afternoon, the delegate’s staff, decked out in yellow Peter Murphy T-shirts, nearly outnumbered the shoppers browsing for flowers and honey.

Delegate Murphy was at ease chatting with neighbors and shaking hands with constituents. He was tall and thin, with salt-and-pepper hair and lively eyes. Julie recognized his trademark campaign uniform: a blue polo shirt tucked neatly into slacks. He had been a science teacher before entering state politics, and he had a deep, calming voice. As a grandfather to two young children, he knew how to captivate a skeptical audience with a good story. Julie recalled the day she first met him, at a sparsely attended town hall meeting at Piccowaxen Middle School. He struck her immediately as a genuine, thoughtful man on the right side of the issues she cared about. Several months later, she heard that Delegate Murphy was speaking at the Democratic Club on Cobb Island and made a point to attend. Afterward, she waited for him in the receiving line. When it was her turn to speak, Julie asked if he was hiring.


WAYNE LEVIN, Ring of Akule, 2000.

Just a few short years later, Julie felt comfortable enough with Delegate Murphy to propose a mission. Mustering her courage as a band warmed up at the other end of the market, Julie seized her moment. “Delegate Murphy, you have to read this!” she said, pushing the book into his hands. “There’s this fish called menhaden that you’ve never heard of. One company in Virginia is vacuuming millions of them out of the Chesapeake Bay, taking menhaden out of the mouths of striped bass and osprey and bluefish and dolphins and all the other fish and animals that rely on them for nutrients. This is why recreational fishermen are always complaining about how hungry the striped bass are! This is why our Bay ecosystem is so unhinged! One company is taking away all our menhaden,” she declared. “We have to stop them.”

Delegate Murphy peered at her with a trace of a smile. “I’ll read it, Julie,” he said.

For months afterward, Julie stayed late at the office, reading everything she could find about menhaden. She learned that every state along the Atlantic Coast had banned menhaden reduction fishing in state waters—except Virginia, where Omega Protein’s Reedville plant was based, and North Carolina, where a reduction factory had recently closed. (North Carolina would ban menhaden reduction fishing in 2012.) The largest slice of Omega Protein’s catch came from Virginia’s ocean waters and from the state’s portion of the Chesapeake Bay, preventing those fish from swimming north into Maryland’s section of the Bay and south into the Atlantic to populate the shores of fourteen other states along the coast.

Beyond the Chesapeake, Omega Protein’s Virginia-based fleet could pull as many menhaden as they wanted from federal waters, designated as everything between three and two hundred miles offshore, from Maine to Florida. Virginia was a voting member of the Atlantic States Marine Fisheries Commission (ASMFC), the agency that governs East Coast fisheries. But the ASMFC had never taken any steps to limit the amount of fish Omega Protein could lawfully catch along the Atlantic coast. Virginia’s legislators happened to be flush with campaign contributions from Omega Protein.

Julie clicked through articles on fishermen’s forums and coastal newspapers from every eastern state. She read testimony from citizens who described how the decimation of the menhaden population in the Chesapeake and in federal waters had affected the entire Atlantic seaboard. Bird watchers claimed that seabirds were suffering from lack of menhaden. Recreational fishermen cited scrawny bass and bluefish, and wondered whether they were lacking protein-packed menhaden meals. Biologists cut open the stomachs of gamefish and found fewer and fewer menhaden inside. Whale watchers drove their boats farther out to sea in search of blue whales, which used to breach near the shore, surfacing open-mouthed upon oily schools of menhaden. The dead zones in the Chesapeake Bay grew larger, and some environmentalists connected the dots: menhaden were no longer plentiful enough to filter the water as they had in the past. In 2010, the ASMFC estimated that the menhaden population had declined to a record low and was nearly 90 percent smaller than it had been twenty-five years earlier.

Of course, Omega Protein had its own experts on staff, whose estimates better suited the company’s business interests. At a public hearing in Virginia about the menhaden fishery, Omega Protein spotter pilot Cecil Dameron said, “I’ve flown 42,000 miles looking at menhaden…. I’m here to tell you that the menhaden stock is in better shape than it was twenty years ago, thirty years ago. There’s more fish.”

One humid evening at the end of August, Delegate Murphy held a pre-election fundraiser and rally in his backyard, a grassy spot that sloped down toward the Potomac River. Former Senator Paul Sarbanes stopped by, and campaign staffers brought homemade noodle salads, cheeses, and a country ham. With the election less than two months away, the staff was working overtime, but they had hit their fundraising goal for the day. At the end of the event, as constituents headed to their cars, Delegate Murphy found Julie sitting at one of the collapsible tables littered with used napkins and glasses of melting ice. Julie was accustomed to standing for hours when she worked in the department store, but there was something about fundraising that made her feel like putting her feet up.


WAYNE LEVIN, Circling Akule, 2000.


WAYNE LEVIN, Rainbow Runners Hunting Akule, 2001.

“Great work tonight, Peter,” she said, wearily raising her glass to him. Julie always called him Delegate Murphy in public. But between the two of them, at the end of a long summer afternoon, it was just Peter.

He toasted and sat down beside her. Campaign staffers were clearing wilted chrysanthemums from the tables and stripping off plastic tablecloths. Peter and Julie looked across the lawn at the blue-gray Potomac as the sun began to dip in the sky.

“Listen,” he said. “I think I have an idea for a bill we could do.”

“On?”

“On menhaden.”

Julie put her drink down so quickly it sloshed onto the sticky tablecloth. She leaned forward in disbelief.

“We’ve got to try doing something about this,” Peter said.

Julie put her hand to her mouth and shook her head. “Menhaden reduction fishing has been banned in Maryland since 1931. Omega Protein is in Virginia. How could a bill in Maryland affect fishing there?”

“We don’t have any control over Virginia’s fishing industry, but we can control what’s sold in our state. I got to thinking: what if we introduced a bill that would stop the sale of products made with menhaden?”

“Do you think it would ever pass?” Julie asked.

“If we did a bill, it would first come before the Environmental Matters Committee. I think the chair of the committee would be amenable. At least we can put it out there and let people talk about it.”

Julie was overcome. He didn’t have to tell her how unusual this was. The impetus for new legislation didn’t often come from former interns—or from their fishermen neighbors.

“But I don’t know if we can win this on the environmental issues alone. What about the sport fishermen? Can we get them to come to the hearing?” Peter asked.

Julie began jotting notes on a napkin.

“Can you find out how many tourism dollars Maryland is losing because the striped bass are going hungry?”

“I’ll get in touch with the sport fishermen’s association and see if I can look up the numbers. And I’ll try to find out which companies are distributing products made from menhaden. It’s mostly fertilizer and animal feed. A little of it goes into fish oil pills, too.”

“The funny thing is, my own doctor told me to take fish oil pills a few years ago,” Peter said. He patted Julie’s shoulder and stood to wave to the last of his constituents as they disappeared down the driveway.

Doctors like Peter’s wouldn’t have recommended fish oil if a Danish doctor named Jörn Dyerberg hadn’t taken a trip across Greenland in 1970. Dyerberg and his colleagues, Hans Olaf Bang and Aase Brondum Nielsen, traveled from village to village by dogsled, poking inhabitants with syringes. They were trying to figure out why the Inuit had such a low incidence of heart disease despite eating mostly seal meat and fatty fish. Dyerberg and his team concluded that Inuit blood had a remarkably high concentration of certain types of polyunsaturated fatty acids, a finding that turned heads in the scientific community when it was published in The Lancet in 1971. The researchers argued that those polyunsaturated fatty acids originated in the fish that the Inuit ate and hypothesized that the fatty acids protected against cardiovascular disease. Those polyunsaturated fatty acids eventually came to be known as omega-3 fatty acids.

Other therapeutic properties of fish oil had been recognized long before Dyerberg’s expedition. During World War I, Edward and May Mellanby, a husband-and-wife team of nutrition scientists, found that it cured rickets, a crippling disease that had left generations of European and American children incapacitated, with soft bones, weak joints, and seizures. (The Mellanbys’ research was an improvement on the earlier work of Dr. Francis Glisson of Cambridge University, who, in 1650, advised that children with rickets should be tied up and hung from the ceiling to straighten their crooked limbs and improve their short statures.)

The Mellanbys tested their theories on animals instead of children. In their lab at King’s College for Women in London, in 1914, they raised a litter of puppies on nothing but oat porridge and watched each one come down with rickets. Several daily spoonfuls of cod liver oil reversed the rickets in a matter of weeks. Edward Mellanby was awarded a knighthood for their discovery. Although May had been an equal partner in the research, she wasn’t accorded the equivalent honor. A biochemist at the University of Wisconsin named Elmer McCollum read the Mellanbys’ research and isolated the anti-rachitic substance in the oil, which eventually came to be called vitamin D. McCollum had already isolated vitamin A in cod-liver oil, as well as vitamin B, which he later figured out was, in fact, a group of several substances. McCollum actually preferred the term “accessory food factor” rather than “vitamin.” He initially used letters instead of names because he hadn’t quite figured out the structures of the molecules he had isolated.


WAYNE LEVIN, School of Hellers Barracuda, 1999.

Soon, mothers were dosing their children daily with cod liver oil, a practice that continued for decades. Peter Murphy, who grew up in the 1950s, remembered being forced to swallow the stuff. The pale brown liquid stank like rotten fish, and he would struggle not to gag. Oil-filled capsules eventually supplanted the thick, foul liquid, and cheap menhaden replaced dwindling cod as the source of the oil. Meanwhile, following Dyerberg’s research into the Inuit diet, studies proliferated about the effects of omega-3 fatty acids—which originate in algae and travel up the food chain to forage fish like menhaden and on into the predator fish that eat them.

In 2002, the American Heart Association reviewed 119 of these studies and concluded that omega-3s could reduce the incidence of heart attack, stroke, and death in patients with heart disease. The AHA insisted omega-3s probably had no benefit for healthy people and suggested that eating fish, flax, walnuts, or other foods containing omega-3s was “preferable” to taking supplements. They warned that fish and fish oil pills could contain mercury, PCBs, dioxins, and other environmental contaminants. Nonetheless, they cautiously suggested that patients with heart disease “could consider supplements” in consultation with their doctors.

Americans did more than just “consider” supplements. In 2001, sales of fish oil pills were only $100 million. A 2009 Forbes story called fish oil “one supplement that works.” By 2011, sales topped $1.1 billion. Studies piled up suggesting that omega-3s and fish oil could do everything from reducing blood pressure and systemic inflammation to improving cognition, relieving depression, and even helping autistic children. Omega Protein was making most of its money turning menhaden into fertilizer and livestock feed for tasteless tilapia and factory-farmed chicken. But dietary supplements made for better public relations than animal feed. They put a friendlier, human face on the business, a face Peter and Julie were about to meet.

On a warm afternoon in March 2011, the twenty-four members of the Maryland House of Delegates Environmental Matters Committee filed into the legislature and took their seats. Delegate Murphy sat at the front of the room next to H. Bruce Franklin, author of The Most Important Fish in the Sea, who had traveled from New Jersey to testify at the hearing. Julie Vanderslice chose a spot in the packed gallery, with her neighbor Ray, who brought his copy of Franklin’s book in hopes of getting it signed. Julie brought her own copy, which she had bought already signed, but which she hoped Franklin would inscribe with a more personal message.

The Environmental Matters Committee was the first stop for Delegate Murphy’s legislation. The committee would either endorse the bill for review by the full House of Delegates, strike it down immediately, or send the bill limping back to Peter Murphy’s desk for further review—in which case, it might take years for menhaden to receive another audience with Maryland legislators. If the bill made it to the full House of Delegates, however, it might quickly be taken up for a vote before summer recess. If it passed the House, it was on to the Maryland Senate and, finally, to the Governor’s desk for signature before it became law. It could be voted down at any step along the way, and Julie knew there was a real chance the bill would never make it out of committee.

Julie had heard that Omega Protein’s lobbyists had been swarming the Capitol, taking dozens of meetings with delegates, and that the lobbyists had brought Omega Protein’s unionized fishermen with them. There was nothing like the threat of job loss to derail an environmental bill. Julie bit her thumb and surveyed the gallery.

To her right, a few rows of seats held recreational anglers, conservationists, and scientists whom Delegate Murphy’s legislative aide had invited to the hearing, but Julie didn’t see representatives from any of the region’s environmental organizations, such as the Chesapeake Bay Foundation or the League of Conservation Voters. Delegate Murphy had called Julie on a Sunday and asked her to request letters of support for the bill from those organizations. That type of outreach was not part of her job as district liaison, but she was happy to do it. While the organizations did support the bill in writing, none of them sent anyone to the hearing in person.

Instead, the seats were filled with fishermen from Omega Protein, who wore matching yellow shirts and sat quietly while the vice president of their local union, in a pinstripe suit, leaned over a row of chairs and spoke to them in a hushed voice. At the far side of the room, Candy Thomson, outdoors reporter at The Baltimore Sun, began jotting notes into her pad.

“We’re now going to move to House Bill 1142,” said Democratic Delegate Maggie McIntosh, chair of the Environmental Matters Committee.

As Delegate Murphy spoke, Julie shifted nervously in her seat. The legislators looked confused. She thought she saw one of them riffle through the stack of papers in front of him, as if to remind himself what a menhaden was. Julie wondered how many had even bothered to read the bill before the hearing. But Delegate Murphy knew the talking points backward and forward: the menhaden reduction industry had taken 47 billion pounds of menhaden out of the Atlantic Ocean since 1950. Omega Protein landed more fish, pound for pound, than any other operation in the continental United States. There had never been a limit on the amount of menhaden Omega Protein could legally fish using the pumps that vacuumed entire schools from the sea.

“This bill simply comes out and says that we as a state will no longer participate, regardless of the reason, in the decline of this fish,” he told the committee.

After Peter Murphy finished his opening statement, he and Bruce Franklin began taking questions. One of the delegates held up a letter from the Virginia State Senate. “It says that this industry goes back to the nineteenth century and that the plant this bill targets has been in operation for nearly a hundred years and that some employees are fourth-generation menhaden harvesters.” As she spoke, she paged through letters from the union that represented some of those harvesters and a list of products made from menhaden. “I don’t understand why we would interrupt an industry that has this kind of history, that will affect so many people. In this economy, I think this is the wrong time to take such a drastic approach to this issue.”

Delegate Murphy nodded. “We in Maryland, and particularly in Southern Maryland, grew tobacco for a lot longer than a hundred years,” he said, “but when we realized it was the wrong crop, and that it was killing people, we switched over to other alternatives. And we’re doing that to this day. What we’re saying with this is there are alternatives. You don’t have to fish this fish. This particular company, which happens to be in Virginia, does have alternatives to produce the same products.” He continued, “We have a company here in Maryland that produces the same omega-3 proteins and vitamins, and it uses algae. It grows and harvests algae. And that’s a sustainable resource.”

WAYNE LEVIN, Amberjacks Under a School of Akule, 2007.

WAYNE LEVIN, Great Barracuda Surrounded by Akule, 2002.

Another delegate, his hands clasped in front of him, addressed the chamber. “I’m sympathetic to saving this resource and to managing this resource appropriately,” he said. But, he explained, he had been contacted by one of his constituents, a grandmother whose grandson Austin suffered from what she called “a rare life-threatening illness.” Glancing down at his laptop, he began reading a letter from this worried grandmother. “There is a bill due to be discussed regarding the menhaden fish. These fish supply the omega oils so vital to the Omegaven product that supplies children like Austin with necessary fats through their IV lines. Many children would have died due to liver failure from traditional soy-based fats had these omega-3s in these fish not been discovered. Can you please contact someone from the powers that be in the Maryland government and tell them not to put an end to the use of these fish and their life-sustaining oils.” The delegate closed his laptop. “This is a question from one of my constituents on a life-threatening issue. Can one of the experts address that issue?”

Bruce Franklin tried to explain that there are other sources of omega-3 besides menhaden. Delegate Murphy stepped in and offered to amend the bill to exempt pharmaceutical-grade products. But it was too late. Less than an hour after it had begun, the hearing was over. Delegate Murphy withdrew the bill for “summer study” rather than see it voted down—a likely indicator that the bill would not resurface before the legislature anytime soon, if ever. Delegate McIntosh turned to the next bill on the day’s schedule, and Omega Protein’s spokesperson and lead scientist left the gallery, smiling.

Julie turned to Ray, who was sitting beside her, angrily gripping his copy of The Most Important Fish in the Sea. She wanted to console him but wasn’t sure how to begin. “Your fishermen buddies seem ready to riot in the streets,” Julie said uncertainly, gesturing at the anglers who were huddled together as they walked stiffly toward the foyer. “That story about the kid who’d die without his menhaden oil—that came out of nowhere.”

She looked again at the text of the bill. “A person may not manufacture, sell, or distribute a product or product component obtained from the reduction of an Atlantic menhaden.” It was exactly the kind of forward-thinking bill Maryland needed, and it would have sent a message to the other Atlantic states that menhaden were important enough to fight for. It had been her first real step toward making policy, but now she felt crushed by the legislature’s complete lack of will to preserve one of Maryland’s most significant natural resources. It seemed to her the delegates had acted without any attempt to understand the magnitude of the problem or the benefits of the proposed solution.

All Julie wanted to do was head back down to Cobb Island, stand on the dock, and feel the evening breeze on her face. Instead, she had to drive into the humid chaos of Washington, D.C., to spend two days sightseeing with her sister and her nephews. All weekend long, as her family traipsed from the Lincoln Memorial to the National Gallery to Ford’s Theater, she thought about what had gone wrong at the hearing. Had she and Delegate Murphy aimed too high with their bill? Did the committee members understand the complexity of the ecosystem that menhaden sustained? Even when the facts and figures are clear, sometimes a good story is too compelling. What politician could choose an oily fish over a sick child?

Barely a year after the Environmental Matters Committee hearing in Annapolis, the luster of fish oil pills began to fade. Back in 2010, environmental advocates Benson Chiles and Chris Manthey had tested fish oil supplements from a variety of manufacturers for toxic contaminants and found polychlorinated biphenyls, or PCBs, in many of the pills. PCBs, a group of compounds once widely used in coolant fluids and industrial lubricants, were banned in the 1970s because they decreased human liver function and caused skin ailments and liver and bile duct cancers. PCBs don’t easily break down in the environment; they remain in waterways like those that empty into the Chesapeake Bay, where they are absorbed by the algae and plankton eaten by fish like menhaden.

WAYNE LEVIN, Pattern of Akule, 2002.

WAYNE LEVIN, Akule Tornado, 2000.

The test results led Chiles and Manthey to file a lawsuit, under California’s Proposition 65, that named supplement manufacturers Omega Protein and Solgar as well as retailers like CVS and GNC for failing to provide adequate warnings to consumers that the fish oil pills they were swallowing with their morning coffee contained unsafe levels of PCBs. In February 2012, Chiles and Manthey reached a settlement with some manufacturers and the trade association that represents them, called the Global Organization for EPA and DHA Omega-3s (GOED), which agreed on higher safety standards for contaminants in fish oil pills.

Meanwhile, in July 2012, The New England Journal of Medicine published a study that assessed whether fish oil pills could help prevent cardiovascular disease in people with diabetes. The 6,281 diabetics in the study who took the pills had just as many heart attacks and strokes as those in the placebo group, and nearly the same number died. Were all those fish-scented burps for naught? A Forbes story asked: “Fish oil or snake oil?”

In September 2012, The Journal of the American Medical Association published even worse news. A team of Greek researchers had analyzed every previous study of omega-3 supplements and cardiovascular disease, and found that omega-3 supplementation did not save lives, prevent heart attacks, or prevent strokes. GOED, the fish oil trade association, was predictably displeased. Its executive director told a supplement industry trade journal, “Given the flawed design of this meta-analysis…, GOED disputes the findings and urges consumers to continue taking omega-3 products.” But the scientific evidence was mounting: not only were fish oil pills full of dangerous chemicals, but they probably weren’t doing much to prevent heart disease, either.

Why did these pills look so promising in 2001 and so much less so by 2012? The American Heart Association had always favored dietary sources of omega-3s, like fish and nuts, over pills. Jackie Bosch, a scientist at McMaster University and an author of The New England Journal of Medicine study, speculated that because people with diabetes and heart disease now take so many other medicines—statins, diuretics, ACE inhibitors, and handfuls of other pills—the effect of fish oil may be too marginal to show any measurable benefit.

Julie wasn’t surprised when she heard about the lawsuit. She knew menhaden could soak up chemical contaminants in the waterways. She read news reports about the recent studies on fish oil pills with interest and wondered whether they would give her and Delegate Murphy any ammunition for future efforts to limit the sale of menhaden products in their state. Neither had forgotten about the lowly menhaden.

Delegate Murphy had developed the habit of searching the dietary supplements aisle each time he went to the drugstore, turning the heavy bottles of fish oil capsules in his hands and reading the ingredients. None of the bottles ever listed menhaden. Despite the settlement in the California lawsuit, fish oil manufacturers were not required—and are still not required—to label the types of fish included in supplements, making it difficult for consumers to know whether the pills contain menhaden oil. But Delegate Murphy had made it clear he wasn’t ready to take up the menhaden issue again without a reasonable chance of success. Julie didn’t press him on his decision.

Then, in December 2012, increasing public pressure about the decline of menhaden finally led to a change. The Atlantic States Marine Fisheries Commission voted to reduce the harvest of menhaden by 20 percent from previous levels, a regulation that would go into effect during the 2013 fishing season. It was the first time any restriction had been placed on the menhaden industry’s operations in the Atlantic, although the cut was far less severe than independent scientists had recommended. To safeguard the menhaden’s ability to spawn without undue pressure from the industry’s pumps and nets, scientists had advised reducing the harvest by 50 to 75 percent from current catch levels. Delegate Murphy and Julie knew 20 percent wasn’t nearly enough to bring the menhaden stocks back up to support the health of the Bay. But it was a start. They liked to think their bill had moved the conversation forward a little bit.

That Christmas, down on Cobb Island, Julie was putting stamps on envelopes for her family’s annual holiday recipe exchange. She addressed one to her brother Jerry in Arkansas. He didn’t usually come back east for the holidays, preferring to fly home in the summer when his sons could fish for croakers off the dock that ran out into the Wicomico River behind Julie’s house. Jerry worked for Tyson Foods, selling chicken to restaurant chains. Julie had asked him once if Tyson fed their chickens with menhaden meal, and Jerry had admitted he wasn’t sure. Whatever the factory-farmed chickens ate, Julie wasn’t taking any chances. After the hearing on the menhaden bill, she became a vegetarian. For Christmas, she was sending her family recipes for eggless egg salad and an easy bean soup.

When she finished sealing the last envelope, Julie pulled on a turtleneck sweater and grabbed her winter coat for the short walk up to the post office. The sky was a pale, dull gray, and it smelled of snow. She had recently read Omega Protein’s latest report to its investors, and as she trudged slowly toward Cobb Island Road, a word from the text popped into her mind. Company executives had repeatedly made the point that Omega Protein was “diversifying.” They had purchased a California-based dietary supplement supplier that sourced pills that didn’t use fish products. They had begun talking about proteins that could be extracted from dairy and turned into nutritional capsules. Could it be that Omega Protein had begun to see the writing on the wall? Maybe they were starting to realize that the menhaden supply was not unlimited—and that advocates like Julie wouldn’t let them take every last one.

As she passed the Cobb Island pier, a few seagulls were circling mesh crab traps that had been abandoned on the dock—traps that brimmed with blue crabs in the summertime. Julie pulled her coat closer around her against the chill. She thought ahead to the summer months, when the traps would be baited with menhaden and checked every few hours by local families, and the ice cream parlor would open to serve the seasonal tourists. By the end of summer, Omega Protein would be winding down its fishing season, and the company would likely have 20 percent less fish in its industrial-sized cookers than it did the year before. Would that be enough to help the striped bass and the osprey and the humpback whales? Julie wondered. And the thousands of fishermen whose livelihoods depended upon pulling healthy fish from the Chesapeake Bay? And the families up and down the coast who brought those fish home to eat?

Julie had done a lot of waiting in her time. She had waited her whole life to find a job like the one she had with Delegate Murphy. She had waited for the delegate to get excited about the menhaden. When their bill failed, she had waited for the ASMFC to pass regulations protecting the menhaden. Now she would have to wait a little longer to find out whether the ASMFC’s first effort at limiting the fishery would enable the menhaden population to recover. But there are two kinds of waiting, Julie thought. There’s the kind where you have no agency, and then there’s the kind where you are at the edge of your seat, ready to act at a moment’s notice. Julie felt she could act. And so could Ray, Delegate Murphy, Bruce Franklin, and the sport fishermen, who now cared even more about the oily little menhaden. For now, at least until the end of the fishing season, that had to be enough. They would just have to wait and see.

David Schleifer (david.schleifer@gmail.com) is a senior research associate at Public Agenda, a nonpartisan, nonprofit research and engagement organization. Alison Fairbrother (alison@publictrustproject.org) is the executive director of the Public Trust Project.

Editor’s Journal: Telling Stories

KEVIN FINNERAN

“The universe is made of stories, not of atoms,” Muriel Rukeyser wrote in her poem “The Speed of Darkness.” Good stories are not merely collections of individual events; they are a means of expressing ideas in concrete terms at human scale. They have the ability to accomplish the apparently simple but rarely achieved task of seamlessly linking the general with the specific, of giving ideas flesh and blood.

This edition of Issues includes three articles that use narrative structure to address important science and technology policy topics. They are the product of a program at Arizona State University that was directed by writer and teacher Lee Gutkind and funded by the National Science Foundation. Launched in 2010, the Think, Write, Publish program began by assembling two dozen young writers and scientists/engineers to work in teams to prepare articles that use a narrative approach to engage readers in an S&T topic. Lee organized a training program that included several workshops and opportunities to meet with editors from major magazines and book publishers. Several of the writer/expert teams prepared articles that were published in Issues: Mary Lee Sethi and Adam Briggle on the federal Bioethics Commission, Jennifer Liu and Deborah Gardner on the global dimension of medicine and ethics, Gwen Ottinger and Rachel Zurer on environmental monitoring, and Ross Carper and Sonja Schmid on small modular nuclear reactors.

Encouraged by the enthusiasm for the initial experiment, the organizers decided to do it again. A second cohort, again composed of 12 scholars and 12 writers, was selected in 2013. They participated in two week-long workshops. At the first meeting, teams were formed, guest editors and writers offered advice, Lee and his team provided training, and the teams began their work. Six months later the teams returned for a second week-long workshop, during which they worked intensively on revising and refining the drafts they had prepared. They also received advice from some of the participants from the first cohort.

They learned that policy debates do not lend themselves easily to narrative treatments, that collaborative writing is difficult, that professional writers and scholars approach the task of writing very differently and have sometimes conflicting criteria for good writing. But they persisted, and now we are proud to present three of the articles that emerged from the effort. Additional articles written by the teams can be found at http://thinkwritepublish.org/.

These young authors are trailblazers in the quest to find a way to make the public more informed and more engaged participants in science, technology, and health policy debates. They recognize narrative as a way to ground and humanize discussions that are too often conducted in abstract and erudite terms. We know that the outcomes of these debates have results that are anything but abstract, and that it is essential that people from all corners and levels of society participate. Effective stories that inform and engage readers can be a valuable means of expanding participation in science policy development. If you want to see how, you can begin reading on the next page.

From the Hill

Details of administration’s proposed FY2015 budget

Officially released March 4, President Obama’s FY2015 budget makes clear the challenges for R&D support currently posed by the Budget Control Act spending caps. With hardly any additional room available in the discretionary budget above FY 2014 levels, and with three-quarters of the post-sequester spending reductions still in place overall, many agency R&D budgets remain essentially constant. Some R&D areas such as climate research and support for fundamental science that have been featured in past budgets did not make much fiscal headway in this year’s request. Nevertheless, the administration has managed to shift some additional funding to select programs such as renewable energy and energy efficiency, advanced manufacturing, and technology for infrastructure and transportation.

An added twist, however, is the inclusion of $5.3 billion in additional R&D spending above and beyond the current discretionary caps that is part of what the administration calls the Opportunity, Growth, and Security Initiative (OGSI). This extra funding would make a significant difference for science and innovation funding throughout government. Congress, however, has shown little interest in embracing it.

Without the OGSI, the president’s proposed FY2015 budget includes a small reduction in R&D funding in constant dollars. Current AAAS estimates place R&D in the president’s request at $136.5 billion (see Table 1). This represents a 0.7% increase above FY 2014 levels but is actually a slight decrease once the projected 1.7% inflation rate is considered. It also represents a 3.8% increase above FY 2013 post-sequester funding levels, but after accounting for inflation the total R&D budget is almost unchanged from FY 2013.
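
The constant-dollar comparison above is simple arithmetic. The sketch below is a rough illustration rather than an AAAS calculation: it applies the projected 1.7% GDP deflator to the $136.5 billion request and derives the implied FY 2014 base from the 0.7% nominal growth figure, so the intermediate numbers are approximations.

```python
# Illustrative arithmetic only (not an AAAS calculation): convert the nominal
# FY 2015 R&D request into constant FY 2014 dollars using the 1.7% GDP
# deflator cited in the text.

FY2015_REQUEST = 136.5   # billions of dollars, AAAS estimate of the request
NOMINAL_GROWTH = 0.007   # 0.7% nominal increase over FY 2014
INFLATION = 0.017        # projected GDP inflation, FY 2014 -> FY 2015

fy2014_base = FY2015_REQUEST / (1 + NOMINAL_GROWTH)   # implied base, ~135.6
fy2015_real = FY2015_REQUEST / (1 + INFLATION)        # request in FY 2014 dollars, ~134.2

real_change = (fy2015_real - fy2014_base) / fy2014_base
print(f"Implied FY 2014 base:             ~${fy2014_base:.1f} billion")
print(f"FY 2015 request, FY 2014 dollars: ~${fy2015_real:.1f} billion")
print(f"Real change:                      {real_change:+.1%}")   # about -1%
```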

Defense R&D, most of it at the Department of Defense (DOD), is proposed at $70.8 billion, or 0.3% above FY 2014 levels; boosts in R&D at the National Nuclear Security Administration (NNSA) offset cuts in other defense R&D programs. Nondefense R&D is proposed at $65.7 billion, a 1.2% increase above FY 2014 levels.

Total research funding, which includes basic and applied research, would fall to $65.9 billion, a cut of $1.1 billion or 1.7% below FY 2014 levels, and only about 1.1% above FY 2013 post-sequester levels after inflation. This is in large part due to cuts in defense and National Aeronautics and Space Administration (NASA) research activities, though some NASA research has also been reclassified as development, which pushes the number lower without necessarily reflecting a change in the actual work.

Conversely, development activities would increase by $2.1 billion, or 3.2%, driven by increases at DOD, NASA, and the Department of Energy (DOE).

The $56 billion OGSI would include $5.3 billion for R&D, which would mean a 4.6% increase in total R&D above FY 2014.

R&D spending should be understood in the larger context of the federal budget. Discretionary spending (everything except mandatory programs such as Medicare, Medicaid, and Social Security) has shrunk to 30.4% of the budget and is projected to reach 24.6% in 2019. R&D outlays as a share of the budget would drop to 3.4%, a 50-year low.

Under the president’s proposal, only a few agency R&D budgets, including those at DOE, the U.S. Geological Survey (USGS), the National Institute of Standards and Technology (NIST), and the Department of Transportation (DOT), would stay ahead of inflation. Many agencies would nonetheless remain above sequester levels, and the total R&D request increased more than the average for discretionary spending.

“From the Hill” is adapted from the newsletter Science and Technology in Congress, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

TABLE 1

R&D in the FY 2015 budget by agency (budget authority in millions of dollars)

Source: OMB R&D data, agency budget justifications, and agency budget documents. Does not include Opportunity, Growth, and Security Initiative funding (see Table II-20). Note: The projected GDP inflation rate between FY 2014 and FY 2015 is 1.7 percent. All figures are rounded to the nearest million. Changes calculated from unrounded figures.

At DOE, the energy efficiency, renewable energy, and grid technology programs are marked for significant increases, as is the Advanced Research Projects Agency-Energy (ARPA-E); the Office of Science is essentially the same; and nuclear and fossil energy technology programs are reduced.

The proposed budget includes an increase of more than 20% for NASA’s Space Technology Directorate, which seeks rapid public-private technology development. Cuts are proposed in development funding for the next-generation crew vehicle and launch system.

Department of Agriculture extramural research would receive a large increase even as the agency’s intramural research funding is trimmed, though significantly more funding for both is contained within the OGSI.

The DOD science & technology budget, which includes basic and applied research, advanced technology development, and medical research funded through the Defense Health Program, would be cut by $1.4 billion or 10.3% below FY 2014 levels. A 57.8% cut in medical research is proposed, but Congress is likely to restore much of this funding, as it has in the past. The Defense Advanced Research Projects Agency is slated for a small increase.

The National Institutes of Health (NIH) would continue on a downward course. The president’s request would leave the NIH budget about $4.1 billion in constant dollars, or 12.5%, below the FY 2004 peak. Among the few areas seeing increased funding at NIH would be translational science, neuroscience and the BRAIN Initiative, and mental health. The additional OGSI funding would nearly, but not quite, return the NIH budget to pre-sequestration levels.

The apparently large cut in Department of Homeland Security (DHS) R&D funding is primarily explained by the reduction in funding for construction of the National Bio and Agro-Defense Facility, a Biosafety Level 4 facility in Kansas. Other DHS R&D activities would be cut a little, and the Domestic Nuclear Detection Office would receive a funding increase.

One bright note in this constrained fiscal environment is that R&D spending fared better than average in the discretionary budget. Looking ahead, however, there is much more cause for concern. Unless Congress takes action, the overall discretionary budget will return to sequester levels in FY2016 and remain there for the rest of the decade.

In brief

  • On April 24, the National Science Board issued a statement articulating concerns over some portions of the Frontiers in Innovation, Research, Science, and Technology Act (FIRST Act; H.R. 4186) which would reauthorize funding for NSF, among other things. The board expressed its “greatest” concern that “the bill’s specification of budget allocations to each NSF Directorate would significantly impede NSF’s flexibility to deploy its funds to support the best ideas in fulfillment of its mission.”
  • On April 28, the House passed the Digital Accountability and Transparency Act (S. 994; also known as the DATA Act), sending the bill to President Obama for signature. The bill seeks to improve the “availability, accuracy, and usefulness” of federal spending information by setting standards for reporting government spending on contracts, grants, etc. The legislation would also require that the Office of Management and Budget develop a two-year pilot program to evaluate reporting by recipients of federal grants and contracts and to reduce duplicative reporting requirements.
  • On April 22, the U.S. Supreme Court ruled to uphold the state of Michigan’s ban on using race as a factor in admissions for higher education institutions. In a 6-2 ruling, the Court determined that it is not in violation of the U.S. Constitution for states to prohibit public colleges and universities from using forms of racial preferences in admissions. In his opinion, Justice Anthony M. Kennedy stated: “This case is not about how the debate about racial preferences should be resolved. It is about who may resolve it. There is no authority in the Constitution of the United States or in this court’s precedents for the judiciary to set aside Michigan laws that commit this policy determination to the voters.”
  • On March 27, Senate Judiciary Committee Chairman Patrick Leahy (D-VT) and Senator John Cornyn (R-TX) introduced legislation on forensic science. The Criminal Justice and Forensic Science Reform Act (S. 2177) “promotes national accreditation and certification standards and stronger oversight for forensic labs and practitioners, as well as the development of best practices and a national forensic science research strategy.” The bill would create an Office of Forensic Science within the Office of the Deputy Attorney General at the Department of Justice and would also require that the office coordinate with NIST. It would require that forensic science personnel who work in laboratories that receive federal funding be certified in their fields and that all forensic science labs that receive federal funding be accredited according to standards set by a Forensic Science Board.

How Hurricane Sandy Tamed the Bureaucracy

ADAM PARRIS

A practical story of making science useful for society, with lessons destined to grow in importance.

Remember Hurricane Irene? It pushed across New England in August 2011, leaving a trail of at least 45 deaths and some $7 billion in damages. But just over a year later, even before the last rural bridge had been rebuilt, Hurricane Sandy plowed into the New Jersey–New York coast, grabbing the national spotlight with its even greater toll of death and destruction. And once again, the region—and the nation—swung into rebuild mode.

Certainly, some rebuilding after such storms will always be necessary. However, this one-two punch underscored a pervasive and corrosive aspect of our society: We have rarely taken the time to reflect on how best to rebuild developed areas before the next crisis occurs, instead committing to a disaster-by-disaster approach to rebuilding.

Yet Sandy seems to have been enough of a shock to stimulate some creative thinking at both the federal and regional levels about how to break the cycle of response and recovery that developed communities have adopted as their default survival strategy. I have witnessed this firsthand as part of a team that designed a decision tool called the Sea Level Rise Tool for Sandy Recovery, to support not just recovery from Sandy but preparedness for future events. The story that has emerged from this experience may contain some useful lessons about how science and research can best support important social decisions about our built environment. Such lessons are likely to grow in importance as climate change makes extreme weather events more frequent and severe.

A story of cooperation

In the wake of Sandy, pressure mounted at all levels, from local to federal, to address one question: How would we rebuild? This question obviously has many dimensions, but one policy context cuts across them all. The National Flood Insurance Program provides information on flood risk that developers, property owners, and city and state governments are required to use in determining how to build and rebuild. Run by the Federal Emergency Management Agency (FEMA), the program provides information on the height of floodwaters, known as flood elevations, that can be used to delineate on a map where it is more or less risky to build. Flood elevations are calculated based on analysis of how water moves over land during storms of varying intensity, essentially comparing the expected elevation of the water surface to that of dry land. FEMA then uses this information to create flood insurance rate maps, and insurers use the maps to determine the cost of insurance in flood-prone areas. The cost of insurance and the risk of flooding are major factors for individuals and communities in determining how high to build structures and where to locate them to avoid serious damage during floods.

But here’s the challenge that our team faced after Sandy. The flood insurance program provided information on flood risk based only on conditions in past events, and not on conditions that may occur tomorrow. Yet coastlines are dynamic. Beaches, wetlands, and barrier islands all change in response to waves and tides. These natural features shift, even as the seawalls and levees that society builds to keep communities safe are designed to stay in place. In fact, seawalls and levees add to the complexity of the coastal environment and lead to new and different changes in coastal features. The U.S. Army Corps of Engineers implements major capital works, including flood protection and beach nourishment, to manage these dynamic features. The National Oceanic and Atmospheric Administration (NOAA) helps communities manage the coastal zone to preserve the amenities we have come to value on the coast: commerce, transportation, recreation, and healthy ecosystems, among others. And both agencies have long been doing research on another major factor of change for coastlines around the world: sea-level rise.

Any amount of sea-level rise, even an inch or two, increases the elevation of floodwaters for a given storm. Estimates of future sea-level rise are therefore a critical area of research. As Sandy approached, experts from NOAA and the Army Corps, other federal agencies, and several universities were completing a report synthesizing the state of the science on historic and future sea-level rise. The report, produced as part of a periodic updating of the National Climate Assessment, identified scenarios (plausible estimates) of global sea-level rise by the end of this century. Coupled with the best available flood elevations, the sea-level rise scenarios could help those responsible for planning and developing in coastal communities factor future risks into their decisions. This scenario-planning approach underscores a very practical element of risk management: If there’s a strong possibility of additional risk in the future, factor that into decisions today.

Few people would argue with taking steps to avoid future risk. But making this happen is not as easy as it sounds. FEMA has to gradually incorporate future flood risk information into the regulatory program even as the agency modernizes existing flood elevations and maps. The program dates back to 1968, and much of the information on flood elevations is well over 10 years old. We now have newer information on past events, more precise measurements on the elevation of land surfaces, and better understanding of how to model and map the behavior of floodwaters. We also have new technologies for providing the information via the Internet in a more visually compelling and user-specific manner. Flood elevations and flood insurance rate maps have to be updated for thousands of communities across the nation. When events like Sandy happen, FEMA issues “advisory” flood elevations to provide updated and improved information to the affected areas even if the regulatory maps are not finalized. However, neither the updated maps nor the advisory elevations have traditionally incorporated sea-level rise.

Only in 2012 did Congress pass legislation—the Biggert-Waters Flood Insurance Reform Act—authorizing FEMA to factor sea-level rise into flood elevations provided by the flood insurance program, so the agency has had little opportunity to accomplish this for most of the nation. Right now, people could be rebuilding structures with substantially more near-term risk of coastal flooding because they are using flood elevations that do not account for sea-level rise.

Of course, reacting to any additional flood risk resulting from higher sea levels might entail the immediate costs of building higher, stronger, or in a different location altogether. But such short-term costs are counterbalanced by the long-term benefits of health and safety and a smaller investment in maintenance, repair, and rebuilding in the wake of a disaster. So how does the federal government provide legitimate science—science that is seen by decisionmakers as reliable and legitimate—regarding future flood risk to affected communities? And how might it create incentives, financial and otherwise, for adopting additional risk factors that may mean up-front costs in return for major long-term gains?

After Sandy, leaders of government locally and nationally were quick to recognize these challenges. President Barack Obama established a Hurricane Sandy Rebuilding Task Force. Governor Andrew Cuomo of New York established several expert committees to help develop statewide plans for recovery and rebuilding. Governor Chris Christie of New Jersey was quick to encourage higher minimum standards for rebuilding by adding 1 foot to FEMA’s advisory flood elevations. And New York City Mayor Michael Bloomberg created the Special Initiative on Risk and Resilience, connected directly to the city’s long-term planning efforts and to an expert panel on climate change, to build the scientific foundation for local recovery strategies.

The leadership and composition of the groups established by the president and the mayor were particularly notable and distinct from conventional efforts. They brought expertise and an emphasis that focused as strongly on preparedness for a future likely to look different from the present as on responding to the disaster itself. For example, the president’s choice of Shaun Donovan, secretary of the Department of Housing and Urban Development (HUD), to chair the federal task force implicitly signaled a new focus on ensuring that urban systems will be resilient in the face of future risks.

New York City’s efforts have been exemplary in this regard. The organizational details are complex, but there is one especially crucial part of the story that I want to tell. When Mayor Bloomberg created the initiative on risk and resilience, he also reconvened the New York City Panel on Climate Change (known locally as the NPCC), which had been established in 2008 to support the formulation of a long-term comprehensive development and sustainability plan, called PlaNYC. All of these efforts, which were connected directly to the Mayor’s Office of Long-Term Planning and Sustainability, were meant to be forward-looking and to integrate contributions from experts in planning, science, management, and response.

Tying the response to Sandy to the city’s varied efforts signaled a new approach to post-disaster development that embraced long-term resilience: the capacity to be prepared for an uncertain future. In particular, the NPCC’s role was to ensure that the evolving vulnerabilities presented by climate change would play an integral part in thinking about New York in the post-Sandy era. To this end, in September 2012, the City Council of New York codified the operations of the NPCC into the city’s charter, calling for periodic updates of the climate science information. Of course, science-based groups such as the climate panel should be valuable for communities and decisionmakers thinking about resilience and preparedness, but often they are ignored. Thus, another essential aspect of New York’s approach was that the climate panel was not just a bunch of experts speaking from a pulpit of scientific authority, but it also had members representing local and state government working as full partners.

Within NOAA, there are programs designed to improve decisions on how to build resilience into society, given the complex and uncertain interactions of a changing society and a changing environment. These programs routinely encourage engagement among different scales and sectors of government and resource management. For example, NOAA’s Regional Integrated Sciences and Assessments (RISA) program provides funding for experts to participate in New York’s climate panel to develop risk information that informs both the response to Sandy and the conceptual framework for adaptively managing long-term risk within PlaNYC. Through its Coastal Services Center, NOAA also provides scientific tools and planning support for coastal communities facing real-time challenges. When Sandy occurred, the center offered staff support to FEMA’s field offices, which served as the local hubs for emergency management and disaster relief. Such collaboration among the RISA experts, the center staff, and the FEMA field offices fostered social relations that allowed for coordination in developing the Sea Level Rise Tool for Sandy Recovery.

In still other efforts, representatives of the president’s Hurricane Sandy Rebuilding Task Force and the Council on Environmental Quality were working with state and local leaders, including staff from New York City’s risk and resilience initiative. The leaders of the New York initiative were working with representatives of NOAA’s RISA program, as well as with experts on the NPCC who had participated in producing the latest sea-level rise scenarios for the National Climate Assessment. The Army Corps participated in the president’s task force and also contributed to the sea-level rise scenarios report. This complex organizational ecology also helped create a social network among professionals in science, policy, and management charged with building a tool that can identify the best available science on sea-level rise and coastal flooding to support recovery for the region.

Before moving on to the sea-level rise tool itself, I want to point out important dimensions of this social network and the context that facilitated such complex organizational coordination. Sandy presented a problem that motivated people in various communities of practice to work with each other. We all knew each other, wanted to help recovery efforts, and understood the limitations of the flood insurance program. In the absence of events such as Sandy, it is difficult to find such motivating factors; everyone is busy with his or her day-to-day responsibilities. Disaster drew people out of their daily routines with a common and urgent purpose. Moreover, programs such as RISA have been doing research not just to provide information on current and future risks associated with climate, but also to understand and improve the processes by which scientific research can generate knowledge that is both useful and actually used. Research on integrated problems and management across institutions and sectors is undervalued; how best to organize and manage such research is poorly understood in the federal government. Those working on this problem themselves constitute a growing community of practice.

Communities need to be able to develop long-term planning initiatives, such as New York’s PlaNYC, that are supported by bodies such as the city’s climate change panel. In order to do so, they have to establish networks of experts with whom they can develop, discuss, and jointly produce knowledge that draws on relevant and usable scientific information. But not all communities have the resources of New York City or the political capacity to embrace climate hazards. If the federal government wishes to support other communities in better preparing people for future disasters, it will have to support the appropriate organizational arrangements—especially those that can bridge boundaries between science, planning, and management.

Rising to the challenges

For more than two decades, the scientific evidence has been strong enough to enable estimates of sea-level rise to be factored into planning and management decisions. For example, NOAA maintains water-level stations (often referred to as tide gages) that document sea-level change, and over the past 30 years, 88% of the 128 stations in operation have recorded a rise in sea level. Based on such information, the National Research Council published a report in 1987 estimating that sea level would rise between 0.5 and 1.5 meters by 2100. More recent estimates suggest it could be even higher.

Of course, many coastal communities have long been acutely aware of the gradual encroachment of the sea on beaches and estuaries, and the ways in which hurricanes and tropical storms can remake the coastal landscape. So, why is it so hard to decide on a scientific basis for incorporating future flood risk into coastal management and development?

For one thing, sea-level rise is different from coastal flooding, and the science pertaining to each is evolving somewhat independently. Researchers worldwide are analyzing the different processes that contribute to sea-level rise. They are thinking about, among other things, how the oceans will expand as they absorb heat from the atmosphere; about how quickly ice sheets will melt and disintegrate in response to increasing global temperature, thereby adding volume to the oceans; and about regional and local processes that cause changes in the elevation of the land surface independent of changes in ocean volume. Scientists are experimenting, and they cannot always experiment together. They have to isolate questions about the different components of the Earth system to be able to test different assumptions, and it is not an easy task to put the information back together again. This task of synthesizing knowledge from various disciplines and even within closely related disciplines requires interdisciplinary assessments.

The sea-level rise scenarios that our team used in designing the Sandy tool derived from the National Climate Assessment, which is prepared for Congress every four years to synthesize and summarize the state of the climate and its impacts on society, and they varied greatly. The scenarios were based on expert judgments from the scientific literature by a diverse team drawn from the fields of climate science, oceanography, geology, engineering, political science, and coastal management, and representing six federal agencies, four universities, and one local resource management organization. The scenarios report provided a range of 8 inches to 6.6 feet of global sea-level rise by the end of the century. (One main reason for such different projections is the current inadequate understanding of the rate at which the ice sheets in Greenland and Antarctica are melting and disintegrating in response to increasing air temperature.) The scenarios were aimed at two audiences: regional and local experts who are charged with addressing variations in sea-level change at specific locations, and national policymakers who are considering potential impacts beyond any individual community, city, or even state.

But didn’t the experts who prepared the scenarios simply add to policymakers’ uncertainty about the future by presenting such a broad range of sea-level rise estimates? The authors addressed this possible concern by associating risk tolerance—the amount of risk one would be willing to accept for a particular decision—with each scenario. For example, they said that anyone choosing to use the lowest scenario is accepting a lot of risk, because there is a wealth of evidence and agreement among experts that sea-level rise will exceed this estimate by the end of the century unless (and possibly even if) aggressive global emissions reduction measures are taken immediately. On the other hand, they said that anyone choosing to use the highest scenario is using great caution, because there is currently less evidence to support sea-level rise of this magnitude by the end of the century (although it may rise to such levels in the more distant future).

Thus, urban planners may want to consider higher scenarios of sea-level rise, even if they are less likely, because this approach will enable them to analyze and prepare for risks in an uncertain future. High sea-level rise scenarios may even provide additional factors of safety, particularly where the consequences of coastal flood events threaten human health, human safety, or critical infrastructure—or perhaps all three. The most likely answer might not always be the best answer for minimizing, preparing for, or avoiding risk. Framing the scenarios in this fashion helps avoid any misperceptions about exaggerating risk. But more importantly, it supports deliberation in planning and making policy about the basis for setting standards and policies and for designing new projects in the coastal zone. The emphasis shifts to choices about how much or how little risk to accept.

In contrast to the scenarios developed for the National Climate Assessment, the estimates made by the New York City climate panel addressed regional and local variations in sea-level rise and were customized to support design and rebuilding decisions in the city that respond to risks over the next 25 to 45 years. They were developed after Sandy by integrating scientific findings published just the previous year—after the national scenarios report was released. The estimates were created using a combination of 24 state-of-the-art global climate models, observed local data, and expert judgment. Each climate model can be thought of as an experiment that includes different assumptions about global-scale processes in the Earth system (such as changes in the atmosphere). As with the national scenarios report, then, the collection of models provides a range of estimates of sea-level rise that in total convey a sense of the uncertainties. The New York City climate panel held numerous meetings throughout the spring of 2013 to discuss the model projections and to frame its own statements about the implications of the results for future risks to the city arising from sea-level rise (e.g., changes in the frequency of coastal flooding). These meetings were attended not only by physical and social scientists but also by decisionmakers facing choices at all stages of the Sandy rebuilding process, from planning to design to engineering and construction.

As our team developed the sea-level rise tool, we found minimal difference between the models used by the New York climate panel and the nationally produced scenarios. At most, the extreme national scenarios and the high-end New York projections were separated by 3 inches, and the intermediate scenarios and the mean model values were separated by 2 inches. This discrepancy is well within the limits of accuracy reflected in current knowledge of future sea-level rise. But small discrepancies can make a big difference in planning and policymaking.

New York State regulators evaluating projects proposed by organizations that manage critical infrastructure, such as power plants and wastewater treatment facilities, look to science vetted by the federal government as a basis for approving new or rebuilt infrastructure. Might the discrepancies between the scenarios produced for the National Climate Assessment and the projections made by the NPCC, however small, cause regulators to question the scientific and engineering basis for including future sea-level rise in their project evaluations? Concerned about this prospect, the New York City Mayor’s Office wanted the tool to use only the projections of its own climate panel.

The complications didn’t stop there. In April 2013, HUD Secretary Donovan announced a Federal Flood Risk Reduction Standard, developed by the Hurricane Sandy Rebuilding Task Force, for federal agencies to use in their rebuilding and recovery efforts in the regions affected by Sandy. The standard added 1 foot to the advisory flood elevations provided by the flood insurance program. Up to that point, our development team had been working in fairly confidential settings, but now we had to consider additional questions. Would the tool be used to address regulatory requirements of the flood insurance program? Why use the tool instead of the advisory elevations or the Federal Flood Risk Reduction Standard? How should decisionmakers deal with any differences between the 1-foot advisory elevation and the information conveyed by the tool? We spent the next two months addressing these questions and potential confusion over different sets of information about current and future flood risk.

Our team—drawn from NOAA, the Army Corps, FEMA, and the U.S. Global Change Research Program—released the tool in June 2013. It provides both interactive maps depicting flood-prone areas and calculators for estimating future flood elevations, all under different scenarios of sea-level rise. Between the time of Secretary Donovan’s announcement and the release of the tool, the team worked extensively with representatives from FEMA field offices, the New York City climate panel, the New York City Mayor’s Office, and the New York and New Jersey governors’ offices to ensure that the choices about the underlying scientific information were well understood and clearly communicated. The social connections were again critical in convening the right people from the various levels of government and the scientific and practitioner communities.
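
To give a feel for what such a calculator does, here is a minimal sketch, assuming the core operation is additive in the way the advisory elevations and the 1-foot Federal Flood Risk Reduction Standard suggest: a base flood elevation plus a chosen sea-level rise scenario plus any extra freeboard. Everything in it, from the function name to the example numbers, is hypothetical rather than drawn from the tool itself.

```python
# Hypothetical sketch of the arithmetic behind a future-flood-elevation
# calculator. The function name and numbers are illustrative; the actual
# Sea Level Rise Tool handles mapping, vertical datums, and scenario
# selection in far more detail.

def future_flood_elevation(advisory_bfe_ft: float,
                           sea_level_rise_ft: float,
                           freeboard_ft: float = 0.0) -> float:
    """Estimate a design flood elevation in feet above the local datum.

    advisory_bfe_ft   -- FEMA advisory base flood elevation at the site
    sea_level_rise_ft -- chosen sea-level rise scenario (e.g., an NPCC estimate)
    freeboard_ft      -- optional safety margin, such as the 1-foot Federal
                         Flood Risk Reduction Standard
    """
    return advisory_bfe_ft + sea_level_rise_ft + freeboard_ft

# Example: a 10-foot advisory elevation, a 2.5-foot mid-century scenario,
# and 1 foot of freeboard yields a 13.5-foot design elevation.
print(future_flood_elevation(10.0, 2.5, 1.0))
```

The arithmetic is trivial by design; as the article emphasizes, the substantive judgment lies in which sea-level rise scenario, and therefore how much risk tolerance, a decisionmaker chooses.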

During this period, the team made key changes in how the tool presented information. For example, the Hurricane Sandy Rebuilding Task Force approved the integration of sea-level rise estimates from the New York climate panel into the tool, providing a federal seal of approval that could give state regulators confidence in the science. This decision also helped address the minimal discrepancies between the long-term scenarios of sea-level rise made for the National Climate Assessment and the shorter-term estimates made by the New York climate panel. The President’s Office of Science and Technology Policy also approved expanding access to the tool via a page on the Global Change Research Program’s Web site [http://www.globalchange.gov/what-we-do/assessment/coastal-resilience-resources]. This access point helped distinguish the tool as an interagency product separate from the National Flood Insurance Program, thus making clear that its use was advisory, not mandated by regulation. Supporting materials on the Web site (including frequently asked questions, metadata, planning context, and disclaimers, among others) provided background detail for various user communities and also helped to make clear that the New York climate panel sea-level rise estimates were developed through a legitimate and transparent scientific process.

The process of making the tool useful for decisionmakers involved diverse players in the Sandy recovery story discussing different ideas about how people and organizations were considering risk in their rebuilding decisions. For example, our development team briefed a diverse set of decisionmakers in the New York and New Jersey governments to facilitate deliberations about current and future risk. Our decision to use the New York City climate panel estimates in the tool helped to change the recovery and rebuilding process from past- to future-oriented, not only because the science was of good quality but because integration of the panel’s numbers into the tool brought federal, state, and city experts and decisionmakers together, while alleviating the concerns of state regulators about small discrepancies between different sea-level rise estimates.

In 2013, New York City testified in a rate case (the process by which public utilities set rates for consumers) and called for Con Edison (the city’s electric utility) and the Public Service Commission to ensure that near-term investments are made to fortify utility infrastructure assets. Con Edison has planned for $1 billion in resiliency investments that address future risk posed by climate change. As part of this effort, the utility has adopted a design criterion of the 100-year flood elevations from FEMA’s flood insurance rate maps plus 3 feet, to account for a high-end estimate of sea-level rise by mid-century. This marked the first time in the country that a rate case explicitly incorporated consideration of climate change.

New York City also passed 16 local laws in 2013 to improve building codes in the floodplain, to protect against future risk of flooding, high winds, and prolonged power outages. For example, Local Law 96/2013 adopted FEMA’s updated flood insurance rate maps with additional safety standards for some single-family homes, based on sea-level rise as projected by the NPCC.

Our development team would never have known about New York City’s need to develop a rate case with federally vetted information on future risk if we had not worked with officials from the city’s planning department. Engaging city and state government officials was useful not just for improving the clarity and purpose of the information in the tool. It was also useful for choosing what information to include in the tool to enable a comprehensive and implementable strategy.

Different scales of government—local, state, and federal—have to be able to lead processes for bringing appropriate knowledge and standards into planning, design, and engineering. Conversely, all scales of government need to validate the standards revealed by these processes, because they all play a role in implementation.

Building resilience capacity

This complex story has a particularly important yet unfamiliar lesson: Planning departments are key partners in helping break the cycle of recovery and response, and in helping people adopt lessons learned from science into practice. Planners at different levels of government convene different communities of practice and disciplinary expertise around shared challenges. Coincidentally, scientific organizations that cross the boundaries between these different communities—such as the New York City climate panel and the team that developed the sea-level rise tool—can also encourage those interactions. As I’ve tried to illustrate, planning departments convene scientists and decisionmakers alike to work across organizational boundaries that under normal circumstances help to define their identities. These are important ingredients for preparing for future natural disasters and increasing our resilience to them over the long term, and yet this type of science capacity is barely supported by the federal government. How might the lessons from the Sandy Sea Level Rise Recovery Tool and Hurricane Sandy be more broadly adopted to help the nation move away from disaster-by-disaster policy and planning? Here are two ideas to consider in the context of coastal resilience.

First, re-envision the development of resilient flood standards as planning processes, not just numbers or codes.

Planning is a comprehensive and iterative function in government and community development. Planners are connected to or leading the development of everything from capital public works projects to regional plans for ecosystem restoration. City waterfronts, wildlife refuges and restored areas, and transportation networks all draw the attention of planning departments.

In their efforts, planners seek to keep development goals rooted in public values, and they are trained, formally and informally, in the process of civic engagement, in which citizens have a voice in shaping the development of their community. Development choices include how much risk to accept and whether or how the federal government regulates those choices. For this reason, planners maintain practical connections to existing regulations and laws and to the management of existing resources. Their position in the process of community development and resource management requires planners to also be trained in applying the results of research (such as sea-level rise scenarios) to design and engineering. Over the past decade, many city, state, and local governments have either explicitly created sustainability planner positions at high levels (such as mayors’ or governors’ offices) or reframed their planning departments to emphasize sustainability, as in the case of New York City. The planners in these positions are incredibly important for building resilience into urban environments, not because they see the future, but because they provide a nucleus for convening the diverse constituencies from which visions of, and pathways to, the future are imagined and implemented.

If society is to be more resilient, planners must be critical actors in government. We cannot expect policymakers and the public to simply trust or comprehend or even find useful what we learn from science. We have to reconcile what we learn from science with the practical realities we face in an increasingly populated and stressed environment. And yet, despite their critical role in achieving resilience, many local planning departments across the country have been eliminated during the economic downturn.

Second, configure part of our research and service networks to be flexible in response to emergent risk.

The federal government likes to build new programs, sometimes at the expense of working through existing ones, because new initiatives can be political instruments for demonstrating responsiveness to public needs. But recovery from disasters and preparation to better respond to future disasters can be supported through existing networks. Across the span of lands under federal authority, FEMA has regional offices that work with emergency managers, and NOAA supports over 50 Sea Grant colleges that engage communities in science-based discussions on issues related to coastal management. Digital Coast, a partnership between NOAA and six national, regional, and state planning and management organizations, provides timely information on coastal hazards and communities. These organizations work together to develop knowledge and solutions for planners and managers in coastal zones, in part by funding university-based science-and-assessment teams. The interdisciplinary expertise and localized focus of such teams help scientists situate climate and weather information in the context of ongoing risks such as sea-level rise and coastal flooding. All of these efforts contributed directly and indirectly to the Sea Level Rise Tool before, during, and after Hurricane Sandy.

The foundational efforts of these programs exemplify how science networks can leverage their relationships and expertise to get timely and credible scientific information into the hands of people who can benefit from it. Rather than creating new networks or programs, the nation could support efforts explicitly designed to connect and leverage existing networks for risk response and preparation. The story I’ve told here illustrates how existing relationships within and between vibrant communities of practice are an important part of the process of productively bringing science and decisionmaking together. New programs are much less effective in capitalizing on those relationships.

One way to support capacities that already exist would be to anticipate the need to distribute relief funds to existing networks. This idea could be loosely based on the Rapid Response Research Grants administered by the National Science Foundation, with a couple of important variations from its usual focus on supporting basic research. Agencies could come together to identify a range of planning processes supported by experts who work across communities of practice to ensure a direct connection to preparedness for future natural disasters of the same kind. These priority-setting exercises might build on the interagency discussions that occur as part of the federal Global Change Research Program. Also, since any such effort would require engagement between decisionmakers and scientists, recipients of this funding would be asked to report on the nature of additional, future engagement. What further engagement is required? Who are the critical actors, and are they adequately supported to play a role in resilience efforts? How are those networks increasing resilience over time? Gathering information about questions such as these is critical for the federal government to make science policy decisions that support a sustainable society.

Working toward a collective vision

The shift from reaction and response to preparedness seems like common sense, but as this story illustrates, it is complicated to achieve. One reaction to this story might be to replicate the technology in the sea-level rise tool or to apply the same or similar information sets elsewhere. The federal government has already begun such efforts, and this approach will supply people with better information.

Yet across the country, there are probably hundreds of similar decision tools developed by universities, nongovernmental organizations, and businesses that depict coastal flooding resulting from sea-level rise. The key difference in the development of the Sandy recovery tool was the intensive and protracted social process of discussing what information went into it and how it could be used. By connecting those discussions to existing planning processes, we reached different scales of government with different responsibilities and authority for reaching the overarching goal of developing more sustainable urban and coastal communities.

This story suggests that the role of science in helping society to better manage persistent environmental problems such as sea-level rise is not going to emerge from research programs isolated from the complex social and institutional settings of decisionmaking. Science policies aimed at achieving a more sustainable future must increasingly emphasize the complex and time-consuming social aspects of bringing scientific advance and decisionmaking into closer alignment.

Adam Parris is program manager of Regional Integrated Sciences and Assessments at the National Oceanic and Atmospheric Administration.

Breaking the Climate Deadlock

DAVID GARMAN

KERRY EMANUEL

BRUCE PHILLIPS

Developing a broad and effective portfolio of technology options could provide the common ground on which conservatives and liberals agree.

The public debate over climate policy has become increasingly polarized, with both sides embracing fairly inflexible public positions. At first glance, there appears little hope of common ground, much less bipartisan accord. But policy toward climate change need not be polarizing. Here we offer a policy framework that could appeal to U.S. conservatives and progressives alike. Of particular importance to conservatives, we believe, is the idea embodied in our framework of preserving and expanding, rather than narrowing, societal and economic options in light of an uncertain future.

This article reviews the state of climate science and carbon-free technologies and outlines a practical response to climate deadlock. Although it may be difficult to envision the climate issue becoming depoliticized to the point where political leaders can find common ground, even the harshest positions at the polar extremes of the current debate need not preclude the possibility.

We believe that a close look at what is known about climate science and the economic competitiveness of low-carbon/carbon-free technologies—which include renewable energy, advanced energy efficiency technologies, nuclear energy, and carbon capture and sequestration systems (CCS) for fossil fuels—may provide a framework that could even be embraced by climate skeptics willing to invest in technology innovation as a hedge against bad climate outcomes and on behalf of future economic vitality.

Most atmospheric scientists agree that humans are contributing to climate change. Yet it is important to also recognize that there is significant uncertainty regarding the pace, severity, and consequences of the climate change attributable to human activities; plausible impacts range from the relatively benign to globally catastrophic. There is also tremendous uncertainty regarding short-term and regional impacts, because the available climate models lack the accuracy and resolution to account for the complexities of the climate system.

Although this uncertainty complicates policymaking, many other important policy decisions are made in conditions of uncertainty, such as those involving national defense, preparation for natural disasters, or threats to public health. We may lack a perfect understanding of the plans and capabilities of a future adversary or the severity and location of the next flood or the causes of a new disease epidemic, but we nevertheless invest public resources to develop constructive, prudent policies and manage the risks surrounding each.

Reducing atmospheric concentrations of greenhouse gases (GHGs) would require widespread deployment of carbon-free energy technologies and changes in land-use practices. Under extreme circumstances, addressing climate risks could also require the deployment of climate remediation technologies such as atmospheric carbon removal and solar radiation management. Unfortunately, leading carbon-free electric technologies are currently about 30 to 290% more expensive on an unsubsidized basis than conventional fossil fuel alternatives, and technologies that could remove carbon from the atmosphere or mitigate climate impacts are mostly unproven, and some may have dangerous consequences. At the same time, the pace of technological change in the energy sector is slow; any significant decarbonization will unfold over the course of decades. These are fundamental hurdles.

It is also reasonably clear, particularly after taking into account the political concerns about economic costs, that widespread deployment of carbon-free technologies will not take place until diverse technologies are fully demonstrated at commercial scale and the cost premium has been reduced to a point where the public views the short-term political and economic costs as being reasonably in balance with plausible longer-term benefits.

Given these twin assessments, we propose a practical approach to move beyond climate deadlock. The large cost premium and unproven status of many technologies point to a need to focus on innovation, cost reduction, and successfully demonstrating multiple strategically important technologies at full commercial scale. At the same time, the uncertainty of long-term climate projections, together with the 1000+ year lifetime of CO2 in the atmosphere, argues for a measured and flexible response, but one that can be ramped up quickly.

This can be done by broadening and intensifying efforts to develop, fully demonstrate, and reduce the cost of a variety of carbon-free energy and climate remediation technologies, including carbon capture and sequestration and advanced nuclear, renewable, and energy efficiency technologies. In addition, atmospheric carbon removal and solar radiation management technologies should be carefully researched.

Conservatives have typically been strong supporters of fundamental government research, as well as technology development and demonstration in areas that the private sector does not support, such as national security and health. Also, even the most avowed climate skeptic will often concede that there are risks of inaction, and that it is prudent for national and global leaders to hedge against those risks, just as a prudent corporate board of directors will hedge against future risks to corporate profitability and solvency. Moreover, increasing concern about climate change abroad suggests potentially large foreign markets for innovative energy technologies, thus adding an economic competitiveness rationale for investment that does not depend on one’s assessment of climate risk.

Some renewed attention is being devoted to innovation, but funding is limited and the scope of technologies is overly constrained. Our suggested policy approach, in contrast, would involve a three- to fivefold increase in R&D and demonstration spending in both the public and private sectors, including possible new approaches that involve more than simply providing the funding through traditional channels such as the Department of Energy (DOE) and the national labs.

Investing in the development of technology options is a measured, flexible approach that could also shorten the time needed to decarbonize the economy. It would give future policymakers more opportunities to deploy proven, lower-cost technologies, without the commitment to deploy them if they turn out to be unnecessary, ineffective, or uneconomic. And with greater emphasis on innovation, it would allow technologies to be deployed more quickly, broadly, and cost-effectively, which would be particularly important if impacts are expected to be rapid and severe.

In addition to research, development, and demonstration (RD&D), new policy options to support technology deployment should be explored. Current deployment programs principally using the tax code have not, at least to date, successfully commercialized technologies in a widespread and cost-effective manner or provided strong incentives for continued innovation. New approaches are necessary.

Climate knowledge

Although new research constantly adds to the state of scientific knowledge, the basic science of climate change and the role of human-generated emissions have been reasonably well understood for at least several decades. Today, most climate scientists agree that human-caused warming is underway. Some of the major areas of agreement include the following:

  • GHGs, which include water vapor, carbon dioxide (CO2), and other gases, trap heat in the atmosphere and warm the earth by allowing solar radiation to pass largely unimpeded to the surface of the earth and re-radiating a portion of the thermal radiation received from the earth back toward the surface. This is the “greenhouse effect.”
  • Paleoclimatology, which is the study of past climate conditions based on the geologic record, shows that changing levels of GHGs in the atmosphere have been associated with climatic change as far back as the geological record extends.
  • The concentration of CO2 in the atmosphere has increased from about 280 parts per million (ppm) in preindustrial times to about 400 ppm today, an increase of 43%. Ice core records suggest that the current level is higher than at any time over at least the past 650,000 years, whereas analysis of marine sediments suggests that CO2 levels have not been this high in at least 2.1 million years.
  • Human-made (anthropogenic) CO2 emissions, primarily resulting from the consumption of fossil fuels, are probably responsible for much of the warming observed in recent decades. Climate scientists attempting to replicate climate patterns over the past 30 years have not been able to do so without accounting for anthropogenic GHGs and sulfate aerosols.
  • CO2 emissions are also contributing to increases in surface ocean acidity, which degrades ocean habitats, including important commercial fisheries.
  • Given the current rate of global emissions, atmospheric concentrations of CO2 could reach twice the preindustrial level within the next 50 years, concentration levels our planet has not experienced in literally millions of years (the arithmetic is sketched after this list).
  • The global climate system has tremendous inertia. Due to the persistence of CO2 in the atmosphere and the oceans, many of the effects of climate change will not diminish naturally for hundreds of years if not longer.
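
As a rough check on the figures above, the minimal sketch below reproduces the percentage increase and the doubling threshold. The ppm values are those stated in the list; the annual growth rate is an illustrative assumption, not a number taken from the article.

```python
# Back-of-the-envelope check of the concentration figures cited above.
preindustrial_ppm = 280
current_ppm = 400

increase = (current_ppm - preindustrial_ppm) / preindustrial_ppm
print(f"Increase over preindustrial: {increase:.0%}")            # ~43%

doubling_ppm = 2 * preindustrial_ppm                             # 560 ppm
assumed_growth_ppm_per_year = 3.0                                # assumed; higher if emissions keep growing
years_to_doubling = (doubling_ppm - current_ppm) / assumed_growth_ppm_per_year
print(f"Years to doubling at the assumed rate: ~{years_to_doubling:.0f}")  # ~53; sooner with accelerating emissions
```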

About these basic points there is little debate, even from those who believe that the risks are not likely to be severe. At the same time, long-term climate projections are subject to considerable uncertainty and legitimate scientific debate. The fundamental complexity of the climate system, in particular the feedback effects of clouds and water vapor, is the most important contributor to that uncertainty. Consequently, long-term projections leave considerable uncertainty about how rapidly, and to what extent, temperatures will increase over time. It is possible that the climate will be relatively slow to warm and that the effects of warming may be relatively mild for some time. But there is also a worrisome likelihood that the climate will warm too quickly for society to adapt and prosper—with severe or perhaps even catastrophic consequences.

Unfortunately, we should not expect the range of climate projections to narrow in a meaningful way soon; policymakers may hope for the best but must prepare for the worst.

Technology readiness

Under the best of circumstances, the risks associated with climate uncertainties could be managed, at least in part, with a mix of today’s carbon-free energy and climate remediation technologies. Carbon-free energy generation, as used in this paper, includes renewable, nuclear, and carbon capture and sequestration systems for fossil fuels such as coal and natural gas. Climate remediation technologies (often grouped together under the term “geoengineering”) include methods for removing greenhouse gases from the atmosphere (such as air capture), as well as processes that might mitigate some of the worst effects of climate change (such as solar radiation management). We note that energy efficiency or the pursuit of greater energy productivity is prudent even in the absence of climate risk, so it is particularly important in the face of it. Although this discussion focuses on electric generation, any effective decarbonization policy will also need to address emissions from the transportation sector; the residential, commercial, and industrial sectors; and land use. Similar frameworks, focused on expanding sensible options and hedging against a worst-case future, could be developed for each.

To be effective, carbon-free and climate remediation technologies and processes need to be economically viable, fully demonstrated at scale, and capable of global deployment in a reasonably timely manner. They would also need to be sufficiently diverse and economical to be deployed in varied regional economies across the world, ranging from the relatively low-growth developed world to the rapidly growing developing nations, particularly those with expanding urban centers such as China and India.

The list of strategically essential climate technologies is not long, yet each of these technologies, in its current state of development, is limited in important ways. Although their status and prospects vary in different regions of the world, they are either not yet fully demonstrated, not capable of rapid widespread global deployment, or unacceptably expensive relative to conventional energy technologies. These limitations are well documented, if not widely recognized or acknowledged. The limitations of current technologies can be illustrated by quickly reviewing the status of a number of major electricity-generating technologies.

Onshore wind and some other renewable technologies such as solar photovoltaic (PV) have experienced dramatic cost reductions over the past three decades. These cost reductions, along with deployment subsidies, have clearly had an impact. Between 2009 and 2013, U.S. wind output more than doubled, and U.S. solar output increased by a factor of 10. However, because ground-level winds are typically intermittent, wind turbines cannot be relied on to generate electricity whenever there is electrical demand, and the amount of generating output cannot be directly controlled in response to moment-by-moment changes in electric demand and the availability of other generating resources. As a consequence, wind turbines do not produce electrical output of comparable economic value to the output of conventional generating resources such as natural gas–fired power plants that are, in energy industry parlance, both “firm” and “dispatchable.” Furthermore, the cost of a typical onshore wind project in the United States, without federal and state subsidies, although now less than that of new pulverized coal plants, is still substantially more than that of a new gas-fired combined-cycle plant, which is generally considered the lowest-cost conventional resource in most U.S. power markets. Solar PV also suffers from its intermittency and variability, and significant penetration of solar PV can test grid reliability and complicate distribution system operation, as we are now seeing in Germany. Some of these challenges can be overcome with careful planning and coordinated execution, but the scale-up potential and economics of these resources could be improved substantially by innovations in energy storage, as well as technological improvements to increase renewables’ power yield and capacity factor.

Current light-water nuclear power technology is also more expensive than conventional natural gas generation in the United States, and suffers from safety concerns, waste disposal challenges, and proliferation risks in some overseas markets. Further, given the capital intensity and large scale of today’s commercial nuclear plants (which are commonly planned as two 1,000–megawatt (MW) generating units), the total cost of a new nuclear plant exceeds the market capitalization of many U.S. electric utilities, making sole-ownership investments a “bet-the-company” financial decision for corporate management and shareholders. Yet recent improvements in costs have been demonstrated in overseas markets through standardized manufacturing processes and economies of scale; and many new innovative designs promise further cost reductions, improved safety, a smaller waste footprint, and less proliferation risk.

CCS technology is also limited. Although all major elements of the technology have been demonstrated successfully, and the process is used commercially in some industrial settings and for enhanced oil recovery (EOR), it is only now on track to being fully demonstrated at two commercial-scale electric generation facilities under construction, one in the United States and one in Canada. And deploying CCS on existing electric power plants would reduce generation efficiency and increase production costs to the point where such CCS retrofits would be uneconomic today without large government incentives or a carbon price higher than envisioned in recent policy proposals.

The cost premium of these carbon-free technologies relative to that of conventional natural gas–fired combined cycle technology in the United States is illustrated in the next chart.

As shown, the total levelized cost of new natural gas combined-cycle generation over its expected operating life is roughly $67/MWh (MWh, megawatt-hour). In contrast, typical onshore wind projects (without federal and state subsidies and without considering the cost of backup power and other grid integration requirements) cost about $87/MWh. New gas-fired combined-cycle plants with CCS cost approximately $93/MWh and nuclear projects about $108/MWh. New coal plants with CCS, solar PV, and offshore wind projects are yet more costly. Taken together, these estimates generally point to a cost premium of $20 to $194/MWh, or 29 to 290%, for low carbon generation.
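
The premiums quoted above follow directly from the stated levelized costs. The sketch below uses only the $/MWh figures given in this paragraph; the costliest technologies (coal with CCS, solar PV, offshore wind), which set the upper end of the range, are not itemized in the text and so are not listed here.

```python
# Cost premiums of carbon-free generation relative to the natural gas
# combined-cycle benchmark, using the levelized cost figures ($/MWh) quoted above.
lcoe_usd_per_mwh = {
    "natural gas combined cycle (benchmark)": 67,
    "onshore wind, unsubsidized": 87,
    "gas combined cycle with CCS": 93,
    "nuclear": 108,
}
benchmark = lcoe_usd_per_mwh["natural gas combined cycle (benchmark)"]
for tech, cost in lcoe_usd_per_mwh.items():
    premium = cost - benchmark
    print(f"{tech}: ${cost}/MWh, premium ${premium}/MWh ({premium / benchmark:.1%})")
# The article's $194/MWh upper bound corresponds to a roughly 290% premium
# (194 / 67 is about 2.9) for the most expensive options.
```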

Some may argue that this cost premium is overstated because it does not reflect the cost of the carbon externality. This would be accurate from a conceptual economic perspective, but from a commercial or customer perspective, it is understated because it doesn’t account for the substantial costs of providing backup or stored power to overcome intermittency problems. The practical effect of this cost difference remains: However the cost premium might be reduced over time (whether through carbon pricing, other forms of regulation, higher fossil fuel prices, or technological innovation), the gap today is large enough to constitute a fundamental impediment to developing effective deployment policies.

This is evidenced in the United States by the wind industry’s continued dependence on federal tax incentives, the difficulty of securing federal or state funding for proposed utility-scale CCS projects, the slow pace of developing new nuclear plants, and the recent controversies in several states over proposals to develop new offshore wind and coal gasification projects. The inability to pass federal climate legislation can also be seen as an indication of widespread concern about the cost of emissions reductions using existing technologies, the effectiveness of the legislation in the global long-term context, or both.

FIGURE 1. Levelized costs of new electric generating technologies, showing the premium for carbon-free options relative to natural gas combined cycle. Source: EIA levelized cost of electricity (LCOE) estimates, Annual Energy Outlook (AEO) 2013.

Cost considerations are even more fundamental in the developing world, where countries’ overriding economic goal is to raise their population’s standard of living. This usually requires inexpensive sources of electricity, and technologies that are only available at a large cost premium are unlikely to be rapidly or widely adopted.

Although there is little doubt that there are opportunities to reduce the cost and improve the performance of today’s technologies, technological transformation in the energy sector has historically been slow, unpredictable, and incremental, because the sector relies on long-lived, capital-intensive production and infrastructure assets tied together through complex global industries—characteristics contributing to tremendous inertia. Engineering breakthroughs are rare, and new technologies typically take many decades to reach maturity at scale, sometimes requiring the development of new business models. As described by Arnulf Grübler and Nebojsa Nakicenovic, scholars at the International Institute for Applied Systems Analysis (IIASA), the world has only made two “grand” energy transitions: one from biomass to coal between 1850 and 1920, and a second from coal to oil and gas between 1920 and today. The first transition lasted roughly 70 years; the second has now lasted approximately 90 years.

A similar theme is seen in the electric generating industry. In the 130 years or so since central generating stations and the electric lightbulb were first established, only a handful of basic electric generating technologies have become commercially widespread. By far the most common of these is the thermal power station, which uses energy from either the combustion of fossil fuels (coal, oil, and gas) or a nuclear reactor to operate a steam turbine, which in turn powers an electric generator.

The conditions that made energy system transitions slow in the past still exist today. Even without political gridlock, it could well take many decades to decarbonize the global energy sector, a period of time that would produce much higher atmospheric concentrations of CO2 and ever-greater risks to society. This points to the importance of beginning the long transition to decarbonize the economy as soon as possible.

Policy implications

Given the uncertainties in climate projection, innovation, and technology deployment, developing a broad range of technology options can be a hedge against climate risk.

Technology “options” (as the term is used here) include carbon-free technologies that are relatively costly or not fully demonstrated but that, with innovation through fundamental and applied RD&D, might become sufficiently reliable, affordable, and scalable to be widely deployed if and when policymakers determine they are needed. (They are not to be confused with other technologies, such as controls for non-CO2 GHGs like methane and niche EOR applications of fossil CCS, which have already been commercialized.)

A technology option is analogous to a financial option. The investment to create the technology is akin to the cost of buying the financial option; it gives the owner the right but not the obligation to engage in a later transaction.
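
The analogy can be made concrete with a toy expected-cost calculation. Every number below is a hypothetical assumption chosen for illustration, not an estimate from this article; the point is only that a modest upfront “premium” can lower expected costs when deployment is needed only under bad outcomes.

```python
# Toy illustration of a technology option as a hedge (all values hypothetical,
# in arbitrary cost units). Paying an upfront RD&D "premium" buys the right,
# but not the obligation, to deploy a matured, cheaper technology later.
p_severe = 0.5                 # assumed probability that rapid deployment is needed
deploy_cost_immature = 400.0   # assumed cost of crash deployment with today's technology
deploy_cost_matured = 250.0    # assumed cost after successful RD&D and demonstration
option_premium = 20.0          # assumed upfront RD&D investment

expected_without_option = p_severe * deploy_cost_immature
expected_with_option = option_premium + p_severe * deploy_cost_matured

print(expected_without_option)  # 200.0
print(expected_with_option)     # 145.0 -> cheaper in expectation; left unexercised if not needed
```

If severe outcomes never materialize, the only cost incurred is the premium itself, which is the sense in which this approach preserves rather than forecloses choices.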

Examples of carbon-free generation options include small modular nuclear reactors (SMRs) or advanced Generation IV nuclear reactor technologies such as sodium or gas-cooled fast reactors; advanced CCS technologies for both coal and natural gas plants; underground coal gasification with CCS (UCG/CCS); and advanced renewable technologies. Developing options on such technologies (assuming innovation success) would reduce the cost premium of decarbonization, the time required to decarbonize the global economy, and the risks and costs of quickly scaling up technologies that are not yet fully proven.

In contrast to carbon-free generation, climate remediation options could directly remove carbon from the atmosphere or mitigate some of the worst effects of climate change. Examples include atmospheric carbon removal technologies (such as air capture and sequestration, regional or continental afforestation, and ocean iron fertilization) and solar radiation management technologies (such as stratospheric aerosol injection and cloud-whitening systems). Because these technologies have the potential to reduce atmospheric concentrations or global average temperatures, they could (if proven) reduce, reverse, or prevent some of the worst impacts of climate change if atmospheric concentrations rise to unacceptably high levels. The challenge with this category of technologies will be to reduce the cost and increase the scale of application while avoiding unintended environmental and ecosystem harms that would offset the benefits they create.

Again, investing now in the development of such technology options would not create an obligation to deploy them, but it would yield reliable performance and cost data for future policymakers to consider in determining how to most effectively and efficiently address the climate issue. That is the essence of an iterative risk management process. Such a portfolio approach would also position the country to benefit economically from the growing overseas markets for carbon-free generation and other low-carbon technologies. It also addresses the political and economic polarization around various energy options, with some ideologies and interests focused on renewables, others on nuclear energy, and still others on CCS. A portfolio approach not only hedges against future climate uncertainties but also offers expanded opportunities for political inclusiveness and economic benefit. Over a period of time, investments in new and expanded RD&D programs would lead to new intellectual property that could help grow investments, design, manufacturing, employment, sales, and exports to serve overseas and perhaps domestic markets.

This portfolio approach would be a significant departure from current innovation and deployment policies. Although new attention is being devoted to energy innovation, including DOE’s Advanced Research Projects Agency–Energy (ARPA-E), the scope of technologies is far too constrained. For instance, despite its importance, a fully funded program to demonstrate multiple commercial-scale post-combustion CCS systems for both coal and natural gas generating technologies has yet to be established. Similarly, efforts to develop advanced nuclear reactor designs are limited, and there is almost no government support for climate remediation technologies. Renewable energy can make a large contribution, but numerous studies have demonstrated that it will probably be much more difficult and costly to decarbonize our electricity system within the next half century without CCS and nuclear power.

Our approach, in contrast, would involve a broader mix of technologies and innovation programs including the fossil, advanced nuclear, advanced renewable, and climate remediation technologies to maximize our chances of creating proven, scalable, and economic technologies for deployment.

The specific deployment policies needed would depend in part on the choice of technologies and the status of their development, but they would probably encompass an expanded suite of programs across the RD&D-to-commercialization continuum, including fundamental and applied R&D programs, incentives, and other means to support pilot and demonstration programs, government procurement programs, and joint international technology development and transfer efforts.

The innovation processes used by the federal government also warrant assessment and possible reform. A number of important recent studies and reports have critiqued past and current policies and put forward recommendations to accelerate innovation. Of particular note are recommendations to provide greater support for demonstration projects, expand ARPA-E, create new institutions (such as a Clean Energy Deployment Administration, a Green Bank, an Energy Technology Corporation, Public-Private Partnerships, or Regional Innovation Investment Boards), and promote competition between government agencies such as DOE and the Department of Defense. All of these deserve further attention.

Of course there will never be enough money to do everything. That’s why a strategic approach is essential. The portfolio should focus on strategically important technologies with the potential to make a material difference, based on analytical criteria such as:

  • The likelihood of becoming “proven.” Many if not most of the technologies that are likely to be considered options have not yet been proven to be reliable technologies at reasonable cost. Consequently, assessing this prospect, along with a time frame for full development and deployment, would obviously be an important decision criterion. This would not preclude “long-shot” technologies; rather it would ensure that their prospects for success be weighed with other criteria.
  • Ability to reach multi-terawatt scale. Some projections of energy demand suggest that complete decarbonization of the energy system could require 30 terawatts of carbon-free power by mid-century, given current growth patterns.
  • Relevance to Asia and the developing world. Because most of the growth in the developing world will be concentrated in large dense cities, distributed energy sources or those requiring large amounts of land area may have less relevance.
  • Ability to generate firm and dispatchable power. Electrical demands vary widely over time, often fluctuating by a factor of 2 over the course of a single day. Because electricity needs to be generated in a reliable fashion in response to demand, intermittent resources could have less relevance under conditions of deep decarbonization, unless their electrical output can be converted into a firm resource through grid-scale energy storage systems.
  • Potential to reduce costs within a reasonable range of conventional technologies. The less expensive a zero-carbon energy source is and the closer it can be managed down to cost parity with conventional resources such as gas and coal, the more likely it is that it will be rapidly adopted at scale.
  • Private-sector investment. If the private sector is adequately investing in the development or demonstration of a given technology, there would be no need for duplicative government support.
  • Potential to advance U.S. competitiveness. Investments should be sensitive to areas of energy innovation where the United States is well positioned to be a global leader.

To illustrate this further, programs might include the following.

  1. A program to demonstrate multiple CCS technologies, including post-combustion coal, pre-combustion coal, and natural gas combined-cycle technologies at full commercial scale.
  2. A program to develop advanced nuclear reactor designs, including a federal RD&D program capable of addressing each of the fundamental concerns about nuclear power. Particular attention should be given to the potential for small modular reactors (SMRs) and advanced, non–light-water reactors. A key complement to such a program would be the review and, if necessary, reform of Nuclear Regulatory Commission expertise and capabilities to review and license advanced reactor designs.
  3. Augmentation of the Department of Defense’s capabilities to sponsor development, demonstration, and scale-up of advanced energy technology projects that contribute to the military’s national security mission, such as energy security for permanent bases and energy independence for forward bases in war zones.
  4. Continued expansion of international technology innovation programs and transfer of insights from overseas manufacturing processes that have resulted in large capital cost reductions for the United States. In recent years, a number of government-to-government and business–to–nongovernmental organization partnerships have been established to facilitate such technology innovation and transfer efforts.
  5. Consideration of the use of a competitive procurement model, in which government provides funding opportunities for private-sector partners to demonstrate and deploy selective technologies that lack a current market rationale to be commercialized.

This is not intended to be an exhaustive list of the efforts that could be considered; in particular, new models of public-private cooperation in technology development deserve consideration.

The technology options approach outlined in this paper, with its emphasis on research, development, demonstration, and innovation, serves a different albeit overlapping purpose from deployment programs such as technology portfolio standards, carbon-pricing policies, and feed-in tariffs. The options approach focuses primarily on developing improved and new technologies, whereas deployment programs focus primarily on commercializing proven technologies.

RD&D and deployment policies are generally recognized as being complementary; both would be needed to fully decarbonize the economy unless carbon mitigation was in some way highly valued in the marketplace. In practice, at least to date, technology deployment programs have not successfully commercialized carbon-free technologies in a widespread, cost-effective manner, or offered incentives to continue to innovate and improve the technology. New approaches including the use of market-based pricing mechanisms such as reverse auctions and other competitive procurement methods are likely to be more flexible, economically efficient, and programmatically effective.

Yet deploying new carbon-free technologies on a widespread basis over an extended period of time will be a policy challenge until the cost premium has been reduced to a level at which the tradeoffs between short-term certain costs and long-term uncertain benefits are acceptable to the public. Until then, new deployment programs will be difficult to establish, and if they are established, they are likely to have little material impact (because efforts to constrain program costs would lead these programs to have very limited scopes) or be quickly terminated (due to high program costs), as we have seen with, for example, the U.S. Synthetic Fuels Corporation. Therefore, substantially reducing the cost premium for carbon-free energy must be a priority for both innovation and deployment programs. It is likely to be the fastest and most practical path to create a realistic opportunity to rapidly decarbonize the economy.

Although we are not proposing a specific or complete set of programs in this paper, it is fair to say that our policy approach would involve a substantial increase in energy RD&D spending—an effort that could cost between $15 billion and $25 billion per year, a three- to fivefold increase over recent energy RD&D spending levels.
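
For context, the implied baseline follows from the figures just stated; the back-calculation below uses only the numbers in this paragraph.

```python
# A three- to fivefold increase that lands at $15B-$25B per year implies
# recent energy RD&D spending of roughly $5B per year.
low_target, high_target = 15e9, 25e9
print(low_target / 3, high_target / 5)  # ~5e9 and ~5e9
```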

This is a significant increase over historic levels but modest compared to current funding for medical research (approximately $30 billion per year) and military research (approximately $80 billion per year), in line with previous R&D initiatives over the years (such as the War on Terror, the NIH buildup in the early 2000s, and the Apollo space program), and similar to other recent energy innovation proposals.

The increase in funding would need to be paid for, whether by redirecting existing subsidies, funding a clean energy trust from federal revenues accruing from expanded oil and gas production, levying a modest “wires charge” on electricity ratepayers, or reallocating funds as part of a larger tax reform effort. We are not suggesting that this would necessarily be easy, only that such investments are necessary and are not out of line with other innovation investment strategies that the nation has adopted, usually with bipartisan support. In this light, we emphasize again the political virtues of a portfolio approach that keeps technological options open and offers additional possible benefits from the potential for enhanced economic competitiveness.

In light of the uncertain but clear risk of severe climate impacts, prudence calls for undertaking some form of risk management. The minimum 50-year time period that will be required to decarbonize the global economy and the effectively irreversible nature of any climate impacts argue for undertaking that effort as soon as reasonably possible. Yet pragmatism requires us to recognize that most of the technologies needed to manage this risk are either substantially more expensive than conventional alternatives or are as yet unproven.

These uncertainties and challenges need not be confounding obstacles to action. Instead, they can be addressed in a sensible way by adopting the broad “portfolio of technology options” approach outlined in this paper; that is, by developing a diverse array of proven technologies (including carbon capture, advanced nuclear, advanced renewable, atmospheric carbon removal, and solar radiation management) and deploying the most successful ones if and when policymakers determine they are needed.

This approach would provide policymakers with greater flexibility to establish policies deploying proven, scalable, and economical technologies. And by placing greater emphasis on reducing the cost of scalable carbon-free technologies, it would allow these technologies to be deployed more quickly, broadly, and cost-effectively than would otherwise be possible. At the same time, it would not be a commitment to deploy them if they turn out to be unnecessary, ineffective, or uneconomical.

We believe that this pragmatic portfolio approach should appeal to thoughtful people across the political spectrum, but most notably to conservatives who have been skeptical of an “all-in” approach to climate that fails to acknowledge the uncertainties of both policymaking and climate change. It is at least worth testing whether such an approach might be able to break our current counterproductive deadlock.

David Garman, a principal and managing partner at Decker Garman Sullivan LLC, served as undersecretary in the Department of Energy in the George W. Bush administration. Kerry Emanuel is the Cecil and Ida Green Professor of atmospheric science at the Massachusetts Institute of Technology and codirector of MIT’s Lorenz Center, a climate think tank devoted to basic curiosity-driven climate research. Bruce Phillips is a director of The NorthBridge Group, an economic and strategic consulting firm.

Books

What’s My (Cell) Line?

Cloning Wildlife: Zoos, Captivity, and the Future of Endangered Animals

by Carrie Friese. New York: New York University Press, 2013, 258 pp.

Stewart Brand

What a strange and useful book this is!

It looks like much ado about not much—just three experiments conducted at zoos on cross-species cloning (in banteng, gaur, and African wildcat). Yet the much-ado is warranted, given the rapid arrival of biotech tools and techniques that may revolutionize conservation with the prospect of precisely targeted genetic rescue for endangered and even extinct species. Carrie Friese’s research was completed before “de-extinction” was declared plausible in 2013, but her analysis applies directly.

First, a note: readers of this review should be aware of two perspectives at work. Friese writes as a sociologist, so expect occasional sentences such as, “Cloned animals are not objects here…. They are ‘figures’ in [Donna] Haraway’s sense of the word, in that they embody ‘material-semiotic nodes or knots in which diverse bodies and meanings coshape one another.’” I write as a proponent of high-tech genetic rescue, being a co-founder of Revive & Restore, a small nonprofit pushing ahead with de-extinction for woolly mammoths and passenger pigeons and with genetic assistance for potentially inbred black-footed ferrets. I’m also the author of a book on ecopragmatism, called Whole Earth Discipline, that Friese quotes approvingly.

Friese is a sharp-eyed researcher. She begins by noting with interest that “in direct contradiction to public enthusiasm surrounding endangered animal cloning, many people in zoos have been rather ambivalent about such technological developments.” Dissecting ambivalence is her joy, I think, because she detects in it revealing indicators of deep debate and the hidden processes by which professions change their mind fundamentally, driven by technological innovation.

The innovation in this case concerns the ability, new in this century, of going beyond same-species cloning (such as with Dolly the sheep) to cross-species cloning. An egg from one species, such as a domestic cow, has its nucleus removed and replaced with the nucleus and nuclear DNA of an endangered species, such as the Javan banteng, a type of wild cow found in Southeast Asia. The egg is grown in vitro to an early-stage embryo and then implanted in the uterus of a cow. When all goes well (it sometimes doesn’t), the pregnancy goes to term, and a new Javan banteng is born. In the case of the banteng, its DNA was drawn from tissue cryopreserved 25 years earlier by San Diego’s Frozen Zoo, in the hope that it could help restore genetic variability to the remaining population of bantengs assumed to be suffering from progressive inbreeding. (At Revive & Restore we are doing something similar with black-footed ferret DNA from the Frozen Zoo.)

Now comes the ambivalence. The cloned “banteng” may have the nuclear DNA of a banteng, but its mitochondrial DNA (a lesser but still critical genetic component found outside of the nucleus and passed on only maternally) comes from the egg of a cow. Does that matter? It sure does to zoos, which see their task as maintaining genetically pure species. Zoos treat cloned males, which can pass along only nuclear DNA to future generations, as valuable “bridges” of pure banteng DNA to the banteng gene pool. But cloned female bantengs, with their baggage of cow mitochondrial DNA ready to be passed to their offspring, are deemed valueless hybrids.

Friese describes this view as “genetic essentialism.” It is a byproduct of the “conservation turn” that zoos took in the 1970s. In this shift, zoos replaced their old cages with immersion displays of a variety of animals looking somewhat as if they were in the wild, and they also took on a newly assumed role as repositories of wildlife gene pools to supplant or enrich, if necessary, populations that are threatened in the wild. (The conservation turn not only saved zoos; it pushed them to new levels of popularity. In the United States, 100 million people a year now visit zoos, wildlife parks, and aquariums.)

But in the 1980s some conservation biologists began moving away from focusing just on species to an expanded concern about whole ecosystems and thus about ecological function. They became somewhat relaxed about species purity. When peregrine falcons died out along the East Coast of the United States, conservationists replaced them with hybrid falcons from elsewhere, and the birds thrived. Inbred Florida panthers were saved with an infusion of DNA from Texas cougars. Coyotes, on their travels from west to east, have been picking up wolf genes, and the wolves have been hybridizing with dogs.

As the costs of DNA sequencing keep coming down, field biologists have been discovering that hybridization is rampant in nature and indeed may be one of the principal mechanisms of evolution, which is said to be speeding up in these turbulent decades. Friese notes that “as an institution, the zoo is particularly concerned with patrolling the boundaries between nature and culture.” Defending against cloned hybridization, they think, is defending nature from culture. But if hybridization is common in nature, then what?

Soon enough, zoos will be confronting the temptation of de-extincted woolly mammoths (and passenger pigeons, great auks, and Carolina parakeets, among others). Those thrilling animals could be huge draws, deeply educational, exemplars of new possibilities for conservation. They will also be, to a varying extent, genomic hybrids—mammoths that are partly Asian elephant, passenger pigeons that are partly band-tailed pigeon, great auks that are partly razorbill, Carolina parakeets that are partly sun parakeet. Should we applaud or turn away in dismay? I think that conservation biologists will look for one primary measure of success: Can the revived animals take up their old ecological role and manage on their own in the wild? If not, they are freaks. If they succeed, welcome back.

Friese has written a valuable chronicle of the interaction of wildlife conservation, zoos, and biotech in the first decade of this century. It is a story whose developments are likely to keep surprising us for at least the rest of this century, and she loves that. Her book ends: “Humans should learn to respond well to the surprises that cloned animals create.”

Stewart Brand (sb@longnow.org) is the president of the Long Now Foundation in Sausalito, California.

Climate perceptions

Reason in a Dark Time: Why the Struggle against Climate Change Failed—and What It Means for Our Future

by Dale Jamieson. Oxford University Press, New York, 260 pp.

Elizabeth L. Malone

Did climate change cause Hurricanes Katrina and Sandy? Does a cold, snowy winter disprove climate change? As Dale Jamieson says in Reason in a Dark Time, “These are bad questions and no answer can be given that is not misleading. It is like asking whether when a baseball player gets a base hit, it is caused by his .350 batting average. One cannot say ‘yes,’ but saying ‘no’ falsely suggests that there is no relationship between his batting average and the base hit.” Analogies such as this are a major strength of this book, which both distills and extends the thoughtful analysis that Jamieson has been providing for well over two decades.

I’ve been following Jamieson’s work since the early 1990s, when a group at Pacific Northwest National Laboratory began to assess the social science literature relevant to climate change. Few scholars outside the physical sciences had addressed climate change explicitly; Jamieson, a philosopher, had. His publications on ethics, moral issues, uncertainty, and public policy laid down important arguments captured in Human Choice and Climate Change, which I co-edited with Steve Rayner in 1998. And the arguments are still current and vitally important as society contemplates the failure of all first-best solutions regarding climate change: an effective global agreement to reduce greenhouse gas emissions, vigorous national policies, adequate transfers of technology and other resources from industrialized to less-industrialized countries, and economic efficiency, among others.

In Reason in a Dark Time, Jamieson works steadfastly through the issues. He lays out the larger picture with energy and clarity. He takes us back to the beginning, with the history of scientific discoveries about the greenhouse effect and its emergence as a policy concern through the 1992 Earth Summit’s spirit of high hopefulness and the gradual unraveling of those high hopes by the time of the 2009 Copenhagen Climate Change Conference. He discusses obstacles to action, from scientific ignorance to organized denial to the limitations of our perceptions and abilities in responding to “the hardest problem.” He details two prominent but inadequate approaches to both characterizing the problem of climate change and prescribing solutions: economics and ethics. And finally, he discusses doable and appropriate responses in this “dark world” that has so far failed to agree on and implement effective actions that adequately reflect the scope of the problem.

Well, you may say, we’ve seen this book before. There are lots of books (and articles, both scholarly and mainstream) that give the history, discuss obstacles, criticize the ways the world has been trying to deal with climate change, and give recommendations. And indeed, Jamieson himself draws on his own lengthy publication record.

But you should read this book for its insights. If you are already knowledgeable about the history of climate science and international negotiations, you might skim this discussion. (It’s a good history, though.) All readers will gain from examining the useful and clear distinctions that Jamieson draws regarding climate skepticism, contrarianism, and denialism. Put simply, he sees that “healthy skepticism” questions evidence and views while not denying them; contrarianism may assert outlandish views but is skeptical of all views, including its own outlandish assertions; and denialism quite simply rejects a widely believed and well-supported claim and tries to explain away the evidence for the claim on the basis of conspiracy, deceit, or some rhetorical appeal to “junk science.” And take a look at the table and related text that depict a useful typology of eight frames of science-related issues that relate to climate change: social progress, economic development and competitiveness, morality and ethics, scientific and technical uncertainty, Pandora’s box/ Frankenstein’s monster/runaway science, public accountability and governance, middle way/alternative path, and conflict and strategy.


Jamieson’s discussions of the “limits of economics” and the “frontiers of ethics” are also useful. Though they tread much-traveled ground, they take a slightly different slant, starting not with the forecast but with the reality of climate change. For instance, the discount rate (how economics values costs in the future) has been the subject of endless critiques, but typically with the goal of coming up with the “right” rate. But Jamieson points out that this is a fruitless endeavor, as social values underlie arguments for almost any discount rate. Thus, the discount rate (and other economic tools) is simply inadequate and, moreover, a mere stand-in for the real discussion about how society should plan for the future.

Similarly, his discussion of ethics points out that “commonsense morality” cannot “provide ethical guidance with some important aspects of climate-changing behavior”—so it’s not surprising that society has failed to act on climate change. The basis for action is not a matter of choosing appropriate values from some eternal ethical and moral menu, but of evolving values that will be relevant to a climate-changed world in which we make choices about how to adapt to climate change and whether to prevent further climate change—oh, and about whether or not to dabble in planet-altering geoengineering. Ethical and moral revolutions have occurred (e.g., capitalism’s elevation of selfishness), and climate ethicists are breaking new ground in connecting and moralizing about emissions-producing activities and climate change.

Although Jamieson’s explorations do not provide an antidote to the gloom of our dark time, readers will find much to think about here.

He clearly rebuts the argument, for example, that individual actions do not matter, asserting that “What we do matters because of its effects on the world, but what we do also matters because of its effects on ourselves.” Expanding on this thought, he says: “In my view we find meaning in our lives in the context of our relationships to humans, other animals, the rest of nature, and the world generally. This involves balancing such goods as self-expression, responsibility to others, joyfulness, commitment, attunement to reality and openness to new (often revelatory) experiences. What this comes to in the conduct of daily life is the priority of process over product, the journey over the destination, and the doing over what is done.” To my mind, this sounds like the good life that includes respect for nature, temperance, mindfulness, and cooperativeness.

Ultimately, Jamieson turns to politics and policy. As the terms prevention, mitigation, adaptation, and geoengineering have become fuzzy at best, he proposes a new classification of responses to climate change: adaptation (to reduce the negative effects of climate change), abatement (to reduce greenhouse gas emissions), mitigation (to reduce concentrations of greenhouse gases in the atmosphere), and solar radiation management (to alter the Earth’s energy balance). I agree with Jamieson that we need all of the first three and also that we need to be very cautious about “the category formerly known as geoengineering.”

Most of all, we need to live in the world as is, with all its diversity of motives and potential actions, not the dream world imagined at the Earth Summit held in 1992 in Rio de Janeiro. Jamieson gives us seven practical priorities for action (yes, they’ve been said before, but not often in the real-world context that he sketches). And he offers three guiding principles (my favorite is “stop arguing about what is optimal and instead focus on doing what is good,” with “good” encompassing both practical and ethical elements).

I do have some quarrels with the book, starting with the title. In its fullest form, it is unnecessarily wordy and gloomy. And as Jamieson does not talk much of “reason” in the book (nor is there even a definition of the contested term that I could find), why is it displayed so prominently?

More substantively, the gloom that Jamieson portrays is sometimes reinforced by statements that seem almost apocalyptic, such as, “While once particular human societies had the power to upset the natural processes that made their lives and cultures possible, now people have the power to alter the fundamental global conditions that permitted human life to evolve and that continue to sustain it. There is little reason to suppose that our systems of governance are up to the tasks of managing such threats.” But people have historically faced threats (war, disease, overpopulation, the Little Ice Age, among others) that likely seemed to them just as serious, so statements such as Jamieson’s invite the backlash that asserts, well, here we still are and better off, too.

Then there is the question of the intended audience, which Jamieson specifies as “my fellow citizens and…those with whom I have discussed these topics over the years.” But the literature reviews and the heavy use of citations seem to target a narrower academic audience. I would hope that people involved in policymaking and other decisionmaking would not be put off by the academic trappings, but I have my doubts.

If the book finds a wide audience, our global conversation about climate change could become more fruitful. Those who do read it will be rewarded with much to think about in the insights, analogies, and accessible discussions of productive pathways into the climate-changed future.

Elizabeth L. Malone is a staff scientist at the Joint Global Change Research Institute, a project sponsored by Pacific Northwest National Laboratory and the University of Maryland.

Final Frontier vs. Fruitful Frontier: The Case for Increasing Ocean Exploration

AMITAI ETZIONI

Possible solutions to the world’s energy, food, environmental, and other problems are far more likely to be found in nearby oceans than in distant space.

Every year, the federal budget process begins with a White House-issued budget request, which lays out spending priorities for federal programs. From this moment forward, President Obama and his successors should use this opportunity to correct a longstanding misalignment of federal research priorities: excessive spending on space exploration and neglect of ocean studies. The nation should begin transforming the National Oceanic and Atmospheric Administration (NOAA) into a greatly reconstructed, independent, and effective federal agency. In the present fiscal climate of zero-sum budgeting, the additional funding necessary for this agency should be taken from the National Aeronautics and Space Administration (NASA).

The basic reason is that deep space—NASA’s favorite turf—is a distant, hostile, and barren place, the study of which yields few major discoveries and an abundance of overhyped claims. By contrast, the oceans are nearby, and their study is a potential source of discoveries that could prove helpful for addressing a wide range of national concerns, from climate change to disease; for reducing energy, mineral, and potable water shortages; for strengthening industry, security, and defenses against natural disasters such as hurricanes and tsunamis; for increasing our knowledge about geological history; and much more. Nevertheless, the funding allocated for NASA in the Consolidated and Further Continuing Appropriations Act for FY 2013 was 3.5 times higher than that allocated for NOAA. Whatever can be said on behalf of a trip to Mars or recent aspirations to revisit the Moon, the same holds many times over for exploring the oceans; some illustrative examples follow. (I stand by my record: In The Moondoggle, published in 1964, I predicted that there was less to be gained in deep space than in near space—the sphere in which communication, navigation, weather, and reconnaissance satellites orbit—and argued for unmanned exploration vehicles and for investment on our planet instead of the Moon.)

Climate

There is wide consensus in the international scientific community that the Earth is warming; that the net effects of this warming are highly negative; and that the main cause of this warming is human actions, among which carbon dioxide emissions play a key role. Hence, curbing these CO2 emissions or mitigating their effects is a major way to avert climate change.

Space exploration advocates are quick to claim that space might solve such problems on Earth. In some ways, they are correct; NASA does make helpful contributions to climate science by way of its monitoring programs, which measure the atmospheric concentrations and emissions of greenhouse gases and a variety of other key variables on the Earth and in the atmosphere. However, there seem to be no viable solutions to climate change that involve space.

By contrast, it is already clear that the oceans offer a plethora of viable solutions to the Earth’s most pressing troubles. For example, scientists have already demonstrated that the oceans serve as a “carbon sink.” The oceans have absorbed almost one-third of anthropogenic CO2 emitted since the advent of the industrial revolution and have the potential to continue absorbing a large share of the CO2 released into the atmosphere. Researchers are exploring a variety of chemical, biological, and physical geoengineering projects to increase the ocean’s capacity to absorb carbon. Additional federal funds should be allotted to determine the feasibility and safety of these projects and then to develop and implement any that are found acceptable.

Iron fertilization, or “seeding” of the oceans, is perhaps the best known of these projects. Just as CO2 is used by plants during photosynthesis, CO2 dissolved in the oceans is absorbed and similarly used by autotrophic algae and other phytoplankton. The process “traps” the carbon in the phytoplankton; when the organism dies, it sinks to the seafloor, sequestering the carbon in the biogenic “ooze” that covers large swaths of the seafloor. However, many areas of the ocean high in the nutrients and sunlight necessary for phytoplankton to thrive lack a mineral vital to the phytoplankton’s survival: iron. Adding iron to the ocean has been shown to trigger phytoplankton blooms, and thus iron fertilization might increase the CO2 that phytoplankton will absorb. Studies note that the location and species of phytoplankton are poorly understood variables that affect the efficiency with which iron fertilization leads to the sequestration of CO2. In other words, the efficiency of iron fertilization could be improved with additional research. Proponents of exploring this option estimate that it could enable us to sequester CO2 at a cost of between $2 and $30/ton, far less than the cost of scrubbing CO2 directly from the air (about $1,000/ton) or from power plant smokestacks ($50-100/ton), according to one Stanford study.

Justine Serebrin

Growing up on the Southern California coast, Justine Serebrin spent countless hours snorkeling. From an early age she sensed that the ocean was in trouble as she noticed debris, trash, and decaying marine life consuming the shore. She credits her childhood experiences with influencing her artistic imagination and giving her a feeling of connectedness and lifelong love of the ocean.

Serebrin’s close observations of underwater landscapes inform her paintings, which are based upon what she describes as the “deep power” of the ocean. She has traveled to the beaches of Spain, Mexico, Hawaii, the Caribbean, and the western and eastern coasts of the United States. The variety of creatures, the cleanliness, the temperature, and the emotions evoked by each location greatly influence her artwork. She creates the paintings above water but is exploring the possibility of painting underwater in the future. Her goal with this project is to promote ocean awareness and stewardship.

Serebrin is currently working on The Illuminated Water Project, which will enable her to increase the scope and impact of her work. Her paintings have been exhibited at the Masur Museum of Art, Monroe, Louisiana; the New Orleans Museum of Art, Louisiana; and the McNay Museum of Art, San Antonio, Texas. She is a member of the Surfrider Foundation and the Ocean Artists Society. She is the co-founder of The Upper Six Hundreds Artist Collective, composed of artists, designers, musicians, writers, and many others who are working together to redefine the conventions of the traditional art gallery through an integration of creative practice and community engagement. She holds a BFA from Otis College of Art and Design, Los Angeles. Visit her website at http://www.justineserebrin.com/

Alana Quinn


JUSTINE SEREBRIN, Soul of The Sea, Oil on translucent paper, 25 × 40 inches, 2013.

Despite these promising findings, there are a number of challenges that prevent us from using the oceans as a major means of combating climate change. First, ocean “sinks” have already absorbed an enormous amount of CO2. It is not known how much more the oceans can actually absorb, because ocean warming seems to be altering the absorptive capacity of the oceans in unpredictable ways. It is further largely unknown how the oceans interact with the nitrogen cycle and other relevant processes.

Second, the impact of CO2 sequestration on marine ecosystems remains underexplored. The Joint Ocean Commission Initiative, which noted in a 2013 report that absorption of CO2 is “acidifying” the oceans, recommended that “the administration and Congress should take actions to measure and assess the emerging threat of ocean acidification, better understand the complex dynamics causing and exacerbating it, work to determine its impact, and develop mechanisms to address the problem.” The Department of Energy specifically calls for greater “understanding of ocean biogeochemistry” and of the likely impact of carbon injection on ocean acidification. Since the mid-18th century, the acidity of the surface of the ocean, measured by the water’s concentration of hydrogen ions, has increased by 30% on average, with negative consequences for mollusks, other calcifying organisms, and the ecosystems they support, according to the Blue Ribbon Panel on Ocean Acidification. Different ecosystems have also been found to exhibit different levels of pH variance, with certain areas such as the California coastline experiencing higher levels of pH variability than elsewhere. The cost worldwide of mollusk-production losses alone could reach $100 billion if acidification is not countered, says Monica Contestabile, an environmental economist and editor of Nature Climate Change. Much remains to be learned about whether and how carbon sequestration methods like iron fertilization could contribute to ocean acidification; it is, however, clearly a crucial subject of study given the dangers of climate change.

Food

Ocean products, particularly fish, are a major source of food for much of the world. People now eat four times as much fish, on average, as they did in 1950. The world’s catch of wild fish reached an all-time high of 86.4 million tons in 1996; although it has since declined, the world’s wild marine catch remained at 78.9 million tons in 2011. Fish and mollusks provide an “important source of protein for a billion of the poorest people on Earth, and about three billion people get 15 percent or more of their annual protein from the sea,” says Matthew Huelsenbeck, a marine scientist affiliated with the ocean conservation organization Oceana. Fish can be of enormous value to malnourished people because of their high levels of micronutrients such as vitamin A, iron, zinc, and calcium, as well as healthy fats.

However, many scientists have raised concerns about the ability of wild fish stocks to survive such exploitation. The Food and Agriculture Organization of the United Nations estimated that 28% of fish stocks were overexploited worldwide and a further 3% were depleted in 2008. Other sources estimate that 30% of global fisheries are overexploited or worse. There have been at least four severe documented fishery collapses—in which an entire region’s population of a fish species is overfished to the point of being incapable of replenishing itself, leading to the species’ virtual disappearance from the area—worldwide since 1960, a report from the International Risk Governance Council found. Moreover, many present methods of fishing cause severe environmental damage; for example, the Economist reported that bottom trawling causes up to 15,400 square miles of “dead zone” daily through hypoxia caused by stirring up phosphorus and other sediments.

There are several potential approaches to dealing with overfishing. One is aquaculture. Marine fish cultivated through aquaculture is reported to cost less than other animal proteins and does not consume limited freshwater sources. Furthermore, aquaculture has been a stable source of food from 1970 to 2006; that is, it consistently expanded and was very rarely subject to unexpected shocks. From 1992 to 2006 alone, aquaculture expanded from 21.2 to 66.8 million tons of product.


JUSTINE SEREBRIN, Sanctuary, Oil and watercolor on translucent paper, 25 × 40 inches, 2013.

Although aquaculture is rapidly expanding—more than 60% from 2000 to 2008—and represented more than 40% of global fisheries production in 2006, a number of challenges require attention if aquaculture is to significantly improve worldwide supplies of food. First, scientists have yet to understand the impact of climate change on aquaculture and fishing. Ocean acidification is likely to damage entire ecosystems, and rising temperatures cause marine organisms to migrate away from their original territory or die off entirely. It is important to study the ways that these processes will likely play out and how their effects might be mitigated. Second, there are concerns that aquaculture may harm wild stocks of fish or the ecosystems in which they are raised through overcrowding, excess waste, or disease. This is particularly true where aquaculture is devoted to growing species alien to the region in which they are produced. Third, there are few industry standard operating practices (SOPs) for aquaculture; additional research is needed for developing these SOPs, including types and sources of feed for species cultivated through aquaculture. Finally, in order to produce a stable source of food, researchers must better understand how biodiversity plays a role in preventing the sudden collapse of fisheries and develop best practices for fishing, aquaculture, and reducing bycatch.

On the issue of food, NASA is atypically mum. It does not claim it will feed the world with whatever it finds or plans to grow on Mars, Jupiter, or any other distant destination. The oceans, by contrast, are likely to be of great help.

Energy

NASA and its supporters have long held that its work can help address the Earth’s energy crises. One NASA project calls for developing low-energy nuclear reactors (LENRs) that use the weak nuclear force to create energy, but even NASA admits that “we’re still many years away” from large-scale commercial production. Another project envisioned orbiting space-based solar power (SBSP) satellites that would transfer energy wirelessly to Earth. The idea was proposed in the 1960s by the scientist Peter Glaser and has since been revisited by NASA; from 1995 to 2000, NASA actively investigated the viability of SBSP. Today, the project is no longer actively funded by NASA, and SBSP remains commercially unviable due to the high cost of launching and maintaining satellites and the challenges of wirelessly transmitting energy to Earth.


JUSTINE SEREBRIN, Metamorphosis, Oil on translucent paper, 23.5 × 18 inches, 2013.

Marine sources of renewable energy, by contrast, rely on relatively mature technologies; these technologies deserve additional research to make them fully commercially viable. One possible ocean renewable energy source is wave energy conversion, which uses the up-and-down motion of waves to generate electrical energy. Potentially usable global wave power is estimated at two terawatts, the equivalent of about 200 large power stations or about 10% of the entire world’s predicted energy demand for 2020, according to the World Ocean Review. In the United States alone, wave energy is estimated to be capable of supplying fully one-third of the country’s energy needs.

One of the first modern wave energy conversion devices, known as Salter’s duck, was developed in the 1970s; it produced electricity at a whopping cost of almost $1/kWh. Since then, wave energy conversion has become vastly more commercially viable. A report from the Department of Energy in 2009 listed nine different designs in pre-commercial development or already installed as pilot projects around the world. As of 2013, as many as 180 companies are reported to be developing wave or tidal energy technologies; one device, the Anaconda, produces electricity at a cost of $0.24/kWh. The United States Department of Energy and the National Renewable Energy Laboratory jointly maintain a website that tracks the average cost/kWh of various energy sources; on average, ocean energy overall must cost about $0.23/kWh to be profitable. Some projects have been more successful; the prototype LIMPET wave energy conversion technology currently operating on the coast of Scotland produces wave energy at a price of $0.07/kWh. For comparison, the average consumer in the United States paid $0.12/kWh in 2011. Additional research could further reduce the costs.

Other options in earlier stages of development include using turbines to capture the energy of ocean currents. The technology is similar to that used by wind energy; water moving through a stationary turbine turns the blades, generating electricity. However, because water is so much denser than air, “for the same surface area, water moving 12 miles per hour exerts the same amount of force as a constant 110 mph wind,” says the Bureau of Ocean Energy Management (BOEM), a division of the Department of the Interior. (Another estimate from a separate BOEM report holds that a 3.5 mph current “has the kinetic energy of winds in excess of [100 mph].”) BOEM further estimates that total worldwide power potential from currents is five terawatts—about a quarter of predicted global energy demand for 2020—and that “capturing just 1/1,000th of the available energy from the Gulf Stream…would supply Florida with 35% of its electrical needs.”

Although these technologies are promising, additional research is needed not only for further development but also to adapt them to regional differences. For instance, ocean wave conversion technology is suitable only in locations in which the waves are of the same sort for which existing technologies were developed and in locations where the waves also generate enough energy to make the endeavor profitable. One study shows that thermohaline circulation—ocean circulation driven by variations in temperature and salinity—varies from area to area, and climate change is likely to alter thermohaline circulation in the future in ways that could affect the use of energy generators that rely on ocean currents. Additional research would help scientists understand how to adapt energy technologies for use in specific environments and how to avoid the potential environmental consequences of their use.

Renewable energy resources are the ocean’s most attractive energy product; they contribute much less than coal or natural gas to anthropogenic greenhouse gas emissions. However, it is worth noting that the oceans do hold vast reserves of untapped hydrocarbon fuels. Deep-sea drilling technologies remain immature; although it is possible to use oil rigs in waters of 8,000 to 9,000 feet, greater depths require the use of specially designed drilling ships that still face significant challenges. Deep-water drilling that takes place in depths of more than 500 feet is the next big frontier for oil and natural-gas production, projected to expand offshore oil production by 18% by 2020. One should expect the development of new technologies that would enable drilling for petroleum and natural gas at even greater depths than presently possible and under layers of salt and other barriers.

In addition to developing these technologies, entire other lines of research are needed either to mitigate the side effects of large-scale usage of these technologies or to guarantee that these effects are small. Although it has recently become possible to drill beneath Arctic ice, the technologies are largely untested. Environmentalists fear that ocean turbines could harm fish or marine mammals and that wave conversion technologies could disturb ocean floor sediments, impede the migration of marine animals, prevent waves from clearing debris, or otherwise harm wildlife. Demand has pushed countries to develop technologies to drill for oil beneath ice or in the deep sea without much regard for the safety or environmental concerns associated with oil spills. At present, there is no developed method for cleaning up oil spills in the Arctic, a serious problem that requires additional research if Arctic drilling is to commence on a larger scale.

More ocean potential

When large quantities of public funds are invested in a particular research and development project, particularly when the payoff is far from assured, it is common for those responsible for the project to draw attention to the additional benefits—“spinoffs”—generated by the project as a means of adding to its allure. This is particularly true if the project can be shown to improve human health. Thus, NASA has claimed that its space exploration “benefit[ted] pharmaceutical drug development” and assisted in developing a new type of sensor “that provides real-time image recognition capabilities,” that it developed an optics technology in the 1970s that now is used to screen children for vision problems, and that a type of software developed for vibration analysis on the Space Shuttle is now used to “diagnose medical issues.” Similarly, opportunities to identify the “components of the organisms that facilitate increased virulence in space” could in theory—NASA claims—be used on Earth to “pinpoint targets for anti-microbial therapeutics.”

Ocean research, as modest as it is, has already yielded several medical “spinoffs.” The discovery of one species of Japanese black sponge, which produces a substance that successfully blocks division of tumorous cells, led researchers to develop a late-stage breast cancer drug. An expedition near the Bahamas led to the discovery of a bacterium that produces substances that are in the process of being synthesized as antibiotics and anticancer compounds. In addition to the aforementioned cancer-fighting compounds, chemicals that combat neuropathic pain, treat asthma and inflammation, and reduce skin irritation have been isolated from marine organisms. One Arctic Sea organism alone produced three antibiotics. Although none of the three ultimately proved pharmaceutically significant, current concerns that strains of bacteria are developing resistance to the “antibiotics of last resort” are a strong reason to increase funding for bioprospecting. Additionally, the blood cells of horseshoe crabs contain a chemical—which is found nowhere else in nature and so far has yet to be synthesized—that can detect bacterial contamination in pharmaceuticals and on the surfaces of surgical implants. Some research indicates that between 10 and 30 percent of horseshoe crabs that have been bled die, and that those that survive are less likely to mate. Additional research could indicate ways these creatures can be better protected. Up to two-thirds of all marine life remains unidentified, with 226,000 eukaryotic species already identified and more than 2,000 species discovered every year, according to Ward Appeltans, a marine biologist at the Intergovernmental Oceanographic Commission of UNESCO.

Contrast these discoveries of new species in the oceans with the frequent claims that space exploration will lead to the discovery of extraterrestrial life. For example, in 2010 NASA announced that it had made discoveries on Mars “that [would] impact the search for evidence of extraterrestrial life” but ultimately admitted that they had “no definitive detection of Martian organics.” The discovery that prompted the initial press release—that NASA had discovered a possible arsenic pathway in metabolism and that thus life was theoretically possible under conditions different from those on Earth—was then thoroughly rebutted by a panel of NASA-selected experts. The comparison with ocean science is especially stark when one considers that oceanographers have already discovered real organisms that rely on chemosynthesis—the process of making glucose from water and carbon dioxide by using the energy stored in the chemical bonds of inorganic compounds—living near deep-sea vents at the bottom of the oceans.

The same is true of the search for mineral resources. NASA talks about the potential for asteroid mining, but it will be far easier to find and recover minerals suspended in ocean waters or beneath the ocean floor. Indeed, resources beneath the ocean floor are already being commercially exploited, whereas there is not a near-term likelihood of commercial asteroid mining.


JUSTINE SEREBRIN, Jellyfish Love, Oil on translucent paper, 11 × 14 inches, 2013.


JUSTINE SEREBRIN, Scarab, Digital painting, 40 × 25 inches, 2013.

Another major justification cited by advocates for the pricey missions to Mars and beyond is that “we don’t know” enough about the other planets and the universe in which we live. However, the same can be said of the deep oceans. Actually, we know much more about the Moon and even about Mars than we know about the oceans. Maps of the Moon are already strikingly accurate, and even amateur hobbyists have crafted highly detailed pictures of the Moon—minus the “dark side”—as one set of documents from University College London’s archives seems to demonstrate. By 1967, maps and globes depicting the complete lunar surface were produced. By contrast, about 90% of the world’s oceans had not yet been mapped as of 2005. Furthermore, for years scientists have been fascinated by noises originating at the bottom of the ocean, known creatively as “the Bloop” and “Julia,” among others. And the world’s largest known “waterfall” can be found entirely underwater between Greenland and Iceland, where cold, dense Arctic water from the Greenland Sea drops more than 11,500 feet before reaching the seafloor of the Denmark Strait. Much remains poorly understood about these phenomena, their relevance to the surrounding ecosystem, and the ways in which climate change will affect their continued existence.

In short, there is much that humans have yet to understand about the depths of the oceans, further research into which could yield important insights about Earth’s geological history and the evolution of humans and society. Addressing these questions surpasses the importance of another Mars rover or a space observatory designed to answer highly specific questions of importance mainly to a few dedicated astrophysicists, planetary scientists, and select colleagues.

Leave the people at home

NASA has long favored human exploration, despite the fact that robots have become much more technologically advanced and that their (one-way) travel poses much lower costs and next to no risks compared to human missions. Still, the promotion of human missions continues; in December 2013, NASA announced that it would grow basil, turnips, and Arabidopsis on the Moon to “show that crop plants that ultimately will feed astronauts and moon colonists and all, are also able to grow on the moon.” However, Martin Rees, a professor of cosmology and astrophysics at Cambridge University and a former president of the Royal Society, calls human spaceflight a “waste of money,” pointing out that “the practical case [for human spaceflight] gets weaker and weaker with every advance in robotics and miniaturisation.” Another observer notes that “it is in fact a universal principle of space science—a ‘prime directive,’ as it were—that anything a human being does up there could be done by unmanned machinery for one-thousandth the cost.” The cost of sending humans to Mars is estimated at more than $150 billion. The preference for human missions persists nonetheless, primarily because NASA believes that human spaceflight is more impressive and will garner more public support and taxpayer dollars, despite the fact that most of NASA’s scientific yield to date, Rees shows, has come from the Hubble Space Telescope, the Chandra X-Ray Observatory, the Kepler space observatory, space rovers, and other missions. NASA relentlessly hypes the bravery of the astronauts and the pioneering aspirations of all humanity despite a lack of evidence that these missions engender any more than a brief high for some.

Ocean exploration faces similar temptations. There have been some calls for “aquanauts,” who would explore the ocean much as astronauts explore space, and for the prioritization of human exploration missions. However, relying largely on robots and remote-controlled submersibles seems much more economical, nearly as effective at investigating the oceans’ biodiversity, chemistry, and seafloor topography, and far safer than sending human agents. In short, it is no more reasonable to send aquanauts to explore the seafloor than it is to send astronauts to explore the surface of Mars.

Several space enthusiasts are seriously talking about creating human colonies on the Moon or, eventually, on Mars. In the 1970s, for example, NASA’s Ames Research Center spent tax dollars to design several models of space colonies meant to hold 10,000 people each. Other advocates have suggested that it might be possible to “terra-form” the surface of Mars or other planets to resemble that of Earth by altering the atmospheric conditions, warming the planet, and activating a water cycle. Other space advocates envision using space elevators to ferry large numbers of people and supplies into space in the event of a catastrophic asteroid hitting the Earth. Ocean enthusiasts dream of underwater cities to deal with overpopulation and “natural or man-made disasters that render land-based human life impossible.” The Seasteading Institute, Crescent Hydropolis Resorts, and the League of New Worlds have developed pilot projects to explore the prospect of housing people and scientists under the surface of the ocean. However, these projects are prohibitively expensive and “you can never sever [the surface-water connection] completely,” says Dennis Chamberland, director of one of the groups. NOAA also invested funding in a habitat called Aquarius built in 1986 by the Navy, although it has since abandoned this project.

If anyone wants to use their private funds for such outlier projects, they surely should be free to proceed. However, for public funds, priorities must be set. Much greater emphasis must be placed on preventing global calamities rather than on developing improbable means of housing and saving a few hundred or thousand people by sending them far into space or deep beneath the waves.

Reimagining NOAA

These select illustrative examples should suffice to demonstrate the great promise of intensified ocean research, a heretofore unrealized promise. However, it is far from enough to inject additional funding, which can be taken from NASA if the total federal R&D budget cannot be increased, into ocean science. There must also be an agency with a mandate to envision and lead federal efforts to bolster ocean research and exploration the way that President Kennedy and NASA once led space research and “captured” the Moon.

For those who are interested in elaborate reports on the deficiencies of existing federal agencies’ attempts to coordinate this research, the Joint Ocean Commission Initiative (JOCI)—the foremost ocean policy group in the United States and the product of the Pew Oceans Commission and the United States Commission on Ocean Policy—provides excellent overviews. These studies and others reflect the tug-of-war that exists among various interest groups and social values. Environmentalists and those concerned about global climate change, the destruction of ocean ecosystems, declines in biodiversity, overfishing, and oil spills clash with commercial groups and states more interested in extracting natural resources from the oceans, in harvesting fish, and in using the oceans for tourism. (One observer noted that only 1% of the 139.5 million square miles of the ocean is conserved through formal protections, whereas billions use the oceans “as a ‘supermarket and a sewer.’”) And although these reports illuminate some of the challenges that must be surmounted if the government is to institute a broad, well-funded set of ocean research goals, none of these groups have added significant funds to ocean research, nor have they taken steps to provide a NASA-like agency to take the lead in federally supported ocean science.

NOAA is the obvious candidate, but it has been hampered by a lack of central authority and by the existence of many disparate programs, each of which has its own small group of congressional supporters with parochial interests. The result is that NOAA has many supporters of its distinct little segments but too few supporters of its broad mission. Furthermore, Congress micromanages NOAA’s budget, leaving too little flexibility for the agency to coordinate activities and act on its own priorities.

Pulling these pieces together—let alone consolidating the bewildering number of projects—would be difficult under the best of circumstances. Several administrators of NOAA have made significant strides in this regard and should be recognized for their work. However, Congress has saddled the agency with more than 100 ocean-related laws that require the agency to promote what are often narrow and competing interests. Moreover, NOAA is buried in the Department of Commerce, which itself is considered to be one of the weaker cabinet agencies. For this reason, some have suggested that it would be prudent to move NOAA into the Department of the Interior—which already includes the United States Geological Survey, the Bureau of Ocean Energy Management, the National Park Service, the U.S. Fish and Wildlife Service, and the Bureau of Safety and Environmental Enforcement—to give NOAA more of a backbone.

Moreover, NOAA is not the only federal agency that deals with the oceans. There are presently ocean-relevant programs in more than 20 federal agencies—including NASA. For instance, the ocean exploration program that investigates deep ocean currents by using satellite technology to measure minute differences in elevation on the surface of the ocean is currently controlled by NASA, and much basic ocean science research has historically been supported by the Navy, which has lost much of its interest in the subject since the end of the Cold War. (The Navy does continue to fund some ocean research, but at levels much lower than before.) Many of these programs should be consolidated into a Department of Ocean Research and Exploration that would have the authority to do what NOAA has been prevented from doing: namely, direct a well-planned and coordinated ocean research program. Although the National Ocean Council’s interagency coordinating structure is a step in the right direction, it would be much more effective to consolidate authority for managing ocean science research under a new independent agency or a reimagined and strengthened NOAA.

Setting priorities for research and exploration is always needed, but this is especially true in the present age of tight budgets. It is clear that oceans are a little-studied but very promising area for much enhanced exploration. By contrast, NASA’s projects, especially those dedicated to further exploring deep space and to manned missions and stellar colonies, can readily be cut. More than moving a few billion dollars from the faraway planets to the nearby oceans is called for, however. The United States needs an agency that can spearhead a major drive to explore the oceans—an agency that has yet to be envisioned and created.

Amitai Etzioni (etzioni@gwu.edu) is University Professor and professor of International Affairs and director of the Institute for Communitarian Policy Studies at George Washington University.

Collective Forgetting: Inside the Smithsonian’s Curatorial Crisis

ALLISON MARSH

WITH LIZZIE WADE

Federal budget cutting is undermining the value of the museums’ invaluable collections by reducing funds for maintenance, cataloging, acquisition, and access.

As the hands on my watch hit 11 o’clock, I was still fighting with the stubborn dust clinging to my chocolate-colored pants. The dust was winning. I knew it was unlikely the senator would actually show up for the tour, but I wanted to look presentable, just in case: for months, I had been writing my speech in my head. Now, maybe I would get to say some of it out loud to someone who might be in a position to help.

Standing in the doorway of the reference room of the Division of Work and Industry, beyond the reach of the general public, on the top floor of the National Museum of American History in Washington, D.C., I mentally rehearsed my pitch. As my watch clicked past 11:05, my welcoming smile started to fade. By 11:10, it was gone completely. Noticing my mounting frustration at the missed opportunity, Ailyn, a Smithsonian behind-the-scenes volunteer, looked up from her paperwork, stretched, and ran a hand through her cinnamon hair. She flashed me a sympathetic look. “I hate VIPs,” she said with her crisp Cuban accent. “They never show up on time.”

Five minutes later, I finally heard high heels clicking down the linoleum hallway. I plastered my smile back into place as the museum director’s assistant ushered our guests through the door. I tried to hide my disappointment when I realized that Martin Heinrich, the junior senator from New Mexico and the VIP I had been waiting for, would not be joining us after all. In a whirlwind of handshakes, I tried to catch names and identifying reference points. I already knew that the gentleman was an architect who had the ear of the museum’s director, but who were the two older women? I didn’t know and would never find out, but they seemed curious and eager nonetheless, so I started my tour.

In preparation for the VIPs’ visit, I’d removed some of my favorite objects from their crowded storage cases in the locked area adjacent to the more welcoming and slightly less cluttered reference room. I’d carefully arranged the artifacts on two government-issued 1960s green metal tables for inspection, but before I could don my purple nitrile gloves and make my introductions, the architect made a beeline for a set of blueprints in a soft leather binding. He smiled as he recognized the Empire State Building. Floor by floor, the names of the business tenants—some of them still familiar, but many more long forgotten—revealed themselves as he flipped through the pages. I had playfully left the book open to the forty-ninth floor, where the Allison Manufacturing Company had occupied the northeast corner of the building. Maybe it would help him remember my name.

“Do you know who did these drawings?” he asked me.

“Morris Jacks, a consulting engineer,” I offered. “He made them in 1968 as part of a tax assessment. He calculated the building’s steel to have a lifetime of sixty years.”

All three visitors quickly turned to me with worried looks. The Empire State Building was built in the 1930s. Was it going to collapse?

“Don’t worry. The steel itself is fine!” I assured them. “The Empire State Building isn’t going to fall down. Jacks was actually estimating the building’s social lifespan. He thought that after sixty years, New York City would need to replace it with something more useful.” As the visitors laughed in disbelief, I sensed an opening to start my pitch, to make the argument I’d been waiting six months to make. “Engineering drives change in America, and history can show the social implications of those technological choices. Museums have the power to—”

“Wait. What’s this?” One of the women was pointing to a patent model occupying the center of the table. She bent down to get a closer look at the three brass fans mounted in a row on a block of rich brown mahogany. They were miniature windmills, only six inches tall. Their blades were so chunky that it was hard to see the spaces where the wind would have threaded through them—a far cry from the sleek, razor-thin arms whipping around by the hundreds on today’s wind farms. But follow the wires jutting out of their backs, and you could see what made this patent model special: a battery pack. Inventor and entrepreneur Moses G. Farmer was proposing a method to store power generated by the wind. If the model were full-sized, that battery could power your house.

“It’s a way to keep the lights on even when the wind isn’t blowing,” I explained, hoping she might be in a position to report back to Senator Heinrich, who I knew served on the Senate Committee on Energy and Natural Resources. As the only engineer in the Senate, he would understand the technical difficulties of an intermittent power supply—a major hurdle in the clean energy industry. When I told the woman that Farmer was the first person to submit a patent application trying to solve the problem—in 1880—her jaw dropped.


One of the guests spotted our intern, Addie, quietly working at another desk tucked in a corner of the reference room, having been temporarily displaced by the impromptu tour. The group gathered around her cramped desk, marveling at her minute penmanship as she diligently numbered the delicate edges of mid-twentieth-century teacups arranged in neat rows on white foam board. No one had difficulty imagining well-dressed ladies sipping from them aboard a Cunard Line cruise across the Atlantic.

In our remaining time, my tour group barely let me get a word in edgewise—a few details about any object started them chattering excitedly to each other. Before I could lead the party into the back storerooms, where the real treasures were kept, the museum director himself arrived, commandeered the group, and swept them out the door with a practiced authority and a mandate to keep them on schedule. My half-hour tour of NMAH’s mechanical and civil engineering collection had been slashed to fifteen minutes. Their voices drifted as they walked down the hall. “What an amazingly intelligent staff you have,” one of the guests remarked.

The compliment was bittersweet, at best. None of us was actually on staff at NMAH. Ailyn was a volunteer who gave her time in retirement to help the Division of Work and Industry keep its records in order. Addie had only two more weeks left in her unpaid internship; she had recently graduated with an undergraduate history degree and was looking for a permanent job. I was a research fellow on a temporary leave from my faculty position in the history department at the University of South Carolina, a job to which I’d be returning in another month. That was what I had been dying to tell them: the NMAH engineering collection doesn’t have a curator, and it won’t be able to get one without help from powerful supporters, like a senator who could speak up for engineers. My guests had been so engaged with what they were seeing that I didn’t even have a chance to tell them that they were catching a rare glimpse into a collection at risk.

Since its founding in 1846, the Smithsonian Institution has served as the United States’ premier showplace of wonder. Each year, more than 30 million visitors pass through its twenty-nine outposts, delighting at Dorothy’s ruby slippers, marveling at the lunar command module, and paying their respects to the original star-spangled banner. Another 140 million people visited the Smithsonian on the Web last year alone (and not just for the National Zoo’s panda cam). Collectively, the Smithsonian preserves more than 137 million objects, specimens, and pieces of art. It’s the largest museum and research complex in the world.

But behind the brilliant display cases, the infrastructure is starting to crack. The kind of harm being done to the Smithsonian’s collections is not the quick devastation of a natural disaster, nor the malicious injury of intentional abandonment. Rather, it’s a gradual decay resulting from decades of incremental decisions by directors, curators, and collections managers, each almost imperceptibly compounding the last. Over time, shifting priorities, stretched budgets, and debates about the purpose of the museum have resulted in fewer curators and neglected collections.

In 1992, NMAH employed forty-six curators. Twenty years later, it has only twenty-one. Frustrated, overworked, and tasked with the management of objects that fall far beyond their areas of considerable expertise, the remaining curators can keep their heads above water only by ignoring the collections they don’t know much about. These collections become orphans, pushed deeper into back rooms and storage spaces. Cut off from public view and neglected by the otherwise occupied staff, these orphaned collections go into a state of suspended animation, frozen in time the day they were forgotten. With no one to advocate for them, their institutional voices fade away.

Engineering is just one of these collections. Formally established in 1931, the collection predates the 1964 founding of the National Museum of History and Technology (the precursor to today’s NMAH). But today, its objects are routinely overlooked when the remaining curators plan exhibits. If you wanted to tour the collection, you’d have to be a senator-level VIP or have friends on the inside. Even established scholars have trouble making research-related appointments. There’s simply no one available to escort them into the collections. What’s more, no one is actively adding to the collection, leaving vast swaths of the late twentieth and early twenty-first centuries—inarguably, an essential time for engineering and technology—unrepresented in the United States’ foremost museum of history.

It is difficult to trace the curatorial crisis back to a single origin. The Smithsonian’s budget is labyrinthine: federal money funds much of the institution’s infrastructure and permanent staff positions, while private donations finance new exhibitions and special projects. A central fund is spread across the museums for pan-institutional initiatives, but each individual museum also has its own fundraising goals. As with any cultural institution, federal support fluctuates according to the political whim of Congress, and charitable donations are dependent on the overall health of the nation’s economy, the interests of wealthy donors, and the public relations goals of (and tax incentives for) corporations. Museum directors have to juggle different and sometimes conflicting priorities—including long-term care of the collections, public outreach, new research, and new exhibits—each fighting for a piece of a shrinking budgetary pie. In fiscal year 2013 alone, sequestration cut $42 million, or 5 percent, of the Smithsonian’s federal funding, forcing the institution to make painful choices. Without money, the Smithsonian can’t hire people. Without people, the Smithsonian can’t do its job.

For my part, I was one of about a dozen temporary fellows brought in to NMAH last year thanks to funding from Goldman Sachs. Spread out across the museum’s many departments, we were supposed to help “broaden and diversify the museum’s perspective and extend its capabilities far beyond those of its current resources.” For six months, I would take on the work of a full-time curator and try, however briefly, to stem the tide of collective forgetting.

When I arrived at NMAH in January of 2013, I felt as if I were reuniting with old friends. I had spent significant time in the engineering collection as a graduate student while researching my dissertation on the history of factory tours. Now, many years later, I spend much of my time in the classroom, lecturing to uninterested undergrads on the history of technology or pushing graduate students to think more broadly about the purpose of museums. I was looking forward to the chance to get my hands dirty, inspecting artifacts and doing real museum work.

I had underestimated the actual level of dirt. Despite the collections manager’s best efforts, construction dust from a building improvement project had seeped through the protective barriers and coated archival boxes filled with reference materials. As I pulled items off the shelves, puffs of it wafted up to my nose. My eyes watered and my nose itched as I wiped away the new dust that had silently settled upon the old. The strata of grime made the exercise nearly archaeological. My allergist would have been horrified.

On red-letter days, I rediscovered national treasures that had spent years buried under the dust: the Empire State Building blueprints, for example, or the original plans for Grand Central Terminal. But I spent most days sifting through an onslaught of the mundane—the national rebar collection, engine indicators, or twenty years’ worth of one engineer’s daily planners. I began to better appreciate why no one had been eager to take on the task of sorting through all these shelves full of obsolete ideas. One typical spring morning, I slid open the glass doors of a storage case and pulled out sixteen patent models of boiler valves.

Trying to understand why, exactly, the museum had gone to the trouble of collecting so many boiler valves, I thumbed through Bulletin 173 of the United States National Museum, a catalog of the mechanical collections compiled back in 1939 to document the work done by the Division of Engineering’s founding curators. It appeared they were trying to document progress in standardizing safety laws. The nineteenth century was the Age of Steam, giving rise to the locomotives and factories of the Industrial Revolution. Steam boilers generated a tremendous amount of power, but they were also notoriously treacherous. Frequent explosions killed workers, but no legal codes existed to regulate the construction and operation of pressure vessels until 1915, over one hundred years after they came into widespread use. In the rows upon rows of boiler valves on the shelves in front of me, I began to glimpse years of work by countless engineers desperately trying—with varying degrees of success—to devise a technological fix to a serious problem.

Curators are custodians of the past, but they must also collect the present in anticipation of the future. They grab hold of ideas, like the increasing importance of workplace safety at the tail end of the Industrial Revolution, and attempt to illustrate them with physical objects, like the boiler valves. Curators can’t always predict which new technologies or interests will guide future research, but they can nevertheless preserve the potential of latent information and make it available to its future interpreters. Long after such artifacts have been rendered obsolete in the world outside the museum, the curators at NMAH hold them in trust for the American public.

I lined up all of the boiler valves in a row on a table in the reference room. At one point, they had told a story—a story those founding curators had hoped to share with audiences reaching far into the future—and my job now was to make sure that story could be rescued and retold by the curators and visitors whom I hoped would come after me. With minimal written information about the valves’ importance and provenance, I sat down to do what I was trained to do: read the objects.

Some looked like water faucets; others like Rube Goldberg contraptions. I picked up a bronze model that, with its cylindrical air chamber and protruding nozzles, vaguely resembled a toy water gun. The 1939 catalog indicated that this valve was patented in 1858 by a man named Frick. He had designed it to be an all-in-one machine, combining a valve with nozzles to relieve dangerous levels of pressure inside the boiler, an alarm to warn of any failure to do so, and water jets to extinguish the resulting fires. I updated the database with this information, wondering what type of inventor Frick was. I wondered if he felt the weight of his failure every time he read about another deadly boiler explosion in the newspaper. Maybe he smiled and patted himself on the back each day he didn’t. Finally, I clicked save and pushed the entry into the queue for legal review so that it could be put in the Smithsonian’s online collection database. Then I moved on to the next mute artifact waiting for me to give it a voice.

Hours later, I stepped back from my notes. I now had object-level descriptions for each one of the models. Thanks to an inventory team working under temporary contracts, they all had been recently photographed as well. With a photo and a description, the valves could be put online, their digital records available to anyone with an Internet connection. Still, online patrons won’t be able to feel the weight of these artifacts in their hands, nor will they be viscerally overwhelmed by their numbers, as I was. They will have no physical sense of the abundance of artifacts, the piles of stuff that tell the stories of countless engineers working together and on their own to solve the problems of their times.

I sighed, collected my notes, and put the valves back in storage.

How will future historians—or even just curious museum visitors—see our current technological moment represented at NMAH? If the Smithsonian’s resource crunch continues down its current path, chances are they won’t see it at all. Without a curator, a collection cannot grow and evolve. Future visitors to the engineering collection will learn about coal mining in the nineteenth century, but there will be no objects helping them understand hydraulic fracking in the twenty-first. Researchers will be able to examine building plans of every station, bridge, and tunnel on every route once covered by the Pennsylvania Railroad, but they won’t see an example of a cable-stayed bridge, a design that has dominated in recent decades. They will be able to see designs for the 1885 Milwaukee municipal water supply system, but they won’t see the much more recent inventions that keep our drinking water safe today.

What’s more, NMAH is fast approaching a generational cliff. Sixty-eight percent of the staff is eligible to retire today, but the younger guard feels like the Baby Boomers will never leave. Some Boomers haven’t been able to save enough to retire, but many more are simply dedicated to the job and are unwilling to see their life’s work mothballed. They can’t imagine leaving without passing the torch to replacements—replacements they know the museum can’t afford to hire. But the alternative isn’t any better; eventually, these aging curators will die without anyone to take their places. Their collections will join the dozens of orphans the Smithsonian already struggles to care for.

The new director and deputy director of NMAH understand the challenges they have inherited and are working toward stemming the tide. Their 2013 strategic plan lists “revitalizing and increasing the staff” as one of the four main priorities for the museum. Doubting that any more public funds will be forthcoming in the near future, NMAH—reflecting a trend at the Smithsonian more generally—is staking its future on private fortunes, courting billionaires to endow curatorial positions tied to blockbuster future exhibits. That leads to a backward hiring process, in which new curators are hired only after the exhibits they are tasked with planning are underway. And because collections that are already curator-less, like engineering, lack the institutional voice required to push for and plan the kind of blockbuster exhibits that attract these endowed positions, they are left out of both exhibit halls and development plans. They are orphaned twice over.

Almost daily during my six months at NMAH, I was pulled into informal conversations about the future of the museum. Over lunch or in the hallway, I talked with curators and collections managers about where we might get a grant to extend the contracts of the inventory team. Sometimes we dreamed bigger, wishing for a new floor plan for the storage areas. Imagine if we could knock down the walls and open up the rooms, put in moveable shelving, and reunite collections that had been dispersed across the museum!

On days of extraordinary frustration—when everyone just felt overworked, underpaid, and overwhelmed by the challenges the museum faces—we all secretly harbored a desire to leak insider stories to The Washington Post or the Government Accountability Office. But we almost always held back. Everyone who works for the Smithsonian loves the ideals the institution aspires to. They fundamentally believe in the founding mission: “For the increase and diffusion of knowledge.” We want to be able to have frank discussions about the needs of the museum, but we worry that if we speak too loudly, we might unjustly diminish its institutional authority. Even with its faults, the Smithsonian remains a seriously amazing place. But without public advocacy—which depends on public awareness—the Smithsonian’s curatorial crisis has no hope of being solved.

In March of 2013, about halfway into my Smithsonian residency, I gave a deliberately provocative presentation at the weekly colloquium where curators and visiting researchers give talks based on their current work. Although the colloquia are open to the public and occasionally attract graduate students and professors from local universities, they are mostly insider affairs. I titled my talk “Because Engineers Rule the World” and had NMAH’s social media coordinator live-tweet it to the museum’s thousands of Twitter followers. I showcased some of my favorite objects, gave a brief history of the engineering collection, and then outlined some suggestions for what its curatorial future might look like. It’s not hard to imagine two diverging possibilities. Down one dark path, the engineering collection becomes so neglected that it cannot be resurrected. No one knows how the collected technologies once worked or why future generations should study them. The objects become pieces of Steampunk art, beautiful, perhaps, but irrelevant to working engineers. A brighter future, on the other hand, might feature additional curatorial staff, perhaps funded by engineering professional societies or major companies. I had already tried to tip the scales in favor of engineering by offering NMAH’s development officer a list of Global 100 companies, highlighting objects of theirs that we had in the collection. We were preserving their history. The least they could do, it seemed, was help us pay for it.

The Smithsonian is not where we store the remnants of what we have forgotten. It’s where we go when we want to remember. Its curators help us access and interpret our country’s collective memory.

Robert Vogel, one of the last curators of engineering, now retired, sat staunchly in the audience listening to my talk. During the Q&A, he reminisced about the glorious engineering exhibits of the past, and I realized almost all of them had been dismantled and shipped to off-site storage over the past two decades. The last remaining public homage to engineering, the Hall of Power Machinery, sits forlornly in a dimly lit, uninviting back corner of the first floor’s east wing. For the intrepid visitors who wander into the space, the objects give a glimpse into the technological history of the Industrial Revolution. But since the exhibit cases are so old that they do not meet current safety standards, they cannot even be opened to change the long burnt-out light bulbs.

The week after my colloquium talk, I participated in the Smithsonian’s (somewhat) annual April Fool’s Day Conference on Stuff. This year’s theme was “grease”; it seemed only fitting that the engineering collection, which is filled with big greasy things, should be represented. I gave a tongue-in-cheek presentation on the history of gun control—grease guns, that is. In a seven-minute romp through the collections, I talked about jars of grease, bottles of grease, cans of grease, cups of grease, tubes of grease, and even two pots of grease that have been missing since the Congressionally mandated inventory of 1979 (they were in the vintage database, but I never could track them down). From the grease gun that came with the 1903 Winton convertible to the WWII-era M3 submachine gun, which earned the affectionate nickname “the greaser,” I drew attention to objects that had not been out of storage in decades.

Taken together, these two public talks created a momentum that knocked me out of my host department and propelled me into conversations with curators and staff in art, medicine, education, programming, and other divisions. My colleagues laughed at the April Fool’s Day lecture but were also astonished by the objects I’d uncovered and the latent connections that were just waiting to be made across the museum. I’d made my point: they could see what was being missed without a curator, what stories weren’t being told. More important, perhaps, my public acknowledgment that the engineering collections were orphaned unleashed a flood of responses from museum staff. A curator in the Division of Medicine and Science stopped me in the hall to tell me, “Ten years without a curator? That’s nothing. We have collections that haven’t been curated in decades.”

With only a few weeks left in my fellowship, I decided I had to make a move. I drafted a two-line email, agonizing over every single word. Finally, I took a deep breath and hit send. Instant second-guessing. The Smithsonian is an institution with a deep hierarchical arrangement of authority, and I had just pole vaulted over four levels of supervisors, directors, and undersecretaries to email Dr. Wayne Clough.

Clough, a civil engineer by training, was president of Georgia Tech before accepting his appointment as the twelfth-ever Secretary of the Smithsonian. If I was going to make a case on behalf of the engineering collection, I figured now was the time. I invited him for a tour; somewhat to my surprise, he happily accepted. We emailed back and forth about possible dates before Grace, his scheduler, stepped in with a definitive time. May 30, 3 p.m.

My gamble had paid off, so I decided to roll the dice again and emailed Senator Heinrich. Jackpot! Catherine, the Senator’s scheduler, said he would love to stop by, barring any committee calls for votes.

I had about two minutes to daydream about the perfect meeting—three engineers getting together to talk shop. I imagined a conversation about how our engineering backgrounds prepared us for our current jobs, which were not technical at all. I thought we could talk about who today’s engineering rock stars are—people, companies, and ideas that NMAH should be collecting. We could discuss STEM education initiatives and how a national engineering collection could be featured within a history museum.

Then a flood of emails rushing into my inbox knocked me back to political reality. First came a message from the Smithsonian’s Office of Government Relations, then an email from the administrative assistant for the Director of Curatorial Affairs, then another from the scheduler for NMAH’s director—all grabbing for control over my tour. My frank conversation with fellow engineers was turning into a three-ring circus of administrators. But when I complained to a curatorial friend from one of the off-the-Mall Smithsonian museums, she shot back, “Are you kidding? These government relations people are exactly the ones you need to be talking to!” After all, she explained, they don’t get to spend just fifteen minutes with a single senator—they get to highlight institution-wide issues in reports that are read by all of Congress. “Now that you’ve got their attention,” my friend said, “you need to let them know engineering is being forgotten. Then maybe they can get the gears working to hire a curator.”

A week before the scheduled tour, I opened my inbox to heartbreak. “Dr. Clough will now be out of town next week,” Grace wrote. “Could we reschedule for June 6?”

For me, June 6 was too late. By then, my fellowship would be over. I’d have resumed my regular life as a professor, teaching a six-week summer course on comparative public history in England. In fact, the whole storeroom would be devoid of people. The contract for the collections inventory team had not been renewed, so no one would be able to finish imaging the objects and creating database entries. The engineering collections would return to hibernation.

But I wasn’t willing to let it go quite yet. I managed to arrange a meeting for late July, when I would be returning to D.C. for just a few days. I was determined to make one last plea for the collections.

When my plane landed at Dulles late on the Saturday night before the rescheduled tour, I turned on my phone and checked voicemail for the first time in six weeks. I found numerous messages from my Smithsonian supervisor, left in increasing levels of panic. “You’d better come in a day early to set things up,” he sighed when I finally got a hold of him the next day.

So that Wednesday, still a bit jet-lagged, I boarded the Metro and made the familiar trip to NMAH. I had been gone for less than two months, but my I.D. card had already expired, so my supervisor had to meet me in the lobby and escort me up the elevator to the staff-only floors. “Better watch out,” he warned me. “You’ve upset quite a number of people.”

NMAH’s top curators had already decided my tour would not take place in engineering’s shabby storeroom. Rather, our VIPs would be the first to see a newly renovated, state-of-the-art showroom down the hall. Intended to house the agriculture and mining technologies collections (and completed before the funds could be poached to keep the museum open during sequestration), the new room was icy—not because they had managed appropriate climate control (a constant challenge for all museums), but because the collections manager was downright hostile. My shenanigans meant that she had to vacuum away every speck of dust and generally eliminate any sign that this was an active workplace.

Outside in the corridor, I overheard two staff members talking about me: “Who does she think she is? Emailing the Secretary! Inviting a senator!” The privileges I had enjoyed as a pesky but eager Smithsonian insider were long gone. Now, my former colleagues saw me as merely a graceless interloper, ignoring rules I didn’t like, calling attention to workers who preferred to keep their heads down (and off the chopping block), and creating extra work for them in the process.

Before the tour, the Director of Curatorial Affairs gave me strict orders: under no circumstances was I allowed to speak about the future of NMAH’s collections. I wasn’t allowed to make any suggestions about what the museum should be collecting or what its direction should be. I wasn’t allowed to say anything that might prompt a question for which the secretary didn’t have an answer. And if the senator asked me directly what he could do to help, I was to defer to someone with the authority to speak to those issues.

Humbled but still nervously excited, I reported to the staff entrance the next day to meet my guests. John Gray, the director of NMAH, was already there, along with the personal assistant to the museum’s director of curatorial affairs and the Smithsonian Institution’s legislative affairs liaison. I was pleased to see that Gray was beaming. “So you’re the one who made this happen?” he asked, shaking my hand. I started to apologize for overstepping, but he cut me off. “Nonsense. This is the kind of thing we need to do more often. And why aren’t we showing them our old storerooms? They need to see our challenges.”

Secretary Clough arrived right on time with his suit jacket folded over his arm, a concession to D.C.’s brutal summer heat. All of the men quickly slipped out of their own jackets in a show of deference to his authority. After quick introductions, we settled into a light-hearted banter. Twenty minutes later, Senator Heinrich arrived with his assistant, apologetic for his tardiness but still enthusiastic. “Have you ever been to the National Museum of American History?” I asked. “No? Well, you’re in for a special treat.”

Sure enough, when we walked into the gleaming showroom, the senator was like a kid in a candy store. Objects from the Panama Canal, a piece of the levee wall that collapsed during Hurricane Katrina, microcomputers—it was an engineering treasure trove. Heinrich immediately started examining Farmer’s windmill-battery patent model, trying to see how it worked.

Exactly six minutes into the tour, the senator’s assistant interrupted us. “A vote has just been called. We need to leave. Now.”

As Senator Heinrich started rushing down the hall, he called out, “When can I come back with my kids?”

“Anytime!” I yelled after him, not mentioning that I wouldn’t be there to show them around.

I returned to the showroom, where the Secretary had been talking to the Director and other assembled staff members. He turned to me and asked, regarding a slightly ragged pile of papers, “Why those?”

I confessed that even though it wasn’t very flashy, it was one of my favorite objects. “In 1903, an engineer took a drafting course by correspondence school. Those are his notes. I think it offers insightful parallels to today’s debates over online education.”

Secretary Clough flipped through the exercises, reminiscing about his own drafting coursework. Like the student from the past, he was always being reprimanded for his imprecise handwriting and lines of uneven thickness. When he leaned forward to take a closer look at the school’s name, a smile slowly spread across his face. “International Correspondence School. ICS. That’s how my dad got his degree. He worked on banana boats out of New Orleans. Back and forth to South America, he decided to earn a certificate in refrigeration and marine engine maintenance.”

He stepped back and looked directly at me. “What can I do to help engineering?”

I smiled, biting my tongue. “You need to talk to the director of curatorial affairs about that, sir.”

Barely a week after the tour, Gray forwarded an email from Clough to NMAH’s entire staff. In response to upheaval in the federal budget negotiations, the secretary instituted a hiring freeze across all of the Smithsonian’s museums. It was scheduled to last for ninety days, but everyone knew it would likely drag on much longer. In September, Clough announced he would retire in the fall of 2014, meaning the curatorial crisis would have to be handled by the next Secretary.

By chance, I had the opportunity to talk to Secretary Clough again. While attending a workshop sponsored by the Museum Trustee Association last fall, I slipped out of a less-than-stimulating session on institutional endowments. As I wandered through the halls of Dumbarton House, an eighteenth-century house in the heart of Georgetown, I spotted Clough, who was scheduled to give the lunchtime keynote address. He had recently announced his resignation from the Smithsonian, so I asked him what the recruiting firm was looking for in his replacement. He smiled and said nonchalantly, “You know, the usual. Someone who walks on water.”

It was October 4, 2013, and Clough began his address to the group by noting that the Smithsonian is usually open 364 days a year. The government shutdown had already closed the museum for four days, which ultimately stretched to over two weeks. Clearly, there is no returning to a mythic Golden Age when Congress could supply adequate funding for the Smithsonian’s needs, and dreaming of such a thing would be counterproductive. But, with a wry smile, Clough noted that coming up with an innovative solution would now be a job for his replacement.

The truth is, no matter how much Clough or his successor loves engineering (or any other collection in the Smithsonian’s nineteen museums, nine research centers, and one zoo), there’s very little the secretary can do to save them. The slow creep of the curatorial crisis is not something one person, no matter how powerful within the institution or the government, can reverse quickly. Faced with shrinking budgets and a dwindling staff, the Smithsonian’s curators will almost certainly have to come up with new ways of doing their jobs, but their fundamental tasks, no matter the budget constraints or the day-to-day challenges, will remain the same. The Smithsonian is not where we store the remnants of what we have forgotten. It’s where we go when we want to remember. Its curators help us access and interpret our country’s collective memory, whether that means organizing exhibits of physical objects, creating digital databases, or just dusting off overlooked artifacts and writing down what they see. Curators are worth fighting for. They help us remember, and they don’t let us forget.

Allison Marsh (marsha@mailbox.sc.edu) is an assistant professor of history at the University of South Carolina, where she oversees the museums and material culture track of the public history program. Lizzie Wade (lizziewade@outlook.com) is the Latin America correspondent for Science magazine. Her writing has also appeared in Aeon, Wired, and Slate, among other publications. She lives in Mexico City.

Aspidonia: In his work Kunstformen der Natur (1899–1904), Ernst Haeckel grouped together these specimens, including trilobites (which are extinct) and horseshoe crabs, so the viewer could clearly see similarities that point to the evolutionary process.

Little Cell, Big Science: The Rise (and Fall?) of Yeast Research

Niki Vermeulen

Molly Bain

Trying to add another chapter to the long history of yeast studies, scientists at the cutting edge of knowledge confront the painful realities of science funding.

Manchester, the post-industrial heart and hub of the north of England, is known for football fanaticism, the pop gloom and swoon of The Smiths, a constant drizzle of dreary weather, and—to a select few—the collaborative work churning in an off-white university building in the city’s center. Given the town’s love of the pub, perhaps it’s a kind of karma that the Manchester Centre for Integrative Systems Biology (MCISB) has set up shop here: its researchers are devoted to yeast.

Wayne Aubrey, a post-doc in his early thirties, with a windswept foppish bob and round, kind brown eyes, is one of those special researchers. Early on a spring morning in 2012, Wayne moved swiftly from his bike to a small glass security gate outside the Centre. Outdoorsy and originally from Wales, he’d just returned from a snowboarding trip, trading in the deep white of powder-fresh mountains for the new, off-white five-story university outpost looming in front of him.

The building, called the Manchester Interdisciplinary Biocentre (MIB), features an open-plan design, intended to foster collaboration among biochemists, computer scientists, engineers, mathematicians, and physicists. The building also hosts the Manchester Centre—the MCISB—where Wayne and his colleagues work. Established in 2006 and funded by a competitive grant, the MCISB was intended to run for at least ten years, studying life by creating computer models that represent living organisms, such as yeast. At its height in 2008, the multidisciplinary Centre housed about twelve full-time post-docs from a wide array of scientific fields, all working together—an unusual, innovative approach to science. But in the age of budget cuts and changing priorities, the funding was already running out after only six years, leaving the Centre’s work on yeast, the jobs of Wayne’s colleagues, and Wayne’s own career path hanging in the balance.

Nevertheless, for the moment at least, there was work to be done. Wayne climbed three flights of stairs and grabbed his lab coat. Wayne’s background, like that of most of his colleagues at the Centre, is multidisciplinary: before coming to Manchester, he was involved in building “Adam,” a robot scientist that automates some experiments so that human scientists don’t have to run each one by hand. This special background in both biology and computer science landed Wayne the job in the Manchester Centre.

But on this day, his lab work was that of a classic “wet biologist.”

As he unstacked a pile of petri dishes into a neat line, Wayne reported with a quick smile, “We grow a lot of happy cells.” The lab is like a sterile outgrowth of, or inspiration for, The Container Store: glass and plastic jars, dishes, bottles, tubes, and flasks, all with properly sized red and blue sidekick lids, live on shelves above rows of lab counters. Tape and labeling stickers protrude somewhat clumsily from various drawers and boxes; they get a lot of use. Small plastic bins, like fish tackle boxes with near translucent tops, sit at every angle along the counters. Almost as a necessary afterthought, even in this multidisciplinary center, computer monitors are pushed in alongside it all, cables endlessly festooned between shelves and tables.

Wayne picked up a flask of solution, beginning the routine of pouring measured amounts into the dishes’ flat disks. “The trick,” he noted, “is to prevent bubbles, so that the only thing visible in the dishes later is yeast.”

Every lab has its own romances and rhythms, its own ideals and intrigues—and its own ritualistic routines of daily prep.

As one of the organisms most often used in biological research, yeast has been at the lab bench for centuries. You could say it’s the little black dress of biology labs. Biologists keep it handy because, as a simple, single-cell organism, yeast has proven very functional, versatile, and useful for all sorts of parties—if, that is, by “parties,” you mean methodical and meticulous experiments, each an elaborate community effort.

How yeast became the microbiologist’s best friend has everything to do with another party favorite (and big business): alcohol. Industrialists and governments alike paid early biologists and chemists to tinker with the fermentation process to figure out why beer and wine spoiled so easily. The resulting research tells an important story, not only about the development of modern biology, but about the process of scientific advance itself.

In modern labs, yeast is most often put to work for the secrets it can unlock regarding human health. Through new experiments and computer models, systems biologists are mapping how one cell of yeast functions as a living system. Though we know a lot about the different components that make up a yeast cell—genes, proteins, enzymes, etc.—we do not yet know how these different components interact to make up living yeast. The researchers working at the Manchester Centre are studying this living system, trying to understand these interactions and make them visible. Their ultimate goal: to create a computer model of yeast that can show how the different elements interact. Their hope is that this model of yeast will shed light on how more complex systems—a heart, say, or a liver, or even a whole human being—function, fail, survive, and thrive.

But a model of yeast would demand enormous quantities of data, as well as a way to make sense of them. Next to the petri dishes, Wayne set up pipettes, flasks, and a plastic tray, his behavior easy and measured. But this experiment, complicated and time-sensitive, couldn’t be performed solo. Wayne was waiting for Mara Nardelli, his lab partner and fellow yeast researcher. When she arrived, they would begin a quick-paced dance from dish to dish, loading each with sugar to see exactly how quickly yeast eats it. They hoped the experiment would give them clear numerical data about the rate of yeast’s absorption of sugar. If the results matched other findings, they might prove useful to Wayne and Mara’s other lab mates—fellow systems biologists who were attempting to formulate yeast and its inner workings into mathematical terms and then translate those terms into visual models.

Devising a computerized model of yeast has been the main goal of the Manchester Centre—but after six years, it seems, it’s still a ways off. Farther off, to be honest, than anyone had really expected. Yeast seemed fairly straightforward: it is a small, simple organism contained completely in one cell. And yet, it has proved surprisingly difficult to model. Turns out, life—even for as “simple” an organism as yeast—is very complex.

It seems odd, Wayne conceded, that after hundreds of years of research, we still don’t know how yeast works. “But,” he countered, “yeast remains the most well understood cell in the world. I know more about yeast than about anything else, and there are probably as many yeast biologists in the world as there are genes in yeast—which is six thousand.” The sheer number of genes helped Wayne underscore yeast’s complexity: “To fully map the interactions, you have to look at the ways in which these 6,000 genes interact, so that means (6000×6001)/2, which is over 18 million potential interactions. You cannot imagine!” And the complexity doesn’t stop there, Wayne continued: “Now you have the interactions between the genes, but you would still need to add the interactions of all the other components of the yeast cell in order to create a full model.” That is, you’d have to take into account the various metabolites and other elements that make up a yeast cell.
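
Wayne’s arithmetic is easy to check. The short sketch below simply restates the figures from his quote; it is an illustration, not anything the Centre’s modelers actually run, and it counts unordered gene pairs both with and without self-interactions, since the quoted formula, (6000 × 6001)/2, includes the latter.

```python
# Illustrative back-of-the-envelope check of the gene-pair counts
# quoted above; the only input is the roughly 6,000 genes in yeast.
GENES = 6000

# Unordered pairs of two distinct genes: n(n-1)/2
pairs_distinct = GENES * (GENES - 1) // 2    # 17,997,000

# Unordered pairs allowing a gene to pair with itself: n(n+1)/2,
# the (6000 x 6001)/2 figure in Wayne's quote.
pairs_with_self = GENES * (GENES + 1) // 2   # 18,003,000

print(f"distinct pairs:             {pairs_distinct:,}")
print(f"pairs including self-pairs: {pairs_with_self:,}")
```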

Despite the travails of the yeast modelers, the field of systems biology still hopes to model even more complex forms of life to revolutionize medicine: British scientist and systems biology visionary Denis Noble has had a team working on a virtual model of the heart for more than twenty years, while a large national team of German researchers, funded by the German federal government, is developing a virtual liver model. Yet, if we cannot even understand how a single-cell organism functions as a living system, one might reasonably ask whether we can safely scale up to bigger, more complex organisms. Could we really—as some scientists hope—eventually model a complete human being?

Moreover, will we be able to use these models to improve human health and care? Will systems biology bring us systems medicine? The American systems biologist Leroy Hood, who was recently awarded the prestigious National Medal of Science by President Obama, has called this idea “P4 medicine”—the p’s standing for Predictive, Preventive, Personalized, and Participatory. He imagines that heart and liver models could be “personalized,” meaning that everybody would have his or her own model, based on individual genetic and other biological information. Just as you can now order your own genetic profile (for a price), you would be able to have a model of yourself, which doctors could use to diagnose—or even “predict”—your medical problems. Based on the model, “preventive” measures could also be taken. And it would not be only medical professionals who could help to cure and prevent disease; the patient, too, could “participate” in the process.

For instance, in the case of heart diseases, a model could more clearly show existing problems or future risks. This could help doctors run specific tests, perform surgery, or prescribe medicine; and patients themselves could also use special electronic devices or mobile phone apps to measure cholesterol, monitor their heartbeat, and adjust eating patterns and physical activity in order to reduce risks. Or a patient could use a model of his or her liver—the organ that metabolizes drugs—to determine what drugs are most effective, what sort of dose to take, and at what time of the day. Healthcare would become increasingly tailored and precise, and people could, in effect, become their own technology-assisted doctors, managing their own health and living longer, healthier lives because of it.

It sounds pretty amazing—and if the systems biologists are right about our ability to build complex models, it could be reality someday. But are they right, or are they overly optimistic? Researchers’ experience with yeast suggests that a personalized model of your liver on ten years of antidepressants and another of your heart recovering from invasive valve repair could be further off than we’d like. They might even be impossible.

It was turning out to be hard enough to model the simple single-cell organism of yeast. But doing so might be the crucial first step, and the yeast researchers at the Centre weren’t ready to give up yet. Far from it. As Wayne finished setting up the last of the petri dishes, Mara walked in. Golden-skinned, good-natured, and outgoing, she was the social center of the MCISB. For a moment, she and Wayne sat at the lab bench, reviewing the preparations for the day’s experiment in a sort of conversational checklist.

Then Mara stood, tucking away her thick Italian curls and looking over the neatly arranged high-tech Tupperware party. She sighed and turned back to Wayne. “Ready?”

People had been using yeast—spooning off its loamy, foamy scum from one bread bowl or wine vat and inserting it in another—for thousands of years before they understood what this seething substance was or what, exactly, it was doing. Hieroglyphs from ancient Egypt already suggested yeast as an essential sidekick for the baker and brewer, but they didn’t delineate its magic—that people had identified and isolated yeast to make bread rise and grape juice spirited was magic enough. As the great anatomist and evolutionary theory advocate Thomas Henry Huxley declared in an 1871 lecture, “It is highly creditable to the ingenuity of our ancestors that the peculiar property of fermented liquids, in virtue of which they ‘make glad the heart of man,’ seems to have been known in the remotest periods of which we have any record.”

All the different linguistic iterations of yeast—gäscht, gischt, gest, gist, yst, barm, beorm, bären, hefe—refer to the same descriptive action and event: to raise, to rise, to bear up with, as Huxley put it, “‘yeasty’ waves and ‘gusty’ breezes.” This predictable, if chaotic and muddy, pulpy process—fermentation—was also known to purify the original grain down to its liquid essence—its “spirit”—which, as Huxley described it, “possesses a very wonderful influence on the nervous system; so that in small doses it exhilarates, while in larger it stupefies.”

Though beer and wine were staples of everyday living for thousands and thousands of years, wine- and beer-making were tough trades—precisely because what the gift of yeast was, exactly, was not clear. Until about 150 years ago, mass spoilage of both commercial and homemade alcoholic consumables was incredibly common. Imagine your livelihood or daily gratification dependent on your own handcrafted concoctions. Now, imagine stumbling down to your cellar on a damp night to fetch a nip or a barrel for yourself, your neighbors, or the local tavern. Instead you’re assaulted by a putrid smell wafting from half of your wooden drums. You ladle into one of your casks and discover an intensely sour or sulfurous brew. In the meantime, some drink has sloshed onto your floor, and the broth’s so rancid, it’s slick with its own nasty turn. What caused this quick slippage into spoilage? This question enticed many an early scientist to the lab bench—in part because funding was at the ready.

In a 2003 article on yeast research in the journal Microbiology, James A. Barnett explains that because fermentation was so important to daily life and whole economies, scientific investigations of yeast began in the seventeenth century and were formalized in the eighteenth century, by chemists—not “natural historians” (as early biologists were called)—who were originally interested in the fermentation process as a series of chemical reactions.

How yeast became the microbiologist’s best friend has everything to do with another party favorite (and big business): alcohol.

In late eighteenth-century Florence, Giovanni Valentino Fabbroni was part of the first wave of yeast research. Fabbroni—a true Renaissance man who dabbled in politics and electro-chemistry, wrote tomes on farming practices, and helped Italy adapt the metric system—determined that in order for fermentation to begin, yeast must be present. But he also concluded his work by doing something remarkable: Fabbroni categorized yeast as a “vegeto-animal”—something akin to a living organism—responsible for the fermentation process.

Two years later, in 1789 and in France, Antoine Lavoisier focused on fermentation in winemaking, again regarding it as a chemical process. As Barnett explains, “he seem[ed] to be the first person to describe a chemical reaction by means of an equation, writing ‘grape must = carbonic acid + alcohol.’” Lavoisier, who was born into the aristocracy, became a lawyer while pursuing everything from botany to meteorology on the side. At twenty-six, he was elected to the Academy of Sciences, bought a share in the Ferme Générale, the private company that collected taxes for the state, and, while working on his own theory of combustion, eventually came to be considered France’s “father of modern chemistry.” The French government, then the world’s top supplier of wine (today, it ranks second, after Italy), needed Lavoisier’s discoveries—and badly, too: France had to stem the literal and figurative spoiling of its top-grossing industry. But as the revolution took hold, Lavoisier’s fame and wealth implicated him as a soldier of the regime. Arrested for his role as a tax collector, Lavoisier was tried and convicted as a traitor and decapitated in 1794. The Italian-born mathematician and astronomer Joseph-Louis Lagrange publicly mourned: “It took them only an instant to cut off his head, and one hundred years might not suffice to reproduce its like.”

Indeed, Lagrange was onto something: the new government’s leaders were very quickly in want of scientific help for the wine and spirits industries. In 1803, the Institut de France offered up a medal of pure gold for any scientist who could specify the key agent in the fermenting process. Another thirty years passed before the scientific community had much of a clue—and its discovery tore the community apart.

By the 1830s, with the help of new microscope magnification, Friedrich Kützing and Theodor Schwann, both Germans, and Charles Cagniard-Latour, a Frenchman, independently concluded that yeast was responsible for fermenting grains. And much more than that: these yeasts, the scientists nervously hemmed, um, they seemed to be alive.

Cagniard-Latour focused on the shapes of both beer and wine yeasts, describing their bulbous cellular contours as resembling not chemical substances but organisms in the vegetable kingdom. Schwann pushed the categorization even further: after persistent microscopic investigation, he declared that yeast looks like, acts like, and clearly is a member of the fungi family—“without doubt a plant.” He also argued that a yeast’s cell was essentially its body—meaning that each yeast cell was a complete organism, somewhat independent of the other yeast organisms. Kützing, a pharmacist’s assistant with limited formal training, published extensive illustrations of yeast and speculated that different types of yeast fermented differently; his speculation was confirmed three decades later. From their individual lab perches, each of the three scientists concluded the same thing: yeast is not only alive, but it also eats the sugars of grains or grapes, and this digestion, which gives off carbonic acid (carbon dioxide) and alcohol in the process, is, in effect, fermentation.

This abrupt reframing of fermentation as a feat of biology caused a stir. Some giants of the field, like the chemist Justus von Liebig, found it flat-out ridiculous. A preeminent chemistry teacher and theorist, von Liebig proclaimed that if yeast was alive, the growth and integrity of all science was at grave risk: “When we examine strictly the arguments by which this vitalist theory of fermentation is supported and defended, we feel ourselves carried back to the infancy of science.” Von Liebig went so far as to co-publish anonymously (with another famous and similarly offended chemist, Friedrich Wöhler) a satirical journal paper in which yeasts were depicted as little animals feasting on sugar and pissing and shitting carbonic acid and alcohol.

Though he himself did little experimental research on yeast and fermentation, von Liebig insisted that the yeasts were just the result of a chemical process. Chemical reactions could perhaps produce yeast, he allowed, but the yeasts themselves could never be alive, nor active, nor the agents of change.

Von Liebig stuck to this story even after Louis Pasteur, another famous chemist, took up yeast study and eventually became the world’s first famous microbiologist because of it.

These long-term investigations into and disciplinary disputes about the nature of yeast reordered the scientific landscape: the borders between chemistry and biology shifted, giving way to a new field, microbiology—the study of the smallest forms of life.

Back in modern Manchester, Mara and Wayne danced a familiar dance. Behind the lab bench, their arms swirled with clocklike precision as they fed the yeast cells a sugar solution in patterned and punctuated time frames, and then quickly pipetted the yeast into small conical PCR tubes.

Soon, Mara held a blue plastic tray of upside-down conical tubes, which she slowly guided into the analysis machine sitting on top of the lab counter. The machine looked like the part of a desktop computer that houses the motherboard, the processor, the hard drive, its fan, and all sorts of drives and ports. It’s the width of at least two of these bulky and boxy system units, and half of it has the sheen of tinted windows. Thick cables and a hose streamed from its backside like tentacles.

Biologists have a favorite joke about yeast: like men, it’s only interested in two things—sex and sugar. Wayne explained, “This is because yeast has one membrane protein, or cell surface receptor, that binds sugar and one that binds the pheromone of the other mating type.” Sugar uptake is what Mara and Wayne have been investigating: the big machine scans, measures, and analyzes how and how quickly the yeast has its way with sugar. The results appear on the front screen, translated into numerical data and graphs. The readings for each set of tubes are cast into a graph showing the pattern of the yeast cells’ sugar uptake over time. Usually, the graphs show the same types of patterns—lines slowly going up or down—with little variance from graph to graph. If great discrepancies in the patterns emerge, then the scientists usually know something went wrong in the experiment. Mara and Wayne and the Centre’s “dry biologists” (those who build mathematical models of yeast with computers) hoped that understanding how yeast regulates its sugar uptake would help them better understand how the cells grow. Yeast cells grow quickly and can serve as a proxy for human cell generation because the processes in both kinds of cells are similar. So much about yeast, Wayne explained, is directly applicable to human cells. If we know more about how yeast cells work, we’ll have a better sense of how human cells function or malfunction. Understanding the development and growth of yeast cells can be translated to growth in healthy human cells, as well as in unhealthy human cells—like malignant cancer cells.
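
What counts as too great a discrepancy is a judgment the Centre’s own software and statistics handle in ways not described here. Purely as a hypothetical illustration of that kind of replicate check, the sketch below, with invented readings and an arbitrary 20 percent tolerance, flags an uptake curve that strays too far from the median of its sister runs.

```python
# Hypothetical sketch of a replicate check: flag a sugar-uptake curve
# that diverges sharply from its sister runs. All readings and the
# 20% tolerance are invented for illustration.

replicates = {
    "run_1": [10.0, 8.1, 6.5, 5.2, 4.1],  # residual sugar over time
    "run_2": [10.0, 8.0, 6.6, 5.1, 4.0],
    "run_3": [10.0, 9.4, 9.1, 8.8, 8.6],  # this run looks wrong
}

n_points = len(next(iter(replicates.values())))

# Point-by-point median across the runs serves as the reference curve.
median_curve = [
    sorted(curve[i] for curve in replicates.values())[len(replicates) // 2]
    for i in range(n_points)
]

TOLERANCE = 0.20  # flag any run deviating more than 20% on average

for name, curve in replicates.items():
    deviation = sum(abs(v - m) / m for v, m in zip(curve, median_curve)) / n_points
    verdict = "check this run" if deviation > TOLERANCE else "looks consistent"
    print(f"{name}: mean deviation {deviation:.0%} -> {verdict}")
```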

Biologists have a favorite joke about yeast: like men, it’s only interested in two things—sex and sugar.

Mara has been at the lab working with yeast for a long time. In many ways, Mara was with the Centre before it even became the Centre. Fourteen years ago, in 1998, Mara was wooed away from her beloved Italy for a post-doc in Manchester. Even though she moved here reluctantly from Puglia, a particularly sweet spot on the heel of Italy’s boot, she became a Mancunian—an inhabitant of Manchester. Her childhood sweetheart followed her to Manchester, and together they had a son, who himself wound up at the University of Manchester. Mara even wrote a blog about Manchester living for fellow Britain-bound Italians.

Mara’s research background was in biochemistry. She obtained her PhD from the University of Naples Federico II and the University of Bari, and then continued in Bari as a Fellow of the Italian National Research Council (CNR), working on gene expression in human and rat tumors. The professor with whom she worked as a post-doc in Manchester, John McCarthy, was one of the key minds behind the MIB, so she’s been with the Manchester Interdisciplinary Biocentre since its start in 2006. Within the MIB, she became part of the Manchester Centre for Integrative Systems Biology (MCISB), which was set up by Professor Douglas Kell, who—thanks in part to Manchester’s tradition of yeast research and its dedication to the new biosciences—won a national funding competition to create a center devoted to building a computer model of yeast. The MCISB quickly attracted new professors and researchers. The core group of scientists working on yeast, made up largely of twenty- and thirty-somethings, consisted of both “wet” scientists, who experiment with life in the lab, and “dry” scientists, who work behind computers, building and revising models based on the results of the wet scientists’ experiments. Mara supported the Centre’s teaching program, mentored PhD students, and assisted post-docs like Wayne, helping them run their experiments properly.

The core group of MCISB’s researchers shared, essentially, one large office not far from the lab space. This was intentional; the building’s architects had created a space that would foster innovation and discovery in biology. As Wayne described it, “It’s a very co-ed sort of approach to the building. You encounter and bump into people more frequently because of the layout of the building. You are more eager to go and speak to people, and ask them, you know, how do I do this, how do I use that, or who belongs to this piece of equipment.”

Over the years, instead of toiling away in separate labs under separate professors, the yeast researchers—working in one big room together on one unified project—felt the uniqueness of both their endeavor and community. Thursday afternoons, the team would often go out to Canal Street, in Manchester’s “gay village,” for a couple of beers. Friendships developed. Two of them even fell in love and got married. “We became a big family,” said one of them, and the others all agreed.

Beginning the long wait for their results, Mara and Wayne cleaned up the bench in an easy quiet, gathering up used petri dishes and pipettes, and nodding to each other as they went.

Louis Pasteur liked his lab in Arbois.

Unlike most nineteenth-century laboratories of world-class scientists, the Arbois lab was small and light and simple. Long, fine microscopes were pedestaled on clean, sturdy wooden tables. While working on his lab logs, Pasteur, whose neat beard across his broad face accentuated the stern downward pull of the corners of his mouth, could sit on a bowed-back chair and look out onto pastoral rolling hills, speckled with vineyards. The lab also had the great advantage of being near his family home.

Pasteur was born in Dole, in the east of France, in 1822, but when he was about five, his father moved the family south to Arbois to rent a tannery—a notoriously messy and smelly trade. The area was known for its yellow and straw wines and its perch on the Cuisance River. Pasteur spent his childhood there.

Arbois is where, as a child, Pasteur declared he wanted to be an artist. It’s where he moved back from a Parisian boarding school at the age of sixteen, declaring he was homesick. And it’s where he’d return to spend almost every summer of his adult life. Eventually, Pasteur would bury his mother, father, and three of his children—two of whom died of typhoid fever before they were ten—in Arbois. And it was in Arbois—not in his lab at the prestigious École normale supérieure in Paris, nor in the lab at his university post in Lille—where he bought a vineyard and set up a lab to test his initial ideas about wine and its fermentation.

Before Pasteur developed—which is to say, patented and advocated, just like a twenty-first-century entrepreneurial scientist—“pasteurization” as a way of reducing harmful bacteria in foods and beverages, and before he introduced (and campaigned for) his “germ theory of disease,” which led him to develop the rabies vaccine, Pasteur first worked as a chemist on yeast, specifically researching the fermentation process in wine and spirits.

During the Napoleonic wars at the beginning of the nineteenth century, France’s alcohol industry was dangerously imperiled: Lavoisier, the leading scientist of fermentation, had been decapitated, and Britain had cut off France’s supply of cane sugar from the West Indies. Not only were French beer-producers and winemakers, who had wheat and grapes aplenty, still struggling with spoiling yields, but now the spirits-makers had no sugar to wring into hard alcohol. So, in a serious fix, France began cultivating sugar from beets instead. This helped, but forty to fifty years later, when Lille had become France’s capital of beet production, spirits-producers and winemakers alike were still struggling with spoilage; nobody in the alcohol industry knew how to contain or control fermentation.

Lille also happened to be the place where Pasteur worked as a chemistry professor. When one of Pasteur’s students introduced his professor to his father, a spirits-man with fermentation woes, Pasteur suddenly had access and funding to get up close and personal with yeast. He began to watch and parse apart its fermentation, quickly concluding in an 1860 paper that the Berliner Theodor Schwann had been correct decades earlier: yeast was a microbe, a fungus. In short: alive. He also argued that yeast was essential to fermentation: its “vital activity,” he maintained, caused fermentation to both begin and end.

Yeast has operated a bit like an oracle over the past two hundred-plus years for many a scientist. It didn’t only convert sugars to alcohol; it also converted Pasteur from chemist to biologist.

More specifically, Pasteur became a microbiologist. The resolution of the discipline-wide fight over the nature of yeast—particularly whether or not it was “vital,” that is, “living”—helped produce two new fields: microbiology and biochemistry. It awakened the scientific community to new possibilities and questions: what other kind of life happens on a small scale, and what can be said about the chemistry of life?

Like a living organism, collaboration is a complex system, and in the absence of nourishment—that is, funding—it falls apart and breaks down.

Though Pasteur was catching all sorts of flak over it, by the early 1860s, his fermentation work also caught the attention of an aide to the emperor. The aide was increasingly concerned about the bad rap France’s chief export was accumulating across Europe. If yeast was the key actor in fermenting all alcohol, was it at all related to what most vintners at the time thought damaged and spoiled their wines—what they called l’amer, or “wine disease”? Could it be that yeast was both creator and culprit of this disease?

With a presidential commission at his back (Napoleon III was both the last emperor and first president of France), Pasteur set out on a tour of wineries across France. Though it may have been during this sojourn that Pasteur spun the line “A bottle of wine contains more philosophy than all the books in the world,” a drunken holiday this was not. Pasteur solemnly reported back to the crown: “There may not be a single winery in France, whether rich or poor, where some portions of the wine have not suffered greater or lesser alteration.”

And with this, his initial fieldwork completed, Pasteur set up shop in his favorite wine region, the Jura, home to, of course, Arbois.

With his light brown eyes, framed by an alert brow and well-earned bags, Pasteur alternately gazed out the window of his rustic laboratory and then down through one of his long white microscopes. Again and again, he watched yeast cavort with grape juice in its fermentation dance. But he knew there was at least one other big player whose influence he had yet to understand fully: air.

Excluding air from the party and allowing it in methodically, he found that exposing yeast and wine to too much air inevitably invites in airborne bacteria, which break down the alcohol into acid, resulting in vinegar. (With one eye on knowledge and one on practical application, Pasteur quickly passed this information on to the vinegar industry.) Air allowed in too much riffraff. In order to keep wine fine, the event had to remain exclusive—or you needed some kind of keen agent to kill the interlopers systematically. It didn’t take Pasteur too long to identify this discriminating friend: heat. Heating the wine, slowly, to about 120°F would kill the bacteria without destroying the taste of the wine.

Vintners at first found this idea near sacrilege. Many resisted, but after their competitors who adopted the method had bigger, better yields, they quickly complied. In fact, this procedure not only revolutionized winemaking and beer-brewing, saving France’s top export and industry, it was also the beginning of the pasteurization craze and Pasteur’s further work on microorganisms as the germs that transmit infectious disease. Science, profit-making, and improved public health turned out to be mutually reinforcing, each propelling the other forward.

Mara walked over to the analysis machine to check the progress of the experiment. Points and curves had begun to appear on the screen—each yeast cell’s inner workings translated as numbers and lines.

But Mara was not happy. Comparing these results with the results of two previous experiments, she saw that the difference was big—too big. “Something must have gone wrong,” she said.

She beckoned Wayne over, and he quickly agreed. “No, this does not look how it should.”

As Pasteur could attest, those who work in labs, wearing white coats, know that experiments often do not work out. It is certainly not the sign of a bad scientist, but it does make lab work tedious. In a speech he gave in his birthplace, Dole, Pasteur spoke of his father’s influence on his own lab-bound life: “You, my dear father, whose life was as hard as your trade, you showed me what patience can accomplish in prolonged efforts. To you I owe my tenacity in daily work.”

Getting things right—isolating a potential discovery, testing it, and retesting it—often requires endless attempts, dogged persistence, and the ability to endure a lot of cloudy progress. Wayne contextualized these results thus: “This happens all the time. There is a lot of uncertainty. [It’s as though] failed experiments do not exist: if experiments don’t work, they’re never published, so you don’t know. So you have to reinvent the wheel yourself and develop your own knowledge about what works and what does not. Molecular biology is not like the combustion engine—where billions of pounds have been spent to understand the influence of every parameter and variable. There are still many unknowns in biology, and established methods do not work in every instance.”

Wayne and Mara were in good company, though, which provided some comfort. Working at the Centre, they were not only a part of a community of like-minded scientists, but also a part of a global network of scientists, all working on the modeling of yeast. In 2007, the MCISB hosted and organized a “yeast jamboree,” a three-day all-nighter—a Woodstock for yeast researchers from around the world. The jamboree resulted in a consensus on a partial model of yeast (focused on its metabolism) and a paper summarizing the jamboree’s findings, which have been cited by fellow systems biology researchers more than three hundred times to date. The “yeast jamboree” was so productive that it inspired another jamboree conference—this one focused on the human metabolic system.

But despite the jamboree’s high-profile success, yeast’s unexpected level of complexity has been a source of frustration for researchers trying to model the whole of it precisely. Three years ago, the Centre’s yeast team started to address this challenge by revising their approach: instead of looking at all of yeast’s genes and determining which proteins each makes and what activity these proteins perform, the team began, first, to identify each activity and, then, research the mechanism behind it. But even with this revised tactic and a doubling down of effort, a complete model of yeast was still not finished, and time and funding were running out. The promise of extending the grant for another five years was broken when the University, after the global financial crunch, reoriented its priorities. As a result, the funding was almost gone.

“Well,” Mara continued. “These results are clearly not what we are looking for. We have to do it all over again. When do we have time?”

A year later, on a warm evening in June 2013, Wayne was again working behind the same lab bench.

As much as he enjoyed his work, he could imagine doing other things than sitting in a lab until 10 p.m.: “Read a book, sit in the sun, go to the pub,” he shrugged then gave a wee grin. “You know, have a family.”

Wayne was not working on yeast at the moment, however; he was running a series of enzymology experiments on E. coli for a large European project, trying to finish the results in time for a meeting in Amsterdam. The European grant was covering his salary. That was the only reason he still had a job at MCISB. Other post-docs of the Centre were not so lucky. “There was so much expertise,” Wayne reflected. “It was a good group, and now it has become much smaller. Before, it was much more cohesive; it had much more of a team feel about it. Now the group is fragmented…. Everybody is working on different things and in different projects.”

Soon, Wayne would leave Manchester, too, for a lectureship at Aberystwyth University, back home in seaside Wales. It is a big accomplishment, and Wayne is grateful and thrilled for the opportunity, even as he regrets the loss of the community at MCISB.

The trouble with a burgeoning research lab that ends its work prematurely is that the institutional knowledge and expertise built up in the collaboration are hard to codify, box up, and ship elsewhere. Ideas and emerging discoveries are, in part, relationally based, dependent on the complex interactions and conversations continually rehearsed and refined in a living community—an aspect of the “scientific method” that is rarely remarked upon. These communities can be seen as “knowledge ecologies”: a community cultivates a particular set of expertise and insight that, Wayne explained, includes knowing not only what works, but what doesn’t. As in an ecosystem, a disruption of a particular component in a sprawling chain of connections can affect the health of the whole. Like a living organism, collaboration is a complex system, and in the absence of nourishment—that is, funding—it breaks down. As a result, the human capital specific to this community and project, with its knowledge of yeast—and especially the collaborative understanding built up around that endeavor—has been lost.

And yet, given all the difficulties the lab had encountered in trying to build a model of yeast, and given that funding for science is never unlimited, and that its outcomes are never predictable, how is it possible to know whether an approach should be abandoned as a dead end or whether it just needs more money and more time to bear fruit?

Wayne fiddled a final pipette into a plastic tray and traipsed toward the analysis machine.

He waited for the graphs to appear. He would repeat this trial three more times before the night was through.

The lab bench next to his sat empty. It had been Mara’s, but above it now hung a paper sign with another name scrawled on it.

Much of Mara’s yeast work had been aimed at understanding how yeast cells grow, which the researchers had hoped might offer insight into how cancer cells grow. Ironically, while Mara was working on those experiments, her own body was growing a flurry of cancer cells. In 2012, Mara discovered she had bowel cancer, which she first conquered, only to become aware of its return in April 2013. It then quickly spread beyond control.

Mara died at the end of May in 2013. Those remaining at the Centre were devastated by her loss—Mara had been such a young, vibrant, and central presence in the community, and she was gone a year after getting sick. The researchers left at the Centre and other colleagues from the university rented a bus so they could all go to her funeral together.

Wayne was dumbfounded by Mara’s absence. When asked what he learned from her and what he had been feeling with her gone, he replied that though she was “very knowledgeable” and he had “worked with her loads,” right now, he “could hardly summarize.”

Without continued and concentrated funds for the Centre, its future is a little uncertain. Not only is a complete model of yeast still out of reach, so, too, are the insights and contributions such a model might hold for cancer research, larger organ models, the improvement of healthcare, and the entire systems biology community.

Determining that yeast was a living organism took about two hundred years—but it also took more than that. While Pasteur may seem like a one-man revolution, he was also part of a collaboration, albeit one across countries and time. His work built on that of Kützing, Schwann, and Cagniard-Latour, who worked on yeast twenty years before Pasteur and who built their own work on that of Lavoisier, whose work predated theirs by another fifty years and who likely built his work on the research of his contemporary, Fabbroni. Moreover, it took industry investment, government support, the advent of advanced microscopes, and eminent learned men wrestling over its essence before yeast was eventually understood as it is now: a fundamental unit of life.

Wayne and Mara, too, are descendants of this yeast work and scientific struggle. But while Pasteur and his contemporaries’ research was directly inspired and validated by the use of yeast in brewing and baking, Wayne and Mara’s lab is not located in a vineyard. The MCISB yeast researchers work instead in a large white office building, in the middle of Manchester, where, using the tools of modern molecular biology, they probe, pull apart, and map yeast. The small-scale science of Pasteur’s time has grown big—the distance between research and application widening as science has professionalized and institutionalized over time. The vineyard has been replaced by complex configurations of university-based laboratories, specialized health research institutes, pharmaceutical companies, policymaking bodies, regulatory agencies, funding councils, etc.—within which researchers of all types and stripes try to organize, mobilize, set up shop, and get to the bench or the computer.

Although the modeling of yeast is certainly related to application, insights about life derived from yeast-modeling will likely take some time to result in anything that concretely and directly helps to cure cancer—because the translation of research from lab bench to patient’s bedside is far from straightforward. Sure, these fundamental investigations into the nature of life bring new knowledge, but what, exactly, yeast will teach us and how that will translate into applications is still a little unclear. In other words, whether the promises of the research will become reality is unknown. This uncertainty is difficult to handle—not only for the scientists performing the experiments, building the computer models, and composing the grants, but also for the pharmaceutical industry representatives and government policymakers making funding decisions.

The fundamental character and exact function of yeast were not understood for a long time; now, we’re struggling to understand yeast’s systematic operations. How long will this struggle take? And do we really understand what’s at stake? Within the dilemma of finite funding resources, how do we figure out what research will eventually translate into practice? Pasteur understood the importance of such questions about the funding of research. In 1878, he wrote, “I beseech you to take interest in these sacred domains so expressively called laboratories. Ask that there be more…for these are the temples of the future, wealth, and well-being. It is here that humanity will grow, strengthen, and improve. Here, humanity will learn to read progress and individual harmony in the works of nature.”

In many ways, this is what our modern yeast devotees are also hoping to discover: not only how yeast may once again be that ideal lab partner and organism that is also key to the next frontier of science, but that our communities, our funding bodies, and our scientific institutions will continue to invest the needed time, infrastructure, and patience into working with yeast and awaiting the next level of discovery this little organism has to offer us.

Niki Vermeulen (nikivermeulen@gmail.com) is a Wellcome Research Fellow in the Center for the History of Science, Technology and Medicine of the University of Manchester (UK). Molly Bain (mollybain@gmail.com), a writer, teacher, and performer, is working on an MFA in nonfiction at the University of Pittsburgh.

Forum

Evidence-driven policy

In “Advancing Evidence-Based Policymaking to Solve Social Problems” (Issues, Fall 2013), Jeffrey B. Liebman has written an informative and thoughtful article on the potential contribution of empirical analysis to the formation of social policy. I particularly commend his recognition that society faces uncertainty when making policy choices and his acknowledgment that learning what works requires a willingness to try out policies that may not succeed.

He writes, “If the government or a philanthropy funds 10 promising early childhood interventions and only one succeeds, and that one can be scaled nationwide, then the social benefits of the overall initiative will be immense.” He returns to reinforce this theme at the end of the article, writing, “What is needed is a decade in which we make enough serious attempts at developing scalable solutions that, even if the majority of them fail, we still emerge with a set of proven solutions that work.”

Unfortunately, much policy analysis does not exhibit the caution that Liebman displays. My recent book Public Policy in an Uncertain World observes that analysts often suffer from incredible certitude. Exact predictions of policy outcomes are common, and expressions of uncertainty are rare. Yet predictions are often fragile, with conclusions resting on critical unsupported assumptions or on leaps of logic. Thus, the certitude that is frequently expressed in policy analysis often is not credible.

A disturbing feature of recent policy analysis is that many researchers overstate the informativeness of randomized experiments. It has become common to use two of the terms in the Liebman article—”evidence-based policymaking” and “rigorous evaluation methods”—as code words for such experiments. Randomized experiments sometimes enable one to draw credible policy-relevant conclusions. However, there has been a lamentable tendency of researchers to stress the strong internal validity of experiments and downplay the fact that they often have weak external validity. (An analysis is said to have internal validity if its findings about the study population are credible. It has external validity if one can credibly extrapolate the findings to the real policy problem of interest.)

Another manifestation of incredible certitude is that governments produce precise official forecasts of unknown accuracy. A leading case is Congressional Budget Office scoring of the federal debt implications of pending legislation. Scores are not accompanied by measures of uncertainty, even though legislation often proposes complex changes to federal law, whose budgetary implications must be difficult to foresee.

Why do policy analysts express certitude about policy impacts that, in fact, are rather difficult to assess? A proximate answer is that analysts respond to incentives. The scientific community rewards strong, novel findings. The public takes a similar stance, expecting unequivocal policy recommendations. These incentives make it tempting for researchers to maintain assumptions far stronger than they can persuasively defend, in order to draw strong conclusions.

We would be better off if we were to face up to the uncertainties that attend policy formation. Some contentious policy debates stem from our failure to admit what we do not know. Credible analysis would make explicit the range of outcomes that a policy might realistically produce. We would do better to acknowledge that we have much to learn than to act as if we already know the truth.

CHARLES F. MANSKI

Board of Trustees Professor in Economics and Fellow of the Institute for Policy Research

Northwestern University

Evanston, Illinois

cfmanski@northwestern.edu

Manski is author of Public Policy in an Uncertain World: Analysis and Decisions (Harvard University Press, 2013).

Model behavior

With “When All Models Are Wrong” (Issues, Winter 2014), Andrea Saltelli and Silvio Funtowicz add to a growing body of guidance for scientists and policymakers on handling scientific evidence; recent examples include Sutherland, Spiegelhalter, and Burgman’s “Policy: Twenty tips for interpreting scientific claims” and Chris Tyler’s “Top 20 things scientists need to know about policymaking.” Their particular focus on models is timely, as complex issues are of necessity being handled through modeling, prone though models and model users are to misuse and misinterpretation.

Saltelli and Funtowicz provide mercifully few (7, more memorable than 20) “rules,” sensibly presented more as guidance and, in their words, as an adjunct to essential critical vigilance. There is one significant omission: a rule 8 should be “Test models against data”! Rule 1 (clarity) is important in enabling others to understand and gain confidence in a model, although it risks leading to oversimplification; models are used because the world is complex. Rule 3 might more kindly be rephrased as “Detect overprecision”; labeling important economic studies such as the Stern review as “pseudoscience” seems harsh. Although studies of this type can be overoptimistic in terms of what can be said about the future, they can also represent an honest best attempt, within the current state of knowledge (hopefully better than guesswork), rather than a truly pseudoscientific attempt to cloak prejudice in scientific language. Perhaps, too, the distinction between prediction and forecasting has not been recognized here; more could also have been made of the policy-valuable role of modeling in exploring scenarios. But these comments should not detract from a useful addition to current guidance.

Alice Aycock

Those visiting New York City’s Park Avenue through July 20th will experience a sort of “creative disruption.” Where one would expect to see only the usual mix of cars, tall buildings, and crowded sidewalks, there will also be larger-than-life white paper-like forms that seem to be blowing down the middle of the street, dancing and lurching in the wind. The sight has even slowed the pace of the city’s infamously harried residents, who cannot resist the invitation to stop and enjoy.

Alice Aycock’s series of seven enormous sculptures in painted aluminum and fiberglass is called “Park Avenue Paper Chase” and stretches from 52nd Street to 66th Street. The forms, inspired by spirals, whirlwinds, and spinning tops, are hardly the normal view on a busy city street. According to Aycock, “I tried to visualize the movement of wind energy as it flowed up and down the avenue creating random whirlpools, touching down here and there and sometimes forming dynamic three-dimensional massing of forms. The sculptural assemblages suggest waves, wind turbulence, turbines, and vortexes of energy…. Much of the energy of the city is invisible. It is the energy of thought and ideas colliding and being transmitted outward. The works are the metaphorical visual residue of the energy of New York City.”

Aycock’s work tends to draw from diverse subjects and ideas, ranging from art history to scientific concepts (both current and outdated). The pieces in “Park Avenue Paper Chase” visually reference Russian constructivism while being informed by mathematical phenomena found in wind and wave currents. Far from forming literal theoretical models, Aycock’s sculptures intuitively combine seemingly disjointed ideas into forms that make visual sense. The forms, combined with their placement on Park Avenue, work to disorient the viewer, at least temporarily, to capture the imagination, and to challenge perceptions.

Aycock’s art career began in the early 1970s and has included installations at the Museum of Modern Art, San Francisco Art Institute, and the Museum of Contemporary Art, Chicago, as well as installations in many public spaces such as Dulles International Airport, the San Francisco Public Library, and John F. Kennedy International Airport.

JD Talasek

Images courtesy of the artist and Galerie Thomas Schulte and Fine Art Partners, Berlin, Germany. Photos by Dave Rittinger.

ALICE AYCOCK, Cyclone Twist (Park Avenue Paper Chase), Painted aluminum, 27′ high × 15′ diameter, Edition of 2, 2013. The sculpture is currently installed at 57th Street on Park Avenue.

ALICE AYCOCK, Hoop-La (Park Avenue Paper Chase), Painted aluminum and steel, 19′ high × 17′ wide × 24′ long, Edition of 2, 2014. The sculpture is currently installed at 53rd Street on Park Avenue.

It is interesting to consider why such guidance should be necessary at this time. The need emerges from the inadequacies of undergraduate science education, especially in Britain where school and undergraduate courses are so narrowly focused (unlike the continental baccalaureate which at least includes some philosophy). British undergraduates get little training in the philosophy and epistemology of science. We still produce scientists whose conceptions of “fact” and “truth” remain sturdily Logical Positivist, lacking understanding of the provisional, incomplete nature of scientific evidence. Likewise, teaching about the history and sociology of science is unusual. Few learn the skills of accurate scientific communication to nonscientists. These days, science students may learn about industrial applications of science, but few hear about its role in public policy. Many scientists (not just government advisers) appear to misunderstand the relationship between the conclusions they are entitled to draw about real-world problems and the wider issues involved in formulating and testing ideas about how to respond to them. Even respected scientists often put forward purely technocratic “solutions,” betraying ignorance of the social, economic, and ethical dimensions of problems, and thereby devaluing science advice in the eyes of the public and policymakers.

Saltelli and Funtowicz’s helpful checklist contributes to improving this situation, but we also need radical changes in the way we train our young scientists if we are to bridge the science/policy divide more effectively.

MILES PARKER

Centre for Science and Policy

MIKE BITHELL

Department of Geography

University of Cambridge

Cambridge, UK

Wet drones

In “Sea Power in the Robotic Age” (Issues, Winter 2014), Bruce Berkowitz describes an impressive range of features and potential missions for unmanned maritime systems (UMSs). Although he’s rightly concerned with autonomy in UMSs as an ethical and legal issue, most of the global attention has been on autonomy in unmanned aerial vehicles (UAVs). Here’s why we may be focusing on the wrong robots.

The need for autonomy is much more critical for UMSs. UAVs can communicate easily with satellites and ground stations to receive their orders, but it is notoriously difficult to broadcast most communication signals through liquid water. If unmanned underwater vehicles (UUVs), such as robot submarines, need to surface in order to make a communication link, they will give away their position and lose their stealth advantage. Even unmanned surface vehicles (USVs), or robot boats, that already operate above water face greater challenges than UAVs, such as limited line-of-sight control because of a two-dimensional operating plane, heavy marine weather that can interfere with sensing and communications, more obstacles on the water than in the air, and so on.

All this means that there is a compelling need for autonomy in UMSs, more so than in UAVs. And that’s why truly autonomous capabilities will probably emerge first in UMSs. Oceans and seas also are much less active environments than land or air: There are far fewer noncombatants to avoid underwater. Any unknown submarine, for instance, can reasonably be presumed not to be a recreational vehicle operated by an innocent individual. So UMSs don’t need to worry as much about the very difficult issue of distinguishing lawful targets from unlawful ones, unlike the highly dynamic environments in which UAVs and unmanned ground vehicles (UGVs) operate.

Therefore, there are also lower barriers to deploying autonomous systems in the water than in any other battlespace on Earth. Because the marine environment makes up about 70% of Earth’s surface, it makes sense for militaries to develop UMSs. Conflicts are predicted to increase there, for instance, as Arctic ice melts and opens up strategic shipping lanes that nations will compete for.

Of course, UAVs have been getting the lion’s share of global attention. The aftermath images of UAV strikes are violent and visceral. UAVs tend to have sexy/scary names such as Ion Tiger, Banshee, Panther, and Switchblade, while UMSs have more staid and nondescript names such as Seahorse, Scout, Sapphire, and HAUV-3. UUVs also mostly look like standard torpedoes, in contrast to the more foreboding and futuristic (and therefore interesting) profiles of Predator and Reaper UAVs.

For those and other reasons, UMSs have mostly been under the radar in ethics and law. Yet, as Berkowitz suggests, it would benefit both the defense and global communities to address ethics and law issues in this area in advance of an international incident or public outrage—a key lesson from the current backlash against UAVs. Some organizations, such as the Naval Postgraduate School’s CRUSER consortium, are looking at both applications and risk, and we would all do well to support that research.

PATRICK LIN

Visiting Associate Professor

School of Engineering

Stanford University

Stanford, California

Director and Associate Philosophy Professor

Ethics and Emerging Sciences Group

California Polytechnic State University

San Luis Obispo, California

palin@calpoly.edu

Robots aren’t taking your job

Perhaps a better title for “Anticipating a Luddite Revival” (Issues, Spring 2014) might be “Encouraging a Luddite Revival,” for Stuart Elliot significantly overstates the ability of information technology (IT) innovations to automate work. By arguing that as many as 80% of jobs will be eliminated by technology within as little as two decades, Elliot is inflaming Luddite opposition.

Elliot does attempt to be scholarly in his methodology for predicting the scope of technology-based automation. His review of past issues of IT scholarly journals attempts to identify technology trends, while his analysis of occupational skills data (O*NET) attempts to assess which occupations are amenable to automation.

But his analysis is faulty on several levels. First, to say that a software program might be able to mimic some human work functions (e.g., finding words in a text) is completely different than saying that the software can completely replace a job. Many information-based jobs involve a mix of both routine and nonroutine tasks, and although software-enabled tools might be able to help with routine tasks, they have a much harder time with the nonroutine ones.

Second, many jobs are not information-based but involve personal services, and notwithstanding progress in robotics, we are a long, long way away from robots substituting for humans in this area. Robots are not going to drive the fire truck to your house and put out a fire anytime soon.

ALICE AYCOCK, Spin-the-Spin (Park Avenue Paper Chase), Painted aluminum, 18′ high × 15′ wide × 20′ long, Edition of 2, 2014. The sculpture is currently installed at 55th Street on Park Avenue.

Moreover, although it’s one thing to say that the middle-level O*NET tasks “appear to be roughly comparable to the types of tasks now being described in the research literature,” it’s quite another to give actual examples, other than some frequently cited ones such as software-enabled insurance underwriting. In fact, the problem with virtually all of the “robots are taking our jobs” claims is that they suffer from the fallacy of composition. Proponents look at the jobs that are relatively easy to automate (e.g., travel agents) and assume that (1) these jobs will all be automated quickly, and (2) all or most jobs fit into this category. Neither is true. We still have over half a million bank tellers (with the Bureau of Labor Statistics predicting an increase in the next 10 years), long after the introduction of ATMs. Moreover, most jobs are actually quite hard to automate, such as those of maintenance and repair workers, massage therapists, cooks, executives, social workers, nursing home aides, and sales reps, to list just a few.

I am somewhat optimistic that this vision of massive automation may in fact come true by the end of the century, for it would bring increases in living standards (with no change in unemployment rates). But there is little evidence for Elliot’s claim of “a massive transformation in the labor market over the next few decades.” In fact, the odds are much higher that U.S. labor productivity growth will clock in well below 3% per year (the highest rate of productivity growth the United States has ever achieved).

ROBERT ATKINSON

President

Information Technology and Innovation Foundation

Washington, DC

ratkinson@itif.org

Climate change on the right

In Washington, every cause becomes a conduit for special-interest solicitation. Causes that demand greater transfers of wealth and power attract more special interests. When these believers of convenience successfully append themselves to the original cause, it compounds and extends the political support. When it comes to loading up a bill this way, existential causes are the best of all and rightfully should be viewed with greatest skepticism. As Steven E. Hayward notes in “Conservatism and Climate Science” (Issues, Spring 2014), the Waxman-Markey bill was a classic example of special-interest politics run amok.

So conservatives are less skeptical about science than they are about scientific justifications for wealth transfers and losses of liberty. Indeed, Yale professor Dan Kahan found, to his surprise, that self-identified Tea Party members scored better than the population average on a standard test of scientific literacy. Climate policy rightfully elicits skepticism from conservatives, although the skepticism is often presented as anti-science.

Climate activists have successfully and thoroughly confused the climate policy debate. They present the argument this way: (1) Carbon dioxide is a greenhouse gas emitted by human activity; (2) human emissions of carbon dioxide will, without question, lead to environmental disasters of unbearable magnitude; and (3) our carbon policy will effectively mitigate these disasters. The implication swallowed by nearly the entire popular press is that point one (which is true) proves points two and three.

In reality, the connections between points one and two and between points two and three are chains made up of very weak links. The science is so unsettled that even the Intergovernmental Panel on Climate Change (IPCC) cannot choose from among the scores of models it uses to project warming. It hardly matters; the accelerating warming trends that all of them predict are not present in the data (in fact the trend has gone flat for 15 years), nor do the data show any increase in extreme weather from the modest warming of the past century. This provokes the IPCC to argue that the models have not been proven wrong (because their projections are so foggy as to include possible decades of cooling) and that with certain assumptions, some of them predict really bad outcomes.

Not wanting to incur trillions of dollars of economic damage based on these models is not anti-science, which brings us to point three.

Virtually everyone agrees that none of the carbon policies offered to date will have more than a trivial impact on world temperature, even if the worst-case scenarios prove true. So the argument for the policies degenerates to a world of tipping points and climate roulette wheels—there is a chance that this small change will occur at a critical tipping point. That is, the trillions we spend might remove the straw that would break the back of the camel carrying the most valuable cargo. With any other straw or any other camel there would be no impact.

So however unscientific it may seem in the contrived all-or-none climate debate, conservatives are on solid ground to be skeptical.

DAVID W. KREUTZER

Research Fellow in Energy Economics and Climate Change

The Heritage Foundation

Washington, DC

David.kreutzer@heritage.org

Steven E. Hayward claims that the best framework for addressing large-scale disruptions, including climate change, is building adaptive resiliency. If so, why does he not present some examples of what he has in mind, after dismissing building seawalls, moving elsewhere, or installing more air conditioners as defeatist? What is truly defeatist is prioritizing adaptation over prevention, i.e., the reduction of greenhouse gas emissions.

Others concerned with climate change have a different view. As economist William Nordhaus has pointed out (The Climate Casino, Yale University Press, 2013), in areas heavily managed by humans, such as health care and agriculture, adaptation can be effective and is necessary, but some of the most serious dangers, such as ocean acidification and losses of biodiversity, are unmanageable and require mitigation of emissions if humanity is to avoid catastrophe. This two-pronged response combines cutting back emissions with reactively adapting to those we fail to cut back.

Hayward does admit that our capacity to respond to likely “tipping points” is doubtful. Why then does he not see that mitigation is vital and must be pursued far more vigorously than in the past? Nordhaus has estimated that the cost of not exceeding a temperature increase of 2°C might be 1 to 2% of world income if worldwide cooperation could be assured. Surely that is not too high a price for insuring the continuance of human society as we know it!

ALICE AYCOCK, Twin Vortexes (Park Avenue Paper Chase), Painted aluminum, 12′ high × 12′ wide × 18′ long, Edition of 2, 2014. The sculpture is currently installed at 54th Street on Park Avenue.

ALICE AYCOCK, Maelstrom (Park Avenue Paper Chase), Painted aluminum, 12′ high × 16′ wide × 67′ long, Edition of 2, 2014. The sculpture is currently installed between 52nd and 53rd Streets on Park Avenue.

Hayward states that “Conservative skepticism is less about science per se than its claims to usefulness in the policy realm.” But climate change is a policy issue that science has placed before the nations of the world, and science clearly has a useful role in the policy response, both through the technologies of emissions control and by adaptive agriculture and public health measures. To rely chiefly on “adaptive resiliency” and not have a major role for emissions control is to tie one hand behind one’s back.

EVILLE GORHAM

Regents’ Professor of Ecology Emeritus

University of Minnesota

Minneapolis, Minnesota

Steven E. Hayward should be commended for his thoughtful article, in which he explains why political conservatives do not want to confront the challenge of climate change. Nevertheless, the article did not increase my sympathy for the conservative position, and I would like to explain why.

Hayward begins by explaining why appeals to scientific authority alienate conservatives. Science is not an endeavor that anyone must accept on the word of authority. People should feel free to examine and question scientific work and results. But it doesn’t make sense to criticize science without making an effort to thoroughly understand the science first: the hypotheses together with the experiments that attempt to prove them. What too many conservatives do is deny the science out of hand without understanding it well, dismissing it because of a few superficial objections. I read of one skeptic who dismissed global warming because water vapor is a more powerful greenhouse gas than carbon dioxide. That’s true, but someone who thinks through the argument will understand why that doesn’t make carbon dioxide emissions less of a problem. Climate change is a challenge that we may not agree on how to confront, but that doesn’t excuse any of us from thinking it through carefully.

Hayward points out that “the climate enterprise is the largest crossroads of physical and social science ever contemplated.” That may be true, but conservatives don’t separate the two, and they should. If the science is wrong, they need to explain how the data are flawed, how the theory has not taken all the variables into account, how the statistical analysis is incorrect, or how the data admit of more than one interpretation. If the policy prescriptions are wrong, then they need to explain why these prescriptions will not obtain the results we seek or how they will cost more than the benefits they will provide. Then they need to come up with better alternatives. But too many conservatives don’t separate the science from the policy; they conflate the two. They accuse the scientists of being liberals, and then they won’t consider either the science or the policy. That’s just wrong.

Hayward further explains that conservatives “doubt you can ever understand all the relevant linkages correctly or fully, and especially in the policy responses put forth that emphasize the combination of centralized knowledge with centralized power.” I agree with that, but that shouldn’t stop us from trying to prevent serious problems. Hayward’s statement is a powerful argument for caution, but policy often has unintended consequences, and when we’re faced with a threat, we act. We didn’t understand all the consequences of entering World War II, building the atomic bomb, passing the Civil Rights Act, inventing Social Security, or going to war in Afghanistan, but we did them because we thought we had to. Then we dealt with the consequences as best we could. Climate change should be no different.

The weakest part of Hayward’s article is his charge that “the American scientific community—or at least certain vocal parts of it—is susceptible to the charge that it has become an ideological faction.” Now I’m not sure that scientists are the monolithic bloc Hayward makes them out to be (can he point to a poll?). But even if it is true, it is entirely irrelevant. Scientific work always deserves to be evaluated on its own merits, regardless of whatever personal leanings the investigators might have. Good scientific work is objective and verifiable, and if the investigators are allowing their work to be influenced by their personal biases, that should come out in review, especially if many scientific studies of the same phenomenon are being evaluated. The political leanings of the investigators are a very bad reason for ignoring their work.

Just a couple of other points. Hayward states that “Future historians are likely to regard as a great myopic mistake the collective decision to treat climate change as more or less a large version of traditional air pollution to be attacked with the typical emissions control policies,” but it is hard to see how the problem of greenhouse gas concentrations in the atmosphere can be resolved any other way. We can’t get a handle on global warming unless we find a way to limit emissions of greenhouse gases (or counterbalance the emissions with sequestration, which will take just as much effort). Emissions control is not just a tactic, it is a central goal, just like fighting terrorism and curing cancer are central goals. We might fail to achieve them, but that shouldn’t happen because of lack of trying. We need to be patient and persevere. If we environmentalists are correct, the evidence will mount, and public opinion will eventually side with us. By beginning to work on emissions control now, we will all be in a better position to move quickly when the political winds shift in our favor.

Hayward’s alternative to an aggressive climate policy is what he calls “building adaptive resiliency,” but he is very vague about what that means. Does he mean that individuals and companies should adapt to climate change on their own, or that governments need to promote resiliency? If so, how? The point of environmentalists is that even if we are able to adapt to climate change without large loss of life and property, it will be far more expensive then than if we take direct measures to confront the source of the problem—carbon emissions—now. And we really don’t have much time. If the climate scientists are correct, we have only 50 to 100 years before some of the worst effects of climate change start hitting us. Considering the size and complexity of the problem and the degree of cooperation that any serious effort to address climate change will require from all levels of governments, companies, and private individuals, that’s not a lot of time. We had better get moving.

ALICE AYCOCK, Waltzing Matilda (Park Avenue Paper Chase), Reinforced fiberglass, 15′ high × 15′ wide × 18′ long, Edition of 2, 2014. The sculpture is currently installed at 56th Street on Park Avenue.

Hayward earns our gratitude for helping us better understand how conservatives feel on this important issue. Nevertheless, the conservative movement is full of bright and intelligent people who could be contributing many valuable ideas to the climate debate, and they’re not. That’s a real shame.

MICHAEL H. KLEIN

Brooklyn, New York

mhkblogs@gmail.com

Does U.S. science still rule?

In “Is U.S. Science in Decline?” (Issues, Spring 2014), Yu Xie offers a glimpse into the plight of early-career scientists. The gravity of the situation cannot be overstated. Many young researchers have become increasingly disillusioned and frustrated about their career trajectory because of declining federal support for basic scientific research.

Apprehension among early-career scientists is rooted in the current fiscal environment. In fiscal year 2013, the National Institutes of Health (NIH) funded roughly 700 fewer grants because of sequestration, the across-the-board spending cuts that remain an albatross for the entire research community. To put this into context, the success rate for grant applications is now one out of six and may worsen if sequestration is not eliminated. This has left many young researchers rethinking their career prospects. In 1980, close to 18% of all principal investigators (PIs) were age 36 and under, and the percentage has fallen to about 3% in recent years. NIH Director Francis Collins has said that the federal funding climate for research “keeps me awake at night,” and echoed this sentiment at a recent congressional hearing: “I am worried that the current financial squeeze is putting them [early-career scientists] in particular jeopardy in trying to get their labs started, in trying to take on things that are innovative and risky.” Samantha White, a public policy fellow for Research!America, a nonprofit advocacy alliance, sums up her former career path as a researcher in two words: “anxiety-provoking.” She left bench work temporarily to support research in the policy arena, describing to lawmakers the importance of a strong investment in basic research.

The funding squeeze has left scientists with limited resources and many of them, like White, pursuing other avenues. More than half of academic faculty members and PIs say they have turned away promising new researchers since 2010 because of the minimal growth of the federal science agencies’ budgets, and nearly 80% of scientists say they spend more time writing grant applications, according to a survey by the American Society for Biochemistry and Molecular Biology. Collins lamented this fact in a USA Today article: “We are throwing away probably half of the innovative, talented research proposals that the nation’s finest biomedical community has produced,” he said. “Particularly for young scientists, they are now beginning to wonder if they are in the wrong field. We have a serious risk of losing the most important resource that we have, which is this brain trust, the talent and the creative energies of this generation of scientists.”

U.S. Nobel laureates relied on government funding early in their careers to advance research that helped us gain a better understanding of how to treat and prevent deadly diseases. We could be squandering opportunities for the next generation of U.S. laureates if policymakers fail to make a stronger investment in medical research and innovation.

MIKE COBURN

Chief Operating Officer

Research!America

Alexandria, Virginia

www.researchamerica.org

@ResearchAmerica

Chinese aspirations

Junbo Yu’s article “The Politics Behind China’s Quest for Nobel Prizes” (Issues, Spring 2014) tells an interesting story about how China is applying its strategy for winning Olympic gold to science policy. The story might fit well with the Western stereotype of the Communist bureaucrats, but the real politics of it are more complex and nuanced.

First of all, let’s get the story straight. The article refers to a recent “10,000 Talent” program run by the organizational department of the Chinese Communist Party. It is a major talent development program aimed at selecting and supporting domestically trained talent in various areas, including scientists, young scholars, entrepreneurs, top teachers, and skilled engineers. The six scientists referred to in Yu’s article were among the 277 people identified as the program’s first cohort. Although there were indeed media reports describing these scientists as candidates being groomed to win Nobel Prizes, relevant officials quickly dismissed those reports as media hype and misunderstanding. For example, three of the first six scientists work in research areas that have no relevance to Nobel Prizes at all.

The real political issue is how to balance between talent trained overseas and talent trained domestically. In 2008, China initiated a “1,000 Talent” program aimed at attracting highly skilled Chinese living overseas to return to China. It was estimated that between 1978 and 2011, more than 2 million Chinese students went abroad to study and that only 36.5% of them returned. Although the 1,000 Talent program has been successful in attracting outstanding scholars back to China, it has also generated some unintended consequences.

As part of the recruitment package, the program gives each returnee a one-million-RMB (roughly $160,000) settlement payment. Many returnees can also get special research grants and a salary comparable to what they were paid overseas. This preferential treatment has generated some concern and resentment among those who were trained domestically, who have to compete hard for research grants and whose salaries are ridiculously low. In an Internet survey conducted in China by Yale University, many people expressed support for the government’s efforts to attract people back from overseas, but felt it was unfair to give people benefits based on where they were trained rather than on how they perform.

In response to these criticisms and concerns, the 10,000 Talent program was developed as a way to focus on domestically trained talent. Instead of going through a lengthy selection process, the program tried to integrate various existing talent programs run by various government agencies.

Although these programs might be useful in the short run, the best way to attract and keep talented people is to create an open, fair, and nurturing environment for people who love research, and to pay them adequately so that they can have a decent life. It is simple and doable in China now, and in the long run it will be much more effective than the 1,000 Talent and 10,000 Talent programs.

LAN XUE

Professor and Dean

School of Public Policy and Management

Tsinghua University

Beijing, China

xue.lan@tsinghua.edu.cn

The idea of China winning a Nobel Prize in science may seem like a stretch to many who understand the critical success factors that drive world-class research at the scientific frontier. Although new reforms in the science and technology (S&T) sector have been introduced since September 2012, the Chinese R&D system continues to be beset by many deep-seated organizational and management issues that need to be overcome if real progress is to be possible. Nonetheless, Junbo Yu’s article reminds us that sometimes there is more to the scientific endeavor than just the work of a select number of scientists toiling away in some well-equipped laboratory.

If we take into account the full array of drivers underlying China’s desire to have a native son win one of these prestigious prizes, we must place national will and determination at the top of the key factors that will determine Chinese success. Yu’s analysis helps remind us just how important national prestige and pride are as factors motivating the behavior of the People’s Republic of China’s leaders in terms of investment and the commitment of financial resources. At times, I wonder whether we here in the United States should pay a bit more deference to these normative imperatives. In a world where competition has become more intense and the asymmetries of the past are giving way to greater parity in many S&T fields, becoming excited about the idea of “winning” or forging a sense of “national purpose” may not be as distorted as perhaps suggested in the article. Too many Americans take for granted the nation’s continued dominance in scientific and technological affairs, when all of the signals are pointing in the opposite direction. In sports, we applaud the team that is able to muster the team spirit and determination to carve out a key victory. Why not in S&T?

That said, where the Chinese leadership may have gone astray in its somewhat overheated enthusiasm for securing a Chinese Nobel Prize is its failure to recognize that globalization of the innovation process has made the so-called “scientific Lone Ranger” an obsolete idea. Most innovation efforts today are both transnational and collaborative in nature. China’s future success in terms of S&T advancement will be just as dependent on China’s ability to become a more collaborative nation as it will on its own home-grown efforts.

Certainly, strengthening indigenous innovation in China is an appropriate national objective, but as the landscape of global innovation continues to shift away from individual nation-states and more in the direction of cross-border, cross-functional networks of R&D cooperation, the path to the Nobel Prize for China may be in a different direction than China seems to have chosen. Remaining highly globally engaged and firmly embedded in the norms and values that drive successful collaborative outcomes will prove to be a faster path to the Nobel Prize for Chinese scientists than will working largely from a narrow national perspective. And it also may be the best path for raising the stature and enhancing the credibility of the current regime on the international stage.

DENIS SIMON

Senior Adviser for China & Global Affairs

Foundation Professor of Contemporary Chinese Affairs

Arizona State University

Tempe, Arizona

denis.simon@asu.edu

Junbo Yu raises a number of interesting, but complex, questions about the current state of science, and science policy, in China. As a reflection of a broad cultural nationalism, many Chinese see the quest for Nobel Prizes in science and medicine as a worthy major national project. For a regime seeking to enhance legitimacy through appeals to nationalism, the use of policy tools by the Party/state to promote this quest is understandable. Although understandable, it may also be misguided. China has many bright, productive scientists who, in spite of the problems of China’s research culture noted by Yu, are capable of Nobel-quality work. They will be recognized with prizes sooner or later, but this will result from the qualities of mind and habit of individual researchers, not national strategy.

The focus on Nobel Prizes detracts from broader questions about scientific development in 21st century China involving tensions between principles of scientific universalism and the social and cultural “shaping” of science and technology in the Chinese setting. The rapid enhancement of China’s scientific and technological capabilities in recent years has occurred in a context where many of the internationally accepted norms of scientific practice have not always been observed. Nevertheless, through international benchmarking, serious science planning, centralized resource mobilization, the abundance of scientific labor available for research services, and other factors, much progress has been made by following a distinctive “Chinese way” of scientific and technological development. The sustainability of this Chinese way, however, is now at issue, as is its normative power for others.

Over the past three decades, China has faced a challenge of ensuring that policy and institutional design are kept in phase with a rapidly changing innovation system. Overall, policy adjustments and institutional innovations have been quite successful in allowing China to pass through a series of catch-up stages. However, the challenge of moving beyond catch-up now looms large, especially with regard to the development of policies and institutions to support world class basic research, as Yu suggests. Misapprehension in the minds of political leaders and bureaucrats about the nature of research and innovation in the 21st century may also add to the challenge. The common conflation of “science” and “technology” in policy discourse, as seen in the Chinese term keji (best translated as “scitech”), is indicative. So too is the belief that scientific and technological development remains in essence a national project, mainly serving national political needs, including ultimately national pride and Party legitimacy, as Yu points out.

ALICE AYCOCK, Twister 12 feet (Park Avenue Paper Chase), Aluminum, 12′ high × 12′ diameter, Unique edition, 2014. The sculpture is currently installed at 66th Street on Park Avenue.

In 2006, China launched its 15-year “Medium to Long-Term Plan for Scientific and Technological Development” (MLP). Over the past year, the Ministry of Science and Technology has been conducting an extensive midterm evaluation of the Plan. At the same time, as recognized in the ambitious reform agenda of the new Xi Jinping government, the need for significant reforms in the nation’s innovation system, largely overlooked in 2006, has become more evident. There is thus a certain disconnect between the significant resource commitments entailed in the launching of the ambitious MLP and the reality that many of the institutions required for the successful implementation of the plan may not be suitable to the task. The fact that many of the policy assumptions about the role of government in the innovation system that prevailed in 2006 seemingly are not shared by the current government suggests that the politics of Chinese science involve much more than the Nobel Prize quest.

RICHARD P. (PETE) SUTTMEIER

Professor of Political Science, Emeritus

University of Oregon

Eugene, Oregon

petesutt@uoregon.edu

Although it intriguingly links the production of a homegrown Nobel science laureate to the legitimacy of the Chinese Communist Party, Junbo Yu’s piece just recasts what I indicated 10 years ago. In a paper entitled “Chinese Science and the ‘Nobel Prize Complex’,” published in Minerva in 2004, I argued that China’s enthusiasm for a Nobel Prize in science since the turn of the century reflects the motivations of China’s political as well as scientific leadership. But “various measures have failed to bring home those who are of the calibre needed to win the Nobel Prize. Yet, unless this happens, it will be a serious blow to China’s political leadership. …So to win a ‘home-grown’ Nobel Prize becomes a face-saving gesture.” “This Nobel-driven enthusiasm has also become part of China’s resurgent nationalism, as with winning the right to host the Olympics,” an analogy also alluded to by Yu.

In a follow-up, “The Universal Values of Science and China’s Nobel Prize Pursuit,” forthcoming again in Minerva, I point out that in China, “science, including the pursuit of the Nobel Prize, is more a pragmatic means to achieve the ends of the political leadership—the national pride in this case—than an institution laden with values that govern its practices.”

As we know, in rewarding those who confer the “greatest benefit on mankind,” the Nobel Prize in science embodies an appreciation and celebration of not merely breakthroughs, discoveries, and creativity, but a universal set of values that are shared and practiced by scientists regardless of nationality or culture.

These core values of truth-seeking, integrity, intellectual curiosity, the challenging of authority, and above all, freedom of inquiry are shared by scientists all over the world. It is recognition of these values that could lead to the findings that may one day land their finders a Nobel Prize.

China’s embrace of science dates back only to the May Fourth Demonstrations in 1919, when scholars, disillusioned with the direction of the new Chinese republic after the fall of the Qing Dynasty, called for a move away from traditional Chinese culture to Western ideals; or as they termed it, a rejection of Mr. Confucius and the acceptance of Mr. Science and Mr. Democracy.

However, these concepts of science and democracy differed markedly from those advocated in the West and were used primarily as vehicles to attack Confucianism. The science championed during the May Fourth movement was celebrated not for its Enlightenment values but for its pragmatism, its usefulness.

Francis Bacon’s maxim that “knowledge is power” ran right through Mao Zedong’s view of science after the founding of the People’s Republic in 1949. Science and technology were considered as integral components of nation-building; leading academics contributed their knowledge for the sole purpose of modernizing industry, agriculture, and national defense.

The notion of saving the nation through science during the Nationalist regime has translated into current Communist government policies of “revitalizing the nation with science, technology, and education” and “strengthening the nation through talent.” A recent report by the innovation-promotion organization Nesta characterized China as “an absorptive state,” adding practical value to existing foreign technologies rather than creating new technologies of its own.

This materialistic emphasis reflects the use of science as a means to a political end to make China powerful and prosperous. Rather than arbitrarily picking possible Nobel Prize winners, the Chinese leadership would do well to apply the core values of science to the nurturing of its next generation of scientists. Only when it abandons cold-blooded pragmatism for a value-driven approach to science can it hope to win a coveted Nobel Prize and ascend to real superpower status.

Also, winning a Nobel Prize is completely different from winning a gold medal at the Olympics. Until the creation of an environment conducive to first-rate research and nurturing talent, which cannot be achieved through top-down planning, mobilization, and concentration of resources (the hallmarks of China’s state-sponsored sports program), this Nobel pursuit will continue to vex the Chinese for many years to come.

CONG CAO

Associate Professor and Reader

School of Contemporary Chinese Studies

University of Nottingham

Nottingham, UK

cong.cao@nottingham.ac.uk

CRAIG BOARDMAN

The New Visible Hand: Understanding Today’s R&D Management

Recent decades have seen dramatic if not revolutionary changes in the organization and management of knowledge creation and technology development in U.S. universities. Market demands and public values conjointly influence and in many cases supersede the disciplinary interests of academic researchers in guiding scientific and technological inquiry toward social and economic ends. The nation is developing new institutions to convene diverse sets of actors, including scientists and engineers from different disciplines, institutions, and economic sectors, to focus attention and resources on scientific and technological innovation (STI). These new institutions have materialized in a number of organizational forms, including but not limited to national technology initiatives, science parks, technology incubators, cooperative research centers, proof-of-concept centers, innovation networks, and any number of what the innovation ecosystems literature refers to generically (and in most cases secondarily) as “bridging institutions.”

The proliferation of bridging institutions on U.S. campuses has been met with a somewhat bifurcated response. Critics worry that this new purpose will detract from the educational mission of universities; advocates see an opportunity for universities to make an additional contribution to the nation’s well-being. The evidence so far indicates that bridging institutions have diminished neither the educational nor the knowledge-creation activities of universities. Bridging institutions complement rather than substitute for traditional university missions and over time may prove critical pivot points in the U.S. innovation ecosystem.

The growth of bridging institutions is a manifestation of two larger societal trends. The first is that the source of U.S. global competitive advantage in STI is moving away from a simple superiority in certain types of R&D to a need to effectively and strategically manage the output of R&D and integrate it more rapidly into the economy through bridging institutions. The second is the need to move beyond the perennial research policy question of whether or not the STI process is linear, to tackle the more complex problem of how to manage the interweaving of all aspects of STI.

The visible hand

This article’s title harkens back to Alfred Chandler’s landmark book The Visible Hand: The Managerial Revolution in American Business. In that book, Chandler makes the case that the proliferation of the modern multiunit business enterprise was an institutional response to the rapid pace of technological innovation that came with industrialization and increased consumer demand. For Chandler, what was revolutionary was the emergence of management as a key factor of production for U.S. businesses.

Similarly, the proliferation of bridging institutions on U.S. campuses has been an institutional response to the increasing complexity of STI and also to public demand for problem-focused R&D with tangible returns on public research investments. As a result, U.S. departments and agencies supporting intramural and extramural R&D are now very much focused on establishing bridging institutions—and in the case of proof-of-concept centers, bridging institutions for bridging institutions—involving experts from numerous scientific and engineering disciplines from academia, business, and government.

To name just a few, the National Science Foundation (NSF) has created multiple cooperative research center programs and recently added the I-Corps program for establishing regional networks for STI. The Department of Energy (DOE) has its Energy Frontier Research Centers and Energy Innovation Hubs. The National Institutes of Health (NIH) have Translational Research Centers and also what they refer to as “team science.” The Obama administration has its Institutes for Manufacturing Innovation. But this is only a tiny sample. The Research Centers Directory counts more than 8,000 organized research units for STI in the United States and Canada, and over 16,000 worldwide. This total includes many traditional departmental labs, where management is not as critical a factor, but a very large number are bridging institutions created to address management concerns.

The analogy between Chandler’s observations about U.S. business practices and the proliferation of bridging institutions on U.S. campuses is not perfect. Whereas Chandler’s emphasis on management in business had more to do with the efficient production and distribution of routine and standard consumer goods and services, the proliferation of bridging institutions on U.S. campuses has had more to do with effective and commercially viable (versus efficient) knowledge creation and technology development, which cannot be routinized by way of management in the same way as can, say, automobile manufacturing.

Nevertheless, management—albeit a less formal kind of management than that Chandler examines—is now undeniably a key factor of production for STI on U.S. campuses. Many nations are catching up with the United States in the percentage of their gross domestic product devoted to R&D, so that R&D alone will not be sufficient to sustain U.S. leadership. The promotion of organizational cultures enabling bridging institutions to strategically manage social network ties among diverse sets of scientists and engineers toward coordinated problem-solving is what will help the United States maintain global competitive advantage in STI.

Historically, U.S. research policy has focused on two things with regard to universities to help ensure the U.S. status as the global STI hegemon. First, it has made sure that U.S. universities have had all the usual “factors of production” for STI, e.g., funding, technology, critical materials, infrastructure, and the best and the brightest in terms of human capital. Second, U.S. research policy has encouraged university R&D in applied fields by, for example, allowing universities to obtain intellectual property rights emerging from publicly funded R&D. In the past, then, an underlying assumption of U.S. research policy was that universities are capable of and willing to conduct problem-focused R&D and to bring the fruits of that research to market if given the funds and capital to do the R&D, as well as ownership of any commercial outputs.

But U.S. research policy regarding universities has been imitated abroad, and for this reason, among others, many countries have closed the STI gap with the United States, at least in particular technology areas. One need read only one or both of the National Academies’ Gathering Storm volumes to learn that the United States is now on a more level playing field with China, Japan, South Korea, and the European Union in terms of R&D spending in universities, academic publications and publication quality, academic patents and patent quality, doctorate production, and market share in particular technology areas. Quibbles with the evidentiary bases of the Gathering Storm volumes notwithstanding, there is little arguing that the United States faces increased competition in STI from abroad.

Although the usual factors of production for STI and property rights should remain components of U.S. research policy, these are no longer adequate to sustain U.S. competitive advantage. Current and future U.S. research policy for universities must emphasize factors of production for STI that are less easily imitated, namely organizational cultures in bridging institutions that are conducive to coordinated problem-solving. An underlying assumption of U.S. research policy should be that universities for the most part cannot or will not go it alone commercially even if given the funds, capital, and property rights to do so (there are exceptions, of course), but rather that they are more likely to navigate the “valley of death” in conjunction with businesses, government, and other universities.

Encouraging cross-sector, inter-institutional R&D in the national interest must become a major component of U.S. research policy for universities, and bridging institutions must play a central role. Anecdotal reports suggest that bridging institutions differ widely in their effectiveness, but one of the challenges facing the nation is to better understand the role that management plays in the success of bridging institutions. Calling something a bridging institution does not guarantee that it will make a significant contribution to meeting STI goals.

The edge of the future

The difference between the historic factors of production for STI discussed above and organizational cultures in bridging institutions is that the former are static, simple, and easy to imitate, whereas the latter are dynamic, complex, and difficult to observe, much less copy. This is not an original insight; the business literature made this case in the 1980s and 1990s. A firm’s intangible assets, its organizational culture and the tacit norms and expectations for organizational behavior that this entails, can be and oftentimes are a source of competitive advantage because they are difficult to measure and thus hard for competing firms to emulate.

University leaders and scholars have recognized that bridging institutions on U.S. campuses can be challenging to organize and manage and that the ingredients for an effective organizational culture are still a mystery. There is probably as much literature on the management challenges of bridging institutions as there is on their performance. Whereas the management of university faculty in traditional academic departments is commonly referred to as “herding cats,” coordinating faculty from different disciplines and universities, over whom bridging institutions have no line authority, to work together and also to cooperate with industry and government is akin to herding feral cats.

But beyond this we know next to nothing about the organizational cultures of bridging institutions. The cooperative research centers and other types of bridging institutions established by the NSF, DOE, NIH, and other agencies are most often evaluated for their knowledge and technology outcomes and, increasingly, for their social and economic impact, but seldom have research and evaluation focused on what’s inside the black box. All we know for certain is that some bridging institutions on U.S. campuses are wildly successful and others are not, with little systematic explanation as to why.

Developing an understanding of organizational cultures in bridging institutions is important not just because these can be relatively tacit and difficult to imitate, but additionally because other, more formal aspects of the management of bridging institutions are less manipulable. Unlike Chandler’s emphasis on formal structures and authorities in U.S. businesses, bridging institutions do not have many layers of hierarchy, nor do they have centralized decisionmaking. As organizations focused on new knowledge creation and technology development, bridging institutions typically are flat and decentralized, and therefore vary much more culturally and informally than structurally.

There are frameworks for deducing the organizational cultures of bridging institutions. One is the competing values framework developed by Kim Cameron and Robert Quinn. Another is organizational economics’ emphasis on informal mechanisms such as resource interdependencies and goal congruence. A third framework is the organizational capital approach from strategic human resources management. These frameworks have been applied in the business literature to explore the differences between Silicon Valley and Route 128 microcomputer companies, and they can be adapted for use in comparing the less formal structures of bridging institutions.

What’s more, U.S. research policy must take into account how organizational cultures in bridging institutions interact with “best practices.” We know that in some instances, specific formalized practices are associated with successful STI in bridging institutions, but in many other cases, these same practices are followed in unsuccessful institutions. For bridging institutions, best practices may be best only in combination with particular types of organizational culture.

Inside the black box

The overarching question that research policy scholars and practitioners should address is what organizational cultures lead to different types of STI in different types of bridging institutions. Most research on bridging institutions emphasizes management challenges and best practices, and the literature on organizational culture is limited. We need to address in a systematic fashion how organizational cultures operate to bring diverse sets of scientists and engineers together in coordinated problem-solving.

Specifically, research policy scholars and practitioners should address variation across the “clan” type of organizational culture in bridging institutions. To the general organizational scholar, all bridging institutions have the same culture: decentralized and nonhierarchical. But to research policy scholars and practitioners, there are important differences in the organization and management of what essentially amounts to collectives of highly educated volunteers. How is it that some bridging institutions elicit tremendous contributions from academic faculty and industry researchers, whereas others do not? What aspects of bridging institutions enable academic researchers to work with private companies, spin off their own companies, or patent their inventions?

These questions point to related questions about different types of bridging institutions. There are research centers emphasizing university/industry interactions for new and existing industries, university technology incubators and proof-of-concept centers focused on business model development and venture capital, regional network nodes for STI, and university science parks co-locating startups and university faculty. Which of these bridging institutions are most appropriate for which sorts of STI? When should bridging institutions be interdisciplinary, cross-sectoral, or both? Are the different types of bridging institutions complements or substitutes for navigating the “valley of death”?

Research policy scholars and practitioners have their work cut out for them. There are no general data tracking cultural heterogeneity across bridging institutions. What data do exist, such as the Research Centers Directory compiled by Gale Research/Cengage Learning, track only the most basic organizational features. Other approaches such as the science of team science hold more promise, though much of this work emphasizes best practices and does not address organizational culture systematically. Research policy scholars and practitioners must develop new data sets that track the intangible cultural aspects of bridging institutions and connect these data to publicly available outcomes data for new knowledge creation, technology development, and workforce development.

Developing systematic understanding of bridging institutions is fundamental to U.S. competitiveness in STI. It is fundamental because bridging institutions are where the rubber hits the road in the U.S. innovation ecosystem. Bridging institutions provide forums for our nation’s top research universities, firms, and government agencies to exchange ideas, engage in coordinated problem solving, and in turn create new knowledge and develop new technologies addressing social and economic problems.

Developing systematic understanding of bridging institutions will be challenging because they are similar on the surface but different in important ways that are difficult to detect. During the 1980s, scholars identified striking differences in the organizational cultures of Silicon Valley and Route 128 microcomputing companies. Today, most bridging institutions follow a similar decentralized model for decisionmaking, with few formalized structures and authorities, yet they can differ widely in performance.

The most important variation across bridging institutions is to be found in the intangible, difficult-to-imitate qualities that allow for (or preclude) the coordination of diverse sets of scientists and engineers from across disciplines, institutions, and sectors. But this does not mean that scholars and practitioners should ignore the structural aspects of bridging institutions. In some cases, bridging institutions may exercise line authority over academic faculty (such as faculty with joint appointments), and these organizations may (or may not) outperform similar bridging institutions that do not exercise line authority.


Craig Boardman is associate director of the Battelle Center for Science and Technology Policy in the John Glenn School of Public Affairs at Ohio State University.


Eagle

GREGORY BENFORD

The long, fat freighter glided into the harbor at late morning—not the best time for a woman who had to keep out of sight.

The sun slowly slid up the sky as tugboats drew them into Anchorage. The tank ship, a big sectioned VLCC, was like an elephant ballerina on the stage of a slate-blue sea, attended by tiny dancing tugs.

Now off duty, Elinor watched the pilot bring them in past the Nikiski Narrows and slip into a long pier with gantries like skeletal arms snaking down, the big pump pipes attached. They were ready for the hydrogen sulfide to flow. The ground crew looked anxious, scurrying around, hooting and shouting. They were behind schedule.

Inside, she felt steady, ready to destroy all this evil stupidity.

She picked up her duffel bag, banged a hatch shut, and walked down to the shore desk. Pier teams in gasworkers’ masks were hooking up pumps to offload and even the faint rotten egg stink of the hydrogen sulfide made her hold her breath. The Bursar checked her out, reminding her to be back within 28 hours. She nodded respectfully, and her maritime ID worked at the gangplank checkpoint without a second glance. The burly guy there said something about hitting the bars and she wrinkled her nose. “For breakfast?”

“I seen it, ma’am,” he said, and winked.

She ignored the other crew, solid merchant marine types. She had only used her old engineer’s rating to get on this freighter, not to strike up the chords of the Seamen’s Association song.

She hit the pier and boarded the shuttle to town, jostling onto the bus, anonymous among boat crews eager to use every second of shore time. Just as she’d thought, this was proving the best way to get in under the security perimeter. No airline manifest, no Homeland Security ID checks. In the unloading, nobody noticed her, with her watch cap pulled down and baggy jeans. No easy way to even tell she was a woman.

Now to find a suitably dingy hotel. She avoided central Anchorage and kept to the shoreline, where small hotels from the TwenCen still did business. At a likely one on Sixth Avenue, the desk clerk told her there were no rooms left.

“With all the commotion at Elmendorf, ever’ damn billet in town’s packed,” the grizzled guy behind the counter said.

She looked out the dirty window, pointed. “What’s that?”

“Aw, that bus? Well, we’re gettin’ that ready to rent, but—”

“How about half price?”

“You don’t want to be sleeping in that—”

“Let me have it,” she said, slapping down a $50 bill.

“Uh, well.” He peered at her. “The owner said—”

“Show it to me.”

She got him down to $25 when she saw that it really was a “retired bus.” Something about it she liked, and no cops would think of looking in the faded yellow wreck. It had obviously fallen on hard times after it had served the school system.

It held a jumble of furniture, apparently to give it a vaguely homelike air. The driver’s seat and all else were gone, leaving holes in the floor. The rest was an odd mix of haste and taste. A walnut Victorian love seat with a medallion backrest held the center, along with a lumpy bed. Sagging upholstery and frayed cloth, cracked leather, worn wood, chipped veneer, a radio with the knobs askew, a patched-in shower closet, and an enamel basin toilet illuminated with a warped lamp completed the sad tableau. A generator chugged outside as a clunky gas heater wheezed. Authentic, in its way.

Restful, too. She pulled on latex gloves the moment the clerk left, and took a nap, knowing she would not soon sleep again. No tension, no doubts. She was asleep in minutes.

Time for the recon. At the rental place she’d booked, she picked up the wastefully big Ford SUV. A hybrid, though. No problem with the credit card, which looked fine at first use, then erased its traces with a virus that would propagate in the rental system, snipping away all records.

The drive north took her past the air base but she didn’t slow down, just blended in with late afternoon traffic. Signs along the highway now had to warn about polar bears, recent migrants to the land and even more dangerous than the massive local browns. The terrain was just as she had memorized it on Google Earth, the likely shooting spots isolated, thickly wooded. The Internet maps got the seacoast wrong, though. Two Inuit villages had recently sprung up along the shore within Elmendorf, as one of their people, posing as a fisherman, had observed and photographed. Studying the pictures, she’d thought they looked slightly ramshackle, temporary, hastily thrown up in the exodus from the tundra regions. No need to last, as the Inuit planned to return north as soon as the Arctic cooled. The makeshift living arrangements had been part of the deal with the Arctic Council for the experiments to make that possible. But access to post schools, hospitals, and the PX couldn’t make this home to the Inuit, couldn’t replace their “beautiful land,” as the word used by the Labrador peoples named it.

So, too many potential witnesses there. The easy shoot from the coast was out. She drove on. The enterprising Inuit had a brand new diner set up along Glenn Highway, offering breakfast anytime to draw odd-houred Elmendorf workers, and she stopped for coffee. Dark men in jackets and jeans ate solemnly in the booths, not saying much. A young family sat across from her, the father trying to eat while bouncing his small wiggly daughter on one knee, the mother spooning eggs into a gleefully uncooperative toddler while fielding endless questions from her bespectacled school-age son. The little girl said something to make her father laugh, and he dropped a quick kiss on her shining hair. She cuddled in, pleased with herself, clinging tight as a limpet.

They looked harried but happy, close-knit and complete. Elinor flashed her smile, tried striking up conversations with the tired, taciturn workers, but learned nothing useful from any of them.

Going back into town, she studied the crews working on planes lined up at Elmendorf. Security was heavy on roads leading into the base so she stayed on Glenn. She parked the Ford as near the railroad as she could and left it. Nobody seemed to notice.

At seven, the sun still high overhead, she came down the school bus steps, a new creature. She swayed away in a long-skirted yellow dress with orange Mondrian lines, her shoes casual flats, carrying a small orange handbag. Brushed auburn hair, artful makeup, even long artificial eyelashes. Bait.

She walked through the scruffy district off K Street, observing as carefully as on her morning reconnaissance. The second bar was the right one. She looked over her competition, reflecting that for some women, there should be a weight limit for the purchase of spandex. Three guys with gray hair were trading lies in a booth and checking her out. The noisiest of them, Ted, got up to ask her if she wanted a drink. Of course she did, though she was thrown off by his genial warning, “Lady, you don’t look like you’re carryin’.”

Rattled—had her mask of harmless approachability slipped?—she made herself smile, and ask, “Should I be?”

“Last week a brown bear got shot not two blocks from here, goin’ through trash. The polars are bigger, meat-eaters, chase the young males out of their usual areas, so they’re gettin’ hungry, and mean. Came at a cop, so the guy had to shoot it. It sent him to the ICU, even after he put four rounds in it.” Not the usual pickup line, but she had them talking about themselves. Soon, she had most of what she needed to know about SkyShield.

“We were all retired refuel jockeys,” Ted said. “Spent most of 30 years flyin’ up big tankers full of jet fuel, so fighters and B-52s could keep flyin’, not have to touch down.”

Elinor probed, “So now you fly—”

“Same aircraft, most of ’em 40 years old—KC Stratotankers, or Extenders—they extend flight times, y’see.”

His buddy added, “The latest replacements were delivered just last year, so the crates we’ll take up are obsolete. Still plenty good enough to spray this new stuff, though.”

“I heard it was poison,” she said.

“So’s jet fuel,” the quietest one said. “But it’s cheap, and they needed something ready to go now, not that dust-scatter idea that’s still on the drawing board.”

Ted snorted. “I wish they’d gone with dustin’—even the traces you smell when they tank up stink like rottin’ eggs. More than a whiff, though, and you’re already dead. God, I’m sure glad I’m not a tank tech.”

“It all starts tomorrow?” Elinor asked brightly.

“Right, 10 KCs takin’ off per day, returnin’ the next from Russia. Lots of big-ticket work for retired duffers like us.”

“Who’re they?” she asked, gesturing to the next table. She had overheard people discussing nozzles and spray rates. “Expert crew,” Ted said. “They’ll ride along to do the measurements of cloud formation behind us, check local conditions like humidity and such.”

She eyed them. All very earnest, some a tad professorial. They were about to go out on an exciting experiment, ready to save the planet, and the talk was fast, eyes shining, drinks all around.

“Got to freshen up, boys.” She got up and walked by the tables, taking three quick shots in passing of the whole lot of them, under cover of rummaging through her purse. Then she walked around a corner toward the rest rooms, and her dress snagged on a nail in the wooden wall. She tried to tug it loose, but if she turned to reach the snag, it would rip the dress further. As she fished back for it with her right hand, a voice said, “Let me get that for you.”

Not a guy, but one of the women from the tech table. She wore a flattering blouse with comfortable, well-fitted jeans, and knelt to unhook the dress from the nail head.

“Thanks,” Elinor said, and the woman just shrugged, with a lopsided grin.

“Girls should stick together here,” the woman said. “The guys can be a little rough.”

“Seem so.”

“Been here long? You could join our group—always room for another woman, up here! I can give you some tips, introduce you to some sweet, if geeky, guys.”

“No, I… I don’t need your help.” Elinor ducked into the women’s room.

She thought about this unexpected, unwanted friendliness while sitting in the stall, and put it behind her. Then she went back into the game, fishing for information in a way she hoped wasn’t too obvious. Everybody likes to talk about their work, and when she got back to the pilots’ table, the booze worked in her favor. She found out some incidental information, probably not vital, but it was always good to know as much as you could. They already called the redesigned planes “Scatter Ships” and their affection for the lumbering, ungainly aircraft was reflected in banter about unimportant engineering details and tales of long-ago combat support missions.

One of the big guys with a wide grin sliding toward a leer was buying her a second martini when her cell rang.

“Albatross okay. Our party starts in 30 minutes,” said a rough voice. “You bring the beer.”

She didn’t answer, just muttered, “Damned salesbots…,” and disconnected.

She told the guy she had to “tinkle,” which made him laugh. He was a pilot just out of the Air Force, and she would have gone for him in some other world than this one. She found the back exit—bars like this always had one—and was blocks away before he would even begin to wonder.

Anchorage slid past unnoticed as she hurried through the broad deserted streets, planning. Back to the bus, out of costume, into all-weather gear, boots, grab some trail mix and an already-filled backpack. Her thermos of coffee she wore on her hip.

She cut across Elderberry Park, hurrying to the spot where her briefing said the trains paused before running into the depot. The port and rail lines snugged up against Elmendorf Air Force Base, convenient for them, and for her.

The freight train was a long clanking string and she stood in the chill gathering darkness, wondering how she would know where they were. The passing autorack cars had heavy shutters, like big steel Venetian blinds, and she could not see how anybody got into them.

But as the line clanked and squealed and slowed, a quick laser flash caught her, winked three times. She ran toward it, hauling up onto a slim platform at the foot of a steel sheet.

It tilted outward as she scrambled aboard, thudding into her thigh, nearly knocking her off. She ducked in and saw by the distant streetlights the vague outlines of luxury cars. A Lincoln sedan door swung open. Its interior light came on and she saw two men in the front seats. She got in the back and closed the door. Utter dark.

“It clear out there?” the cell phone voice asked from the driver’s seat.

“Yeah. What—”

“Let’s unload. You got the SUV?”

“Waiting on the nearest street.”

“How far?”

“Hundred meters.”

The man jigged his door open, glanced back at her. “We can make it in one trip if you can carry 20 kilos.”

“Sure,” though she had to pause to quickly do the arithmetic, 44 pounds. She had backpacked about that much for weeks in the Sierras. “Yeah, sure.”

The missile gear was in the trunks of three other sedans, at the far end of the autorack. As she climbed out of the car the men had inhabited, she saw the debris of their trip—food containers in the back seats, assorted junk, the waste from days spent coming up from Seattle. With a few gallons of gas in each car, so they could be driven on and off, these two had kept warm running the heater. If that ran dry, they could switch to another.

As she understood it, this degree of mess was acceptable to the railroads and car dealers. If the railroad tried to wrap up the autoracked cars to keep them out, the bums who rode the rails would smash windshields to get in, then shit in the cars, knife the upholstery. So they had struck an equilibrium. That compromise inadvertently produced a good way to ship weapons right by Homeland Security. She wondered what Homeland types would make of a Dart, anyway. Could they even tell what it was?

The rough-voiced man turned and clicked on a helmet lamp. “I’m Bruckner. This is Gene.”

Nods. “I’m Elinor.” Nods, smiles. Cut to the chase. “I know their flight schedule.”

Bruckner smiled thinly. “Let’s get this done.”

Transporting the parts in via autoracked cars was her idea. Bringing them in by small plane was the original plan, but Homeland might nab them at the airport. She was proud of this slick workaround.

“Did railroad inspectors get any of you?” Elinor asked.

Gene said, “Nope. Our two extras dropped off south of here. They’ll fly back out.”

With the auto freights, the railroad police looked for tramps sleeping in the seats. No one searched the trunks. So they had put a man on each autorack, and if some got caught, they could distract from the gear. The men would get a fine, be hauled off for a night in jail, and the shipment would go on.

“Luck is with us,” Elinor said. Bruckner looked at her, looked more closely, opened his mouth, but said nothing.

They both seemed jumpy by the helmet light. “How’d you guys live this way?” she asked, to get them relaxed.

“Pretty poorly,” Gene said. “We had to shit in bags.”

She could faintly smell the stench. “More than I need to know.”

Using Bruckner’s helmet light they hauled the assemblies out, neatly secured in backpacks. Bruckner moved with strong, graceless efficiency. Gene too. She hoisted hers on, grunting.

The freight started up, lurching forward. “Damn!” Gene said.

They hurried. When they opened the steel flap, she hesitated, jumped, stumbled on the gravel, but caught herself. Nobody within view in the velvet cloaking dusk.

They walked quietly, keeping steady through the shadows. It got cold fast, even in late May. At the Ford they put the gear in the back and got in. She drove them to the old school bus. Nobody talked.

She stopped them at the steps to the bus. “Here, put these gloves on.”

They grumbled but they did it. Inside, heater turned to high, Bruckner asked if she had anything to drink. She offered bottles of vitamin water but he waved it away. “Any booze?”

Gene said, “Cut that out.”

The two men eyed each other and Elinor thought about how they’d been days in those cars and decided to let it go. Not that she had any liquor, anyway.

Bruckner was lean, rawboned, and self-contained, with minimal movements and a constant, steady gaze in his expressionless face. “I called the pickup boat. They’ll be waiting offshore near Eagle Bay by eight.”

Elinor nodded. “First flight is 9:00 a.m. It’ll head due north, so we’ll see it from the hills above Eagle Bay.”

Gene said, “So we get into position… when?”

“Tonight, just after dawn.”

Bruckner said, “I do the shoot.”

“And we handle perimeter and setup, yes.”

“How much trouble will we have with the Indians?”

Elinor blinked. “The Inuit settlement is down by the seashore. They shouldn’t know what’s up.”

Bruckner frowned. “You sure?”

“That’s what it looks like. Can’t exactly go there and ask, can we?”

Bruckner sniffed, scowled, looked around the bus. “That’s the trouble with this nickel-and-dime operation. No real security.”

Elinor said, “You want security, buy a bond.”

Bruckner’s head jerked around. “Whassat mean?”

She sat back, took her time. “We can’t be sure the DARPA people haven’t done some serious public relations work with the Natives. Besides, they’re probably all in favor of SkyShield anyway—their entire way of life is melting away with the sea ice. And by the way, they’re not ‘Indians,’ they’re ‘Inuit.’”

“You seem pretty damn sure of yourself.”

“People say it’s one of my best features.”

Bruckner squinted and said, “You’re—”

“A maritime engineering officer. That’s how I got here and that’s how I’m going out.”

“You’re not going with us?”

“Nope, I go back out on my ship. I have first engineering watch tomorrow, 0100 hours.” She gave him a hard, flat look. “We go up the inlet, past Birchwood Airport. I get dropped off, steal a car, head south to Anchorage, while you get on the fishing boat, they work you out to the headlands. The bigger ship comes in, picks you up. You’re clear and away.”

Bruckner shook his head. “I thought we’d—”

“Look, there’s a budget and—”

“We’ve been holed up in those damn cars for—”

“A week, I know. Plans change.”

“I don’t like changes.”

“Things change,” Elinor said, trying to make it mild.

But Bruckner bristled. “I don’t like you cutting out, leaving us—”

“I’m in charge, remember.” She thought, He travels the fastest who travels alone.

“I thought we were all in this together.”

She nodded. “We are. But Command made me responsible, since this was my idea.”

His mouth twisted. “I’m the shooter, I—”

“Because I got you into the Ecuador training. Me and Gene, we depend on you.” Calm, level voice. No need to provoke guys like this; they did it enough on their own.

Silence. She could see him take out his pride, look at it, and decide to wait a while to even the score.

Bruckner said, “I gotta stretch my legs,” and clumped down the steps and out of the bus.

Elinor didn’t like the team splitting and thought of going after him. But she knew why Bruckner was antsy—too much energy with no outlet. She decided just to let him go.

To Gene she said, “You’ve known him longer. He’s been in charge of operations like this before?”

Gene thought. “There’ve been no operations like this.”

“Smaller jobs than this?”

“Plenty.”

She raised her eyebrows. “Surprising.”

“Why?”

“He walks around using that mouth, while he’s working?”

Gene chuckled. “ ’Fraid so. He gets the job done though.”

“Still surprising.”

“That he’s the shooter, or—”

“That he still has all his teeth.”

While Gene showered, she considered. Elinor figured Bruckner for an injustice collector, the passive-aggressive loser type. But he had risen quickly in The LifeWorkers, as they called themselves, brought into the inner cadre that had formulated this plan. Probably because he was willing to cross the line, use violence in the cause of justice. Logically, she should sympathize with him, because he was a lot like her.

But sympathy and liking didn’t work that way.

There were people who soon would surely yearn to read her obituary, and Bruckner’s too, no doubt. He and she were the cutting edge of environmental activism, and these were desperate times indeed. Sometimes you had to cross the line, and be sure about it.

Elinor had made a lot of hard choices. She knew she wouldn’t last long on the scalpel’s edge of active environmental justice, and that was fine by her. Her role would soon be to speak for the true cause. Her looks, her brains, her charm—she knew she’d been chosen for this mission, and the public one afterward, for these attributes, as much as for the plan she had devised. People listen, even to ugly messages, when the face of the messenger is pretty. And once they finished here, she would have to be heard.

She and Gene carefully unpacked the gear and started to assemble the Dart. The parts connected with a minimum of wiring and socket clasps, as foolproof as possible. They worked steadily, assembling the tube, the small recoil-less charge, snapping and clicking the connections.

Gene said, “The targeting antenna has a rechargeable battery, they tend to drain. I’ll top it up.”

She nodded, distracted by the intricacies of a process she had trained for a month ago. She set the guidance system. Tracking would first be infrared only, zeroing in on the target’s exhaust, but once in the air and nearing its goal, it would use multiple targeting modes—laser, IR, advanced visual recognition—to get maximal impact on the main body of the aircraft.

They got it assembled and stood back to regard the linear elegance of the Dart. It had a deadly, snakelike beauty, its shiny white skin tapered to a snub point.

“Pretty, yeah,” Gene said. “And way better than any Stinger. Next generation, smarter, near four times the range.”

She knew guys liked anything that could shoot, but to her it was just a tool. She nodded.

Gene caressed the lean body of the Dart, and smiled.

Bruckner came clumping up the bus stairs with a fixed smile on his face that looked like it had been delivered to the wrong address. He waved a lit cigarette. Elinor got up, forced herself to smile. “Glad you’re back, we—”

“Got some ’freshments,” he said, dangling some beers in their six-pack plastic cradle, and she realized he was drunk.

The smile fell from her face like a picture off a wall.

She had to get along with these two, but this was too much. She stepped forward, snatched the beer bottles and tossed them onto the Victorian love seat. “No more.”

Bruckner tensed and Gene sucked in a breath. Bruckner made a move to grab the beers and Elinor snatched his hand, twisted the thumb back, turned hard to ward off a blow from his other hand—and they froze, looking into each other’s eyes from a few centimeters away.

Silence.

Gene said, “She’s right, y’know.”

More silence.

Bruckner sniffed, backed away. “You don’t have to be rough.”

“I wasn’t.”

They looked at each other, let it go.

She figured each of them harbored a dim fantasy of coming to her in the brief hours of darkness. She slept in the lumpy bed and they made do with the furniture. Bruckner got the love seat—ironic victory—and Gene sprawled on a threadbare comforter.

Bruckner talked some but dozed off fast under booze, so she didn’t have to endure his testosterone-fueled patter. But he snored, which was worse.

The men napped and tossed and worried. No one bothered her, just as she wanted it. But she kept a small knife in her hand, in case. For her, sleep came easily.

After eating a cold breakfast, they set out before dawn, 2:30 a.m., Elinor driving. She had decided to wait till then because they could mingle with early morning Air Force workers driving toward the base. This far north, it started brightening by 3:30, and they’d be in full light before 5:00. Best not to stand out as they did their last reconnaissance. It was so cold she had to run the heater for five minutes to clear the windshield of ice. Scraping with her gloved hands did nothing.

The men had grumbled about leaving absolutely nothing behind. “No traces,” she said. She wiped down every surface, even though they’d worn medical gloves the whole time in the bus.

Gene didn’t ask why she stopped and got a gas can filled with gasoline, and she didn’t say. She noticed the wind was fairly strong and from the north, and smiled. “Good weather. Prediction’s holding up.”

Bruckner said sullenly, “Goddamn cold.”

“The KC Extenders will take off into the wind, head north.” Elinor judged the nearly cloud-free sky. “Just where we want them to be.”

They drove up a side street in Mountain View and parked overlooking the fish hatchery and golf course, so she could observe the big tank refuelers lined up at the loading site. She counted five KC-10 Extenders, freshly surplussed by the Air Force. Their big bellies reminded her of pregnant whales.

From their vantage point, they could see down to the temporarily expanded checkpoint, set up just outside the base. As foreseen, security was stringently tight this near the airfield—all drivers and passengers had to get out, be scanned, IDs checked against global records, briefcases and purses searched. K-9 units inspected car interiors and trunks. Explosives-detecting robots rolled under the vehicles.

She fished out binoculars and focused on the people waiting to be cleared. Some carried laptops and backpacks and she guessed they were the scientists flying with the dispersal teams. Their body language was clear. Even this early, they were jazzed, eager to go, excited as kids on a field trip. One of the pilots had mentioned there would be some sort of preflight ceremony, honoring the teams that had put all this together. The flight crews were studiedly nonchalant—this was an important, high-profile job, sure, but they couldn’t let their cool down in front of so many science nerds. She couldn’t see well enough to pick out Ted, or the friendly woman from the bar.

In a special treaty deal with the Arctic Council, they would fly from Elmendorf and arc over the North Pole, spreading hydrogen sulfide in their wakes. The tiny molecules of it would mate with water vapor in the stratospheric air, making sulfurics. Those larger, wobbly molecules reflected sunlight well—a fact learned from studying volcano eruptions back in the TwenCen. Spray megatons of hydrogen sulfide into the stratosphere, let water turn it into a sunlight-bouncing sheet—SkyShield—and they could cool the entire Arctic.

Or so the theory went. The Arctic Council had agreed to this series of large-scale experiments, run by the USA since they had the in-flight refuelers that could spread the tiny molecules to form the SkyShield. Small-scale experiments—opposed, of course, by many enviros—had seemed to work. Now came the big push, trying to reverse the retreat of sea ice and warming of the tundra.

Anchorage lay slightly farther north than Oslo, Helsinki, and Stockholm, but not as far north as Reykjavik or Murmansk. Flights from Anchorage to Murmansk would let them refuel and reload hydrogen sulfide at each end, then follow their paths back over the pole. Deploying hydrogen sulfide along their flight paths at 45,000 feet, they would spread a protective layer to reflect summer sunlight. In a few months, the sulfuric droplets would ease down into the lower atmosphere, mix with moist clouds, and come down as rain or snow, a minute, undetectable addition to the acidity already added by industrial pollutants. Experiment over.

The total mass delivered was far less than that from volcanoes like Pinatubo, which had cooled the whole planet in 1991–92. But volcanoes do messy work, belching most of their vomit into the lower atmosphere. This was to be a designer volcano, a thin skin of aerosols skating high across the stratosphere.

It might stop the loss of the remaining sea ice, the habitat of the polar bear. Only 10% of the vast original cooling sheets remained. Equally disruptive changes were beginning to occur in other parts of the world.

But geoengineered tinkerings would also be a further excuse to delay cutbacks in carbon dioxide emissions. People loved convenience, their air conditioning and winter heating and big lumbering SUVs. Humanity had already driven the air’s CO2 content to twice what it was before 1800, and with every developing country burning oil and coal as fast as they could extract them, only dire emergency could drive them to abstain. To do what was right.

The greatest threat to humanity arose not from terror, but error. Time to take the gloves off.

She put the binocs away and headed north. The city’s seacoast was mostly rimmed by treacherous mudflats, even after the sea kept rising. Still, there were coves and sandbars of great beauty. Elinor drove off Glenn Highway to the west, onto progressively smaller, rougher roads, working their way backcountry by Bureau of Land Management roads to a sagging, long-unused access gate for loggers. Bolt cutters made quick work of the lock securing its rusty chain closure. After she pulled through, Gene carefully replaced the chain and linked it with an equally rusty padlock, brought for this purpose. Not even a thorough check would show it had been opened, till the next time BLM tried to unlock it.

They were now on Elmendorf, miles north of the airfield, far from the main base’s bustle and security precautions. Thousands of acres of mudflats, woods, lakes, and inlet shoreline lay almost untouched, used for military exercises and not much else. Nobody came here except for infrequent hardy bands of off-duty soldiers or pilots, hiking with maps red-marked UXO for “Unexploded Ordnance.” Lost live explosives, remnants of past field maneuvers, tended to discourage casual sightseers and trespassers, and the Inuit villagers wouldn’t be berry-picking till July and August.

She consulted her satellite map, then took them on a side road, running up the coast. They passed above a cove of dark blue waters.

Beauty. Pure and serene.

The sea-level rise had inundated many of the mudflats and islands, but a small rocky platform lay near shore, thick with trees. Driving by, she spotted a bald eagle perched at the top of a towering spruce tree. She had started birdwatching as a Girl Scout and they had time; she stopped.

She left the men in the Ford and took out her long-range binocs. The eagle was grooming its feathers and eyeing the fish rippling the waters offshore. Gulls wheeled and squawked, and she could see sea lions knifing through fleeing shoals of herring, transient dark islands breaking the sheen of waves. Crows joined in onshore, hopping on the rocks and pecking at the predators’ leftovers.

She inhaled the vibrant scent of ripe wet salty air, alive with what she had always loved more than any mere human. This might be the last time she would see such abundant, glowing life, and she sucked it in, trying to lodge it in her heart for times to come.

She was something of an eagle herself, she saw now, as she stood looking at the elegant predator. She kept to herself, loved the vibrant natural world around her, and lived by making others pay the price of their own foolishness. An eagle caught hapless fish. She struck down those who would do evil to the real world, the natural one.

Beyond politics and ideals, this was her reality.

Then she remembered what else she had stopped for. She took out her cell phone and pinged the alert number.

A buzz, then a blurred woman’s voice. “Able Baker.”

“Confirmed. Get a GPS fix on us now. We’ll be here, same spot, for pickup in two to three hours. Assume two hours.”

Buzz buzz. “Got you fixed. Timing’s okay. Need a Zodiac?”

“Yes, definite, and we’ll be moving fast.”

“You bet. Out.”

Back in the cab, Bruckner said, “What was that for?”

“Making the pickup contact. It’s solid.”

“Good. But I meant, what took so long.”

She eyed him levelly. “A moment spent with what we’re fighting for.”

Bruckner snorted. “Let’s get on with it.”

Elinor looked at Bruckner and wondered if he wanted to turn this into a spitting contest just before the shoot.

“Great place,” Gene said diplomatically.

That broke the tension and she started the Ford.

They rose further up the hills northeast of Anchorage, and at a small clearing, she pulled off to look over the landscape. To the east, mountains towered in lofty gray majesty, flanks thick with snow. They all got out and surveyed the terrain and sight angles toward Anchorage. The lowlands were already thick with summer grasses, and the winds sighed southward through the tall evergreens.

Gene said, “Boy, the warming’s brought a lot of growth.”

Elinor glanced at her watch and pointed. “The KCs will come from that direction, into the wind. Let’s set up on that hillside.”

They worked around to a heavily wooded hillside with a commanding view toward Elmendorf Air Force Base. “This looks good,” Bruckner said, and Elinor agreed.

“Damn—a bear!” Gene cried.

They looked down into a narrow canyon with tall spruce. A large brown bear was wandering along a stream about a hundred meters away.

Elinor saw Bruckner haul out a .45 automatic. He cocked it.

When she glanced back the bear was looking toward them. It turned and started up the hill with lumbering energy.

“Back to the car,” she said.

The bear broke into a lope.

Bruckner said, “Hell, I could just shoot it. This is a good place to see the takeoff and—”

“No. We move to the next hill.”

Bruckner said, “I want—”

“Go!”

They ran.

One hill farther south, Elinor braced herself against a tree for stability and scanned the Elmendorf landing strips. The image wobbled as the air warmed across hills and marshes.

Lots of activity. Three KC-10 Extenders ready to go. One tanker was lined up on the center lane and the other two were moving into position.

“Hurry!” she called to Gene, who was checking the final setup menu and settings on the Dart launcher.

He carefully inserted the missile itself in the launcher. He checked, nodded and lifted it to Bruckner. They fitted the shoulder straps to Bruckner, secured it, and Gene turned on the full arming function. “Set!” he called.

Elinor saw a slight stirring of the center Extender and it began to accelerate. She checked: right on time, 0900 hours. Hard-core military like Bruckner, who had been a Marine in the Middle East, called the Air Force the “saluting Civil Service,” but they did hit their markers. The Extenders were not military now, just surplus, but flying giant tanks of sloshing liquid around the stratosphere demands tight standards.

“I make the range maybe 20 kilometers,” she said. “Let it pass over us, hit it close as it goes away.”

Bruckner grunted, hefted the launcher. Gene helped him hold it steady, taking some of the weight. Loaded, it weighed nearly 50 pounds. The Extender lifted off, with a hollow, distant roar that reached them a few seconds later, and Elinor could see that media coverage was high. Two choppers paralleled the takeoff for footage, then got left behind.

The Extender was a full-extension DC-10 airframe and it came nearly straight toward them, growling through the chilly air. She wondered if the chatty guy from the bar, Ted, was one of the pilots. Certainly, on a maiden flight the scientists who ran this experiment would be on board, monitoring performance. Very well.

“Let it get past us,” she called to Bruckner.

He took his head from the eyepiece to look at her. “Huh? Why—”

“Do it. I’ll call the shot.”

“But I’m—”

“Do it.”

The airplane was rising slowly and flew by them a few kilometers away.

“Hold, hold…” she called. “Fire.”

Bruckner squeezed the trigger and the missile popped out—whuff!—seemed to pause, then lit. It roared away, startling in its speed—straight for the exhausts of the engines, then correcting its vectors, turning, and rushing for the main body. Darting.

It hit with a flash and the blast came rolling over them. A plume erupted from the airplane, dirty black.

“Bruckner! Resight—the second plane is taking off.”

She pointed. Gene chunked the second missile into the Dart tube. Bruckner swiveled with Gene’s help. The second Extender was moving much too fast, and far too heavy, to abort takeoff.

The first airplane was coming apart, rupturing. A dark cloud belched across the sky.

Elinor said clearly, calmly, “The Dart’s got a max range about right so… shoot.”

Bruckner let fly and the Dart rushed off into the sky, turned slightly as it sighted, accelerated like an angry hornet. They could hardly follow it. The sky was full of noise.

“Drop the launcher!” she cried.

“What?” Bruckner said, eyes on the sky.

She yanked it off him. He backed away and she opened the gas can as the men watched the Dart zooming toward the airplane. She did not watch the sky as she doused the launcher and splashed gas on the surrounding brush.

“Got that lighter?” she asked Bruckner.

He could not take his eyes off the sky. She reached into his right pocket and took out the lighter. Shooters had to watch, she knew.

She lit the gasoline and it went up with a whump.

“Hey! Let’s go!” She dragged the men toward the car.

They saw the second hit as they ran for the Ford. The sound got buried in the thunder that rolled over them as the first Extender hit the ground kilometers away, across the inlet. The hard clap shook the air, made Gene trip, then stagger forward.

She started the Ford and turned away from the thick column of smoke rising from the launcher. It might erase any fingerprints or DNA they’d left, but it had another purpose too.

She took the run back toward the coast at top speed. The men were excited, already reliving the experience, full of words. She said nothing, focused on the road that led them down to the shore. To the north, a spreading dark pall showed where the first plane went down.

One glance back at the hill told her the gasoline had served as a lure. A chopper was hammering toward the column of oily smoke, buying them some time.

The men were hooting with joy, telling each other how great it had been. She said nothing.

She was happy in a jangling way. Glad she’d gotten through without the friction with Bruckner coming to a point, too. Once she’d been dropped off, well up the inlet, she would hike around a bit, spend some time birdwatching, exchange horrified words with anyone she met about that awful plane crash—No, I didn’t actually see it, did you?—and work her way back to the freighter, slipping by Elmendorf in the chaos that would be at crescendo by then. Get some sleep, if she could.

They stopped above the inlet, leaving the Ford parked under the thickest cover they could find. She looked for the eagle, but didn’t see it. Frightened skyward by the bewildering explosions and noises, no doubt. They ran down the incline. She thumbed on her comm, got a crackle of talk, handed it to Bruckner. He barked their code phrase, got confirmation.

A Zodiac was cutting a V of white, homing in on the shore. The air rumbled with the distant beat of choppers and jets, the search still concentrated around the airfield. She sniffed the rotten egg smell, already here from the first Extender. It would kill everything near the crash, but this far off should be safe, she thought, unless the wind shifted. The second Extender had gone down closer to Anchorage, so it would be worse there. She put that out of her mind.

Elinor and the men hurried down toward the shore to meet the Zodiac. Bruckner and Gene emerged ahead of her as they pushed through a stand of evergreens, running hard. If they got out to the pickup craft, then suitably disguised among the fishing boats, they might well get away.

But on the path down, a stocky Inuit man stood. Elinor stopped, dodged behind a tree.

Ahead of her, Bruckner shouted, “Out of the way!”

The man stepped forward, raised a shotgun. She saw something compressed and dark in his face.

“You shot down the planes?” he demanded.

A tall Inuit racing in from the side shouted, “I saw their car comin’ from up there!”

Bruckner slammed to a stop, reached down for his .45 automatic—and froze. The double-barreled shotgun could not miss at that range.

It had happened so fast. She shook her head, stepped quietly away. Her pulse hammered as she started working her way back to the Ford, slipping among the trees. The soft loam kept her footsteps silent.

A third man came out of the trees ahead of her. She recognized him as the young Inuit father from the diner, and he cradled a black hunting rifle. “Stop!”

She stood still, lifted her binocs. “I’m bird watching, what—”

“I saw you drive up with them.”

A deep, brooding voice behind her said, “Those planes were going to stop the warming, save our land, save our people.”

She turned to see another man pointing a large caliber rifle. “I, I, the only true way to do that is by stopping the oil companies, the corporations, the burning of fossil—”

The shotgun man, eyes burning beneath heavy brows, barked, “What’ll we do with ‘em?”

She talked fast, hands up, open palms toward him. “All that SkyShield nonsense won’t stop the oceans from turning acid. Only fossil—”

“Do what you can, when you can. We learn that up here.” This came from the tall man. The Inuit all had their guns trained on them now. The tall man gestured with his and they started herding the three of them into a bunch. The men’s faces twitched, fingers trembled.

The man with the shotgun and the man with the rifle exchanged nods, quick words in a complex, guttural language she could not understand. The rifleman seemed to dissolve into the brush, steps fast and flowing, as he headed at a crouching dead run down to the shoreline and the waiting Zodiac.

She sucked in the clean sea air and could not think at all. These men wanted to shoot all three of them, and so she looked up into the sky so as not to see it coming. High up in a pine tree with a snapped top, an eagle flapped down to perch. She wondered if this was the one she had seen before.

The oldest of the men said, “We can’t kill them. Let ‘em rot in prison.”

The eagle settled in. Its sharp eyes gazed down at her and she knew this was the last time she would ever see one. No eagle would ever live in a gray box. But she would. And never see the sky.


Is U.S. Science in Decline?

YU XIE

The nation’s position relative to other countries is changing, but this need not be reason for alarm.

“Who are the most important U.S. scientists today?” Our host posed the question to his guests at a dinner that I attended in 2003. Americans like to talk about politicians, entertainers, athletes, writers, and entrepreneurs, but rarely, if ever, scientists. Among a group of six academics from elite U.S. universities at the dinner, no one could name a single outstanding contemporary U.S. scientist.

This was not always so. For much of the 20th century, Albert Einstein was a household-name celebrity in the United States, and every academic was familiar with names such as James Watson, Enrico Fermi, and Edwin Hubble. Today, however, Americans’ interest in pure science, unlike their interest in new “apps,” seems to have waned. Have the nation’s scientific achievements and strengths also lessened? Indeed, scholars and politicians alike have begun to worry that U.S. science may be in decline.

If the United States were to lose its dominance in science, historians of science would be the last group to be surprised. Historically, the world center of science has shifted several times, from Renaissance Italy to England in the 17th century, to France in the 18th century, and to Germany in the 19th century, before crossing the Atlantic in the early 20th century to the United States. After examining the cyclical patterns of science centers in the world with historical data, Japanese historian of science Mitsutomo Yuasa boldly predicted in 1962 that “the scientific prosperity of [the] U.S.A., begun in 1920, will end in 2000.”

Needless to say, Yuasa’s prediction was wrong. By all measures, including funding, total scientific output, highly influential scientific papers, and Nobel Prize winners, U.S. leadership in science remains unparalleled today. Home to only 5% of the world’s total population, the United States can consistently claim responsibility for one-third to two-thirds of the world’s scientific activities and accomplishments. Present-day U.S. science is not a simple continuation of science as it was practiced earlier in Europe. Rather, it has several distinctive new characteristics: It employs a very large labor force; it requires a great deal of funding from both government and industry; and it resembles other professions such as medicine and law in requiring systematic training for entry and compensating for services with financial, as well as nonfinancial, rewards. All of these characteristics of modern science are the result of dramatic and integral developments in science, technology, industry, and education in the United States over the course of the 20th century. In the 21st century, however, a debate has emerged concerning the ability of the United States to maintain its world leadership in the future.

The debate involves two opposing views. The first view is that U.S. science, having fallen victim to a new, highly competitive, globalized world order, particularly to the rise of China, India, and other Asian countries, is now declining. Proponents of this alarmist view call for significantly more government investment in science, as stated in two reports issued by the National Academy of Sciences (NAS), the National Academy of Engineering, and the Institute of Medicine: Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future in 2007, and Rising Above the Gathering Storm: Rapidly Approaching Category 5 in 2010.

The second view is that if U.S. science is in trouble, this is because there are too many scientists, not too few. Newly trained scientists have glutted the scientific labor market and contribute low-cost labor to organized science but are unable to become independent and, thus, highly innovative. Proponents of the second view, mostly economists, are quick to point out that claims concerning a shortage of scientific personnel are often made by interest groups—universities, senior scientists, funding agencies, and industries that employ scientifically trained workers—that would benefit from an increased supply of scientists. This view is well articulated in two reports issued by the RAND Corporation in 2007 and 2008 in response to the first NAS report, and in economist Paula Stephan’s recent book, How Economics Shapes Science.

What do data reveal?

Which view is correct? In a 2012 book I coauthored with Alexandra Killewald, Is American Science in Decline?, we addressed this question empirically, drawing on as much available data as we could find covering the past six decades. After analyzing 18 large, nationally representative data sets, in addition to a wealth of published and Web-based materials, we concluded that neither view is wholly correct, though both have some merit.

Between the 1960s and the present, U.S. science has fared reasonably well on most indicators that we can construct. The following is a summary of the main findings reported in our book.

First, the U.S. scientific labor force, even excluding many occupations such as medicine that require scientific training, has grown faster than the general labor force. Census data show that the scientific labor force has increased steadily since the 1960s. In 1960, science and engineering constituted 1.3% of the total labor force of about 66 million. By 2007, that share had risen to 3.3% of a much larger labor force of about 146 million. Of course, between 1960 and 2007, the share of immigrants among scientists increased, at a time when all Americans were becoming better educated. As a result, the percentage of scientists among native-born Americans with at least a college degree has declined over time. However, diversity has improved, as women and non-Asian minorities have increased their representation among U.S. scientists.
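
A rough back-of-the-envelope calculation, using only the rounded figures above (the exact census counts differ), shows why this amounts to faster growth: the science and engineering workforce expanded more than fivefold while the overall labor force grew a bit more than twofold. The short Python sketch below reproduces the arithmetic; its inputs are the approximations quoted in the text, not exact census tabulations.

    # Back-of-the-envelope check using the rounded figures quoted in the text;
    # exact census counts will differ somewhat.
    labor_force_1960 = 66_000_000    # approximate total U.S. labor force, 1960
    labor_force_2007 = 146_000_000   # approximate total U.S. labor force, 2007
    se_share_1960 = 0.013            # science & engineering share of the labor force, 1960
    se_share_2007 = 0.033            # science & engineering share of the labor force, 2007

    se_1960 = labor_force_1960 * se_share_1960   # about 0.86 million S&E workers
    se_2007 = labor_force_2007 * se_share_2007   # about 4.8 million S&E workers

    print(f"S&E workers, 1960: {se_1960 / 1e6:.2f} million")
    print(f"S&E workers, 2007: {se_2007 / 1e6:.2f} million")
    print(f"S&E workforce growth factor: {se_2007 / se_1960:.1f}x")
    print(f"Total labor force growth factor: {labor_force_2007 / labor_force_1960:.1f}x")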

Second, despite perennial concerns about the performance of today’s students in mathematics and science, today’s U.S. schoolchildren are performing in these areas as well as or better than students in the 1970s. At the postsecondary level, there is no evidence of a decline in the share of graduates receiving degrees in scientific fields. U.S. universities continue to graduate large numbers of young adults well trained in science, and the majority of science graduates do find science-related employment. At the graduate level, the share of foreign students among recipients of science degrees has increased over time. More native-born women now receive science degrees than before, although native-born men have made no gains. Taken together, education data suggest that Americans are doing well, or at least no worse than in the past, at obtaining quality science education and completing science degrees.

Finally, we used a large number of indicators to track changes in society’s general attitudes toward science, including confidence in science, support for funding basic science, the prestige of scientists, and freshman interest in careers in scientific research. Those indicators all show that the U.S. public has remained overwhelmingly positive toward scientists and science in general. About 80% of Americans endorse federal funding for scientific research, even if it has no immediate benefits, and about 70% believe that the benefits of science outweigh the costs. These numbers have stayed largely unchanged over recent decades. Americans routinely express greater confidence in the leadership of the scientific community than in that of Congress, organized religion, or the press.

Is it possible that Americans support science even though they themselves have no interest in it? To measure Americans’ interest in science, we also analyzed all cover articles published in Newsweek magazine and all books on the New York Times Best Sellers List from 1950 to 2007. From these data, we again observe an overall upward trend in Americans’ interest in science.

Sources of anxiety

What, then, are the sources of anxiety about U.S. science? In Is American Science in Decline?, we identify three of them, two historical and one comparative. First, our analysis of earnings using data from the U.S. decennial censuses revealed that scientists’ earnings have grown very slowly, falling further behind those of other high-status professionals such as doctors and lawyers. This unfavorable trend is particularly pronounced for scientists at the doctoral level.

Second, scientists who seek academic appointments now face greater challenges. Tenure-track positions are in short supply relative to the number of new scientists with doctoral training seeking such positions. As a result, more and more young scientists are now forced to take temporary postdoctoral appointments before finding permanent jobs. Job prospects are particularly poor in biomedical science, which has been well supported by federal funding through the National Institutes of Health. The problem is that the increased spending is mainly in the form of research grants that enhance research labs’ ability to hire temporary research staff, whereas universities are reluctant to expand permanent faculty positions. Some new Ph.D.s in biomedical fields need to take on two or more postdoctoral or temporary positions before having a chance to find a permanent position. It is the poor job outlook for these new Ph.D.s and their relatively low earnings that has led some economists to argue that there is a glut of scientists in the United States.

Third, of course, the greatest source of anxiety concerning U.S. science has been the globalization of science, resulting in greater competition from other countries. Annual news releases reveal the mediocre performance of U.S. schoolchildren on international tests of math and science. The growth of U.S. production of scientific articles has slowed considerably over the past several decades compared with growth in other parts of the world, particularly East Asia. As a result, the share of world science contributed by the United States is dwindling.

But in some ways, the globalization of science is a result of U.S. science’s success. Science is a public good, and a global one at that. Once discovered, science knowledge is codified and then can be taught and consumed anywhere in the world. The huge success of U.S. science in the 20th century meant that scientists in many less developed countries, such as China and India, could easily build on the existing science foundation largely built by U.S. scientists and make new scientific discoveries. Internet communication and cheap air transportation have also minimized the importance of location, enabling scientists in less developed countries to have access to knowledge, equipment, materials, and collaborators in more developed countries such as the United States.

The globalization of science has also made its presence felt within U.S. borders. More than 25% of practicing U.S. scientists are immigrants, up from 7% in 1960. Almost half of students receiving doctoral degrees in science from U.S. universities are temporary residents. The rising share of immigrants among practicing scientists and engineers indicates that U.S. dependence on foreign-born and foreign-trained scientists has dramatically increased. Although most foreign recipients of science degrees from U.S. universities today prefer to stay in the United States, for both economic and scientific reasons, there is no guarantee that this will last. If the flow of foreign students to U.S. science programs should stop or dramatically decline, or if most foreign students who graduate with U.S. degrees in science should return to their home countries, this could create a shortage of U.S. scientists, potentially affecting the U.S. economy or even national security.

What’s happening in China?

Although discussions of science policy do not usually name a specific country as the competition, today’s discourse does tend to refer, albeit implicitly, to a single country: China. In 2009, national headlines revealed that students in Shanghai outscored their peers around the world in math, science, and reading on the Program for International Student Assessment (PISA), a test administered to 15-year-olds in 65 countries. In contrast, the scores of U.S. students were mediocre. Although U.S. students had performed similarly on these comparative tests for a long time, the 2009 PISA results had the unusual effect of sparking a national discussion of the proposition that the United States may soon fall behind China and other countries in science and technology. Secretary of Education Arne Duncan referred to the results as “a wake-up call.”

China is the world’s most populous country, with 1.3 billion people, and its economy grew at an annualized rate of 7.7% between 1978 and 2010. Other indicators also suggest that China has been developing its science and technology with the intention of narrowing the gap between itself and the United States. Activities in China indicate its inevitable rise as a powerhouse in science and technology, and it is important to understand what this means for U.S. science.

The Chinese government has spent large sums of money trying to upgrade Chinese science education and improve China’s scientific capability. It more than doubled the number of higher education institutions from 1,022 in 1998 to 2,263 in 2008 and upgraded about 100 elite universities with generous government funding. China’s R&D expenditure has been growing at 20% per year, benefitting both from the increase in gross domestic product (GDP) and the increase in the share of GDP spent on R&D. In addition, the government has devised various attractive programs, such as the Changjiang Scholars Program and the Thousand Talent Program, to lure expatriate Chinese-born scientists, particularly those working in the United States, back to work in China on a permanent or temporary basis.

The government’s efforts to improve science education seem to have paid off. China is now by far the world’s leader in bachelor’s degrees in science and engineering, with 1.1 million in 2010, more than four times the U.S. number. This large disparity reflects not only China’s dramatic expansion in higher education since 1999 but also the fact that a much higher percentage of Chinese students major in science and engineering, around 44% in 2010, compared to 16% in the United States. Of course, China’s population is much larger. Adjusting for population size differences, the two countries have similar proportions of young people with science and engineering bachelor’s degrees. China’s growth in the production of science and engineering doctoral degrees has been comparably dramatic, from only 10% of the U.S. total in 1993 to a level exceeding that in the United States by 18% in 2010. Of course, questions have been raised both in China and abroad about whether the quality of a Chinese doctoral degree is equivalent to that of a U.S. degree.
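To see the population adjustment concretely, here is a small illustrative calculation. It uses crude per-capita figures as a proxy (the book compares cohorts of young people, not total populations), and the U.S. degree count and both 2010 population figures are assumptions chosen to be consistent with the text, not numbers reported in the article.

    # Illustrative per-capita comparison. The U.S. degree count (~250,000) and the
    # 2010 population figures are assumptions consistent with the text above, not
    # data reported in the article.
    china_degrees = 1_100_000        # S&E bachelor's degrees, China, 2010 (from the text)
    us_degrees = 250_000             # assumed U.S. count ("more than four times" smaller)
    china_pop = 1_340_000_000        # approximate 2010 population of China
    us_pop = 309_000_000             # approximate 2010 population of the United States

    print(f"China: {1000 * china_degrees / china_pop:.2f} degrees per 1,000 people")
    print(f"U.S.:  {1000 * us_degrees / us_pop:.2f} degrees per 1,000 people")
    # Both work out to roughly 0.8 per 1,000, consistent with the claim that the
    # two countries have similar proportions once population size is taken into account.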

The impact of China’s heavy investment in scientific research is also unmistakable. Data from Thomson Reuters’ InCites and Essential Science Indicators databases indicate that China’s production of scientific articles grew at an annual rate of 15.4% between 1990 and 2011. In terms of total output, China overtook the United Kingdom in 2004, and Japan and Germany in 2005, and has since remained second only to the United States. The data also reveal that the quality of papers produced by Chinese scientists, measured by citations, has increased rapidly. China’s production of highly cited articles achieved parity with Germany and the United Kingdom around 2009 and reached 31% of the U.S. level in 2011.
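
For a sense of scale, a 15.4% annual growth rate sustained from 1990 to 2011 compounds to roughly a twentyfold increase in output. The brief sketch below shows the compounding; its only inputs are the growth rate and the years quoted above, and the result is expressed relative to the 1990 level rather than in article counts.

    # What a 15.4% annual growth rate implies when compounded over 1990-2011.
    rate = 0.154
    years = 2011 - 1990                      # 21 years
    growth_factor = (1 + rate) ** years      # cumulative multiple of the 1990 output
    print(f"Cumulative growth over {years} years: about {growth_factor:.0f}x the 1990 output")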

Four factors favor China’s rise in science: a large population and human capital base, a large diaspora of Chinese-origin scientists, a culture of academic meritocracy, and a centralized government willing to invest in science. However, China’s rise in science also faces two major challenges: a rigid, top-down administration system known for misallocating resources, and rising allegations of scientific misconduct in a system where major decisions about funding and rewards are made by bureaucrats rather than peer scientists. Given these features, Chinese science is likely to do well in areas where research output depends on material and human resources; that is, extensions of proven research lines rather than truly innovative advances into uncharted territories. Given China’s heavy emphasis on its economic development, priority is also placed on applied rather than basic research. These characteristics of Chinese science mean that U.S. scientists could benefit from collaborating with Chinese scientists in complementary and mutually beneficial ways. For example, U.S. scientists could design studies to be tested in well-equipped and well-staffed laboratories in China.

Science in a new world order

Science is now entering a new world order and may have changed forever. In this new world order, U.S. science will remain a leader but not in the unchallenged position of dominance it has held in the past. In the future, there will no longer be one major world center of science but multiple centers. As more scientists in countries such as China and India actively participate in research, the world of science is coalescing into a single global community.

A more competitive environment on the international scene today does not necessarily mean that U.S. science is in decline. That science is getting better in other countries does not mean that it is getting worse in the United States. One can imagine U.S. science as a racecar driver, leading the pack and for the most part maintaining speed, but anxiously checking the rearview mirror as other cars gain in the background, terrified of being overtaken. Science, however, is not an auto race with a clear finish line, nor does it have only one winner. On the contrary, science has a long history as the collective enterprise of the entire human race. In most areas, scientists around the world have learned from U.S. scientists and vice versa. In some ways, U.S. science may have been too successful for its own good, as its advancements have improved the lives of people in other nations, some of which have become competitors for scientific dominance.

Hence, globalization is not necessarily a threat to the wellbeing of the United States or its scientists. As more individuals and countries participate in science, the scale of scientific work increases, leading to possibilities for accelerated advancements. World science may also benefit from fruitful collaborations of scientists in different environments and with different perspectives and areas of expertise. In today’s ever more competitive globalized science, the United States enjoys the particular advantage of having a social environment that encourages innovation, values contributions to the public good, and lives up to the ideal of equal opportunity for all. This is where the true U.S. advantage lies in the long run. This is also the reason why we should remain optimistic about U.S. science in the future.

Recommended reading

J. M. Diamond, Guns, Germs and Steel: The Fates of Human Societies (New York: W.W. Norton & Company, 1999).

Thomas Friedman, The World Is Flat: A Brief History of the Twenty-First Century (New York: Farrar, Straus, and Giroux, 2005).

Titus Galama and James R. Hosek, eds., Perspectives on U.S. Competitiveness in Science and Technology (Santa Monica, CA: RAND, 2007).

Titus Galama and James R. Hosek, eds., U.S. Competitiveness in Science and Technology (Santa Monica, CA: RAND, 2008).

Claudia Dale Goldin and Lawrence F. Katz, The Race between Education and Technology (Cambridge, MA: Belknap Press of Harvard University Press, 2008).

Alexandra Killewald and Yu Xie, “American Science Education in its Global and Historical Contexts,” Bridge (Spring 2013): 15-23.

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future (Washington, DC: National Academies Press, 2007).

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Rapidly Approaching Category 5 (Washington, DC: National Academies Press, 2010).

Organisation for Economic Co-operation and Development, PISA 2009 Results: Executive Summary (2010); available online at www.oecd.org/pisa/pisaproducts/46619703.pdf.

Paula Stephan, How Economics Shapes Science (Cambridge, MA: Harvard University Press, 2012).

Yu Xie and Alexandra A. Killewald, Is American Science in Decline? (Cambridge, MA: Harvard University Press, 2012).


Yu Xie is Otis Dudley Duncan Distinguished University Professor of Sociology, Statistics, and Public Policy at the University of Michigan. This article is adapted from the 2013 Henry and Bryna David Lecture, which he presented at the National Academy of Sciences.