
Military Innovation and the Prospects for Defense-Led Energy Innovation

EUGENE GHOLZ

Although the Department of Defense has long been the global innovation leader in military hardware, that capability is not easily applied to energy technology

Almost all plans to address climate change depend on innovation, because the alternatives by themselves—reducing greenhouse gas emissions via the more efficient use of current technologies or by simply consuming less of everything—are either insufficient, intolerable, or both. Americans are especially proud of their history of technology leadership, but in most sectors of the economy, they assume that private companies, often led by entrepreneurs and venture capitalists, will furnish the new products and processes. Unfortunately, energy innovation poses exceptionally severe collective action problems that limit the private sector’s promise. Everyone contributes emissions, but no one contributes sufficient emissions that a conscious effort to reduce them will make a material difference in climate change, so few people try hard. Without a carbon tax or emissions cap, most companies have little or no economic incentive to reduce emissions except as a fortuitous byproduct of other investments. And the system of production, distribution, and use of energy creates interdependencies across companies and countries that limit the ability of any one actor to unilaterally make substantial changes.

In principle, governments can overcome these problems through policies to coordinate and coerce, but politicians are ever sensitive to imposing costs on their constituents. They avoid imposing taxes and costly regulations whenever possible. Innovation presents the great hope to solve problems at reduced cost. In the case of climate change, because of the collective action problems, government will have to lead the innovative investment.

Fortunately, the U.S. government has a track record of success with developing technologies to address another public good. Innovation is a hallmark of the U.S. military. The technology that U.S. soldiers, sailors, and airmen bring to war far outclasses adversaries’. Even as Americans complain about challenges of deploying new military equipment, always wishing that technical solutions could do more and would arrive faster to the field, they also take justifiable pride in the U.S. defense industry’s routine exploitation of technological opportunities. Perhaps that industry’s technology savvy could be harnessed to develop low-emissions technologies. And perhaps the Defense Department’s hefty purse could purchase enough to drive the innovations down the learning curve, so that they could then compete in commercial markets as low-cost solutions, too.

That potential has attracted considerable interest in defense-led energy innovation. In fact, in 2008, one of the first prominent proposals to use defense acquisition to reduce energy demand came from the Defense Science Board, a group of expert advisors to the Department of Defense (DOD) itself. The DSB reported, “By addressing its own fuel demand, DoD can serve as a stimulus for new energy efficiency technologies…. If DoD were to invest in technologies that improved efficiency at a level commensurate with the value of those technologies to its forces and warfighting capability, it would probably become a technology incubator and provide mature technologies to the market place for industry to adopt for commercial purposes.” Various think tanks took up the call from there, ranging from the CNA Corporation (which includes the Center for Naval Analyses) to the Pew Charitable Trusts’ Project on National Security, Energy and Climate. Ultimately, the then–Deputy Assistant to the President for Energy and Climate Change, Heather Zichal, proclaimed her hope for defense-led energy innovation on the White House blog in 2013.

These advocates hope not only to use the model of successful military innovation to stimulate innovation for green technologies but to actually use the machinery of defense acquisition to implement their plan. They particularly hope that the DOD will use its substantial procurement budget to pull the development of new energy technologies. Even when the defense budget faces cuts as the government tries to address its debt problem, other kinds of government discretionary investment are even more threatened, making defense ever more attractive to people who hope for new energy technologies.

The U.S. government has in part adopted this agenda. The DOD and Congress have created a series of high-profile positions that include an Assistant Secretary of Defense for Operational Energy Plans and Programs within the Pentagon’s acquisition component. No one in the DOD’s leadership wants to see DOD investment diverted from its primary purpose of providing for American national security, but the opportunity to address two important policy issues at the same time is very appealing.

The appeal of successful military innovation is seductive, but the military’s mixed experience with high-tech investment should restrain some of the exuberance about prospects for energy innovation. We know enough about why some large-scale military innovation has worked, while some has not, to predict which parts of the effort to encourage defense-led energy innovation are likely to be successful; enough to refine our expectations and target our investment strategies. This article carefully reviews the defense innovation process and its implications for major defense-led energy innovation.

Defense innovation works because of a particular relationship between the DOD and the defense industry that channels investment toward specific technology trajectories. Successes on “nice-to-have” trajectories, from DOD’s perspective, are rare, because the leadership’s real interest focuses on national security. Civilians are well aware of the national security and domestic political risks of even the appearance of distraction from core warfighting missions. When it is time to make hard choices, DOD leadership will emphasize performance parameters directly related to the military’s critical warfighting tasks, as essentially everyone agrees it should. Even in the relatively few cases in which investment to solve the challenges of the energy sector might directly contribute to the military component of the U.S. national security strategy, advocates will struggle to harness the defense acquisition apparatus. But a focused understanding of how that apparatus works will make their efforts more likely to succeed.

Jamey Stillings #26, 15 October 2010. Fine art archival print. Aerial view over the future site of the Ivanpah Solar Electric Generating System prior to full commencement of construction, Mojave Desert, CA, USA.

Jamey Stillings

Photographer Jamey Stillings’ fascination with the human-altered landscape and his concerns for environmental sustainability led him to document the development of the Ivanpah Solar Power Facility. Stillings took 18 helicopter flights to photograph the plant, from its groundbreaking in October 2010 through its official opening in February 2014. Located in the Mojave Desert of California, Ivanpah Solar is the world’s largest concentrated solar thermal power plant. It spans nearly 4,000 acres of public land and deploys 173,500 heliostats (347,000 mirrors) to focus the sun’s energy on three towers, generating 392 megawatts of electricity, enough to power 140,000 homes.

The photographs in this series formed the basis for Stillings’ current project, Changing Perspectives on Renewable Energy Development, an aerial and ground-based photographic examination of large-scale renewable energy initiatives in the American West and beyond.

Stillings’ three-decade career spans documentary, fine art, and commissioned projects. Based in Santa Fe, New Mexico, he holds an MFA in photography from Rochester Institute of Technology, New York. His work is in the collections of the Library of Congress, Washington, DC; the Museum of Fine Arts, Houston; and the Nevada Museum of Art, Reno, among others, and has been published in The New York Times Magazine, Smithsonian, and fotoMagazin. His second monograph, The Evolution of Ivanpah Solar, will be published in 2015 by Steidl.

—Alana Quinn

Jamey Stillings #4546, 28 July 2011. Fine art archival print. Aerial overview of Solar Field 1 before heliostat construction, looking northeast toward Primm, NV.

How weapons innovation has succeeded

Defense acquisition is organized by programs, the largest and most important of which are almost always focused on developing a weapons system, although sometimes the key innovations that lead to improved weapons performance come in a particular component. For example, a new aircraft may depend on a better jet engine or avionics suite, but the investment is usually organized as a project to develop a fighter rather than one or more key components. Sometimes the DOD buys major items of infrastructure such as a constellation of navigation satellites, but those systems’ performance metrics are usually closely tied to weapons’ performance; for example, navigation improves missile accuracy, essential for modern warfare’s emphasis on precision strike. Similarly, a major improvement in radar can come as part of a weapons system program built around that new technology, as the Navy’s Aegis battle management system incorporated the SPY-1 phased array radar on a new class of ships. To incorporate energy innovation into defense acquisition, the DOD and the military services would similarly add energy-related performance parameters to their programs, most of which are weapons system programs. The military’s focus links technology to missions. Each project relies on a system of complex interactions of military judgment, congressional politics, and defense industry technical skill.

Jamey Stillings #8704, 27 October 2012. Fine art archival print. Aerial view showing delineation of future solar fields around an existing geologic formation.

Defense innovation has worked best when customers—DOD and the military services—understand the technology trajectory that they are hoping to pull and when progress along that technology trajectory is important to the customer organization’s core mission. Under those circumstances, the customer protects the research effort, provides useful feedback during the process, adequately (or generously) funds the development, and happily buys the end product, often helping the developer appeal to elected leaders for funding. The alliance between the military customer and private firms selling the innovation can overcome the tendency to free ride that plagues investment in public goods such as defense and energy security.

Demand pull to develop major weapons systems is not the only way in which the United States has innovated for defense, but it is the principal route to substantial change. At best, other innovation dynamics, especially technology-push efforts that range from measured investments to support manufacturing scale-up to the Defense Advanced Research Projects Agency’s drive for leap-ahead inventions, tend to yield small improvements in the performance of deployed systems in the military’s inventory. More often, because technological improvement itself is rarely sufficient to create demand, inventions derived from technology-push R&D struggle to find a home on a weapons system: Program offices, which actually buy products and thereby create the demand that justifies building production-scale factories, tend to feel that they would have funded the R&D themselves, if the invention were really needed to meet their performance requirements. Bolting on a new technology developed outside the program can also add technological risk—what if the integration does not work smoothly?—and program managers shun unnecessary risk. The partial exceptions are inventions such as stealth, where the military quickly connected the new technology to high-priority mission performance.

But most technology-push projects that succeed yield small-scale innovations that can matter a great deal at the level of local organizations but do not attract sufficient resources and political attention to change overall national capabilities. In energy innovation, an equivalent example would be a project to develop a small solar panel to contribute to electricity generation at a remote forward operating base, the sort of boon to warfighters that has attracted some attention during the Afghanistan War but that contributes to a relatively low-profile acquisition program (power generation as opposed to, say, a new expeditionary fighting vehicle) and will not even command the highest priority for that project’s program manager (who must remain focused on baseload power generation rather than solar augmentation).

In the more important cases of customer-driven military innovations, military customers are used to making investment decisions based on interests other than the pure profit motive. Defense acquisition requirements derive from leaders’ military judgment about the strategic situation, and the military gets the funding for needed research, development, and procurement from political leaders rather than profit-hungry investors. This process, along with the military’s relatively large purse as compared to even the biggest commercial customers, is precisely what attracts the interest of advocates of defense-led energy innovation: Because of the familiar externalities and collective action problems in the energy system, potential energy innovations often do not promise a rate of return sufficient to justify the financial risk of private R&D spending, but the people who make defense investments do not usually calculate financial rates of return anyway.

A few examples demonstrate the importance of customer preferences in military innovation. When the Navy first started its Fleet Ballistic Missile program, its Special Projects Office had concepts to give the Navy a role in the nuclear deterrence mission but not much money initially to develop and build the Polaris missiles. Lockheed understood that responsiveness was a key trait in the defense industry, so the company used its own funds initially to support development to the customer’s specifications. As a result, Lockheed won a franchise for the Navy’s strategic systems that continues today in Sunnyvale, California, more than 50 years later.

In contrast, at roughly the same time as Lockheed’s decision to emphasize responsiveness, the Curtiss-Wright Corporation, then a huge military aircraft company, attempted to use political channels and promises of great performance to sell its preferred jet engine design. However, Air Force buyers preferred the products of companies that followed the customer’s lead, and Curtiss-Wright fell from the ranks of leading contractors even in a time of robust defense spending. Today, after great effort and years in the wilderness, the company has rebuilt to the stature of a mid-tier defense supplier with a name recognized by most (but not all) defense industry insiders.

When it is time to make hard choices, DOD leadership will emphasize performance parameters directly related to the military’s critical warfighting tasks, as essentially everyone agrees it should.

Jamey Stillings #9712, 21 March 2013. Fine art archival print. Aerial view of installed heliostats.

The contrasting experiences of Lockheed and Curtiss-Wright show the crucial importance of following the customer’s lead in the U.S. defense market. Entrepreneurs can bring what they think are great ideas to the DOD, including ideas for great new energy technologies, but the department tends to put its money where it wants to, based on its own military judgment.

Although the U.S. military can be a difficult customer if the acquisition executives lose faith in a supplier’s responsiveness, the military can also be a forgiving customer if firms’ good-faith efforts do not yield products that live up to all of the initial hype, at least for programs that are important to the Services’ core missions. A technology occasionally underperforms to such an extent that a program is cancelled (for example, the ill-fated Sergeant York self-propelled antiaircraft gun of the 1980s) but in many cases, the military accepts equipment that does not meet its contractual performance specifications. The Services then either nurture the technology through years of improvements and upgrades or discover that the system is actually terrific despite failing to meet the “required” specs. The B-52 bomber is perhaps the paradigm case: It did not meet its key performance specifications for range, speed, or payload, but it turned out to be such a successful aircraft that it is still in use 50 years after its introduction and is expected to stay in the force for decades to come. The Army similarly stuck with the Bradley Infantry Fighting Vehicle through a difficult development history. Trying hard and staying friendly with the customer is the way to succeed as a defense supplier, and because the military is committed to seeking technological solutions to strategic problems, major defense contractors have many opportunities to innovate.

This pattern stands in marked contrast to private and municipal government investment in energy infrastructure, where underperformance in the short term can sour investors on an idea for decades. The investors may complete the pilot project, because municipal governments are not good at cutting their losses after the first phase of costs are sunk (though corporations may be more ruthless, for example in GM’s telling of the story of the EV-1 electric car). But almost no one else wants to risk repeating the experience, even if project managers can make a reasonable case that the follow-on project would perform better as a result of learning from the first effort.

And it’s the government—so politicians play a role

Of course, military desire for a new technology is not sufficient by itself to get a program funded in the United States. Strong political support from key legislators has also been a prerequisite for technological innovation. Over the years, the military and the defense contractors have learned to combine performance specifications with political logic. The best way to attract political support is to promise heroic feats of technological progress, because the new system should substantially outperform the equipment in the current American arsenal, even if that previous generation of equipment was only recently purchased at great expense. The political logic simply compounds the military’s tendency for technological optimism, creating tremendous technology pull.

In fact, Congress would not spend our tax dollars on the military without some political payoff, because national security poses a classic collective action problem. All citizens benefit from spending on national defense whether they help pay the cost or not, so the government spends tax dollars rather than inviting people to voluntarily contribute. But taxes are not popular, and raising money to provide public goods is a poor choice for a politician unless he can find a specific political benefit from the spending in addition to furthering the diffuse general interest.

Military innovations’ political appeal prevents the United States from underinvesting in technological opportunities. Sometimes that appeal comes from ideology, such as the “religion” that supports missile defense. Sometimes the appeal comes from an idiosyncratic vision: for example, a few politicians like Sen. John Warner contributed to keeping unmanned aerial vehicle (UAV) programs alive before 9/11, before the War on Terror made drone strikes popular. And sometimes the appeal comes from the ability to feed defense dollars to companies in a legislator’s district. In the UAV case, Rep. Norm Dicks, who had many Boeing employees in his Washington State district, led political efforts to continue funding UAV programs after the end of the Cold War.

Jamey Stillings #7626, 4 June 2012. Fine art archival print. Workers install a heliostat on a pylon. Background shows assembled heliostats in “safe” or horizontal mode. Mirrors reflect the nearby mountains.

This need for political appeal presents a major challenge to advocates of defense-led energy innovation, because the political consensus for energy innovation is much weaker than the one for military innovation. Some prominent political leaders, notably Sen. John McCain, have very publicly questioned whether it is appropriate for the DOD to pay attention to energy innovation, which they view as a distraction from the DOD’s primary interest in improved warfighting performance. McCain wrote a letter to the Secretary of the Navy, Ray Mabus, in July 2012, criticizing the Navy’s biofuels initiative by pointedly reminding Secretary Mabus, “You are the Secretary of the Navy, not the Secretary of Energy.” Moreover, although almost all Americans agree that the extreme performance of innovative weapons systems is a good thing (Americans expect to fight with the very best equipment), government support for energy innovation, especially energy innovation intended to reduce greenhouse gas emissions, faces strong political headwinds. In some quarters, ideological opposition to policies intended to reduce climate change is as strong as the historically important ideological support for military investment in areas like missile defense.

Jamey Stillings #10995, 4 September 2013. Fine art archival print. Solar flux testing, Solar Field 1.

The defense industry also provides a key link in assembling the political support for military innovation that may be hard to replicate for defense-led energy innovation. The prime contractors take charge of directly organizing district-level political support for the defense acquisition budget. To be funded, a major defense acquisition project needs to fit into a contractor-led political strategy. The prime contractors, as part of their standard responsiveness to their military customers, almost instantly develop a new set of briefing slides to tout how their products will play an essential role in executing whatever new strategic concept or buzzword comes from the Pentagon. And their lobbyists will make sure that all of the right congressional members and staffers see those slides. But those trusted relationships are built on understanding defense technology and on connections to politicians interested in defense rather than in energy. There may be limits to the defense lobbyists’ ability to redeploy as supporters of energy innovation.

Jamey Stillings #7738, 4 June 2012. Fine art archival print. View of construction of the dry cooling system of Solar Field 1.

Other unusual features of the defense market reinforce the especially strong and insular relationship between military customers and established suppliers. Their relationship is freighted with strategic jargon and security classification. Military suppliers are able to translate the language in which the military describes its vision of future combat into technical requirements for systems engineering, and the military trusts them to temper optimistic hopes with technological realism without undercutting the military’s key objectives. Military leaders feel relatively comfortable informally discussing their half-baked ideas about the future of warfare with established firms, ideas that can flower into viable innovations as the military officers go back and forth with company technologists and financial officers. That iterative process has given the U.S. military the best equipment in the world in the past, but it tends to limit the pool of companies to the usual prime contractors: Lockheed Martin, Boeing, Northrop Grumman, Raytheon, General Dynamics, and BAE Systems. Those companies’ core competency is in dealing with the unique features of the military customer.

Jargon and trust are not the only important features of that customer-supplier relationship. Acquisition regulations also specify high levels of domestic content in defense products, regardless of the cost; that a certain fraction of each product will be built by small businesses and minority- and women-owned companies, regardless of their ability to win subcontracts in fair and open competition; and that defense contractors will comply with an extremely intrusive and costly set of audit procedures to address the threat of perceived or very occasionally real malfeasance. These features of the defense market cannot be wished away by reformers intent on reducing costs: Each part of the acquisition system has its defenders, who think that the social goal or protection from scandal is worth the cost. The defense market differs from the broader commercial market in the United States on purpose, not by chance. Majorities think that the differences are driven by good reasons.

The implication is that the military has to work with companies that are comfortable with the terms and conditions of working for the government. That constraint limits the pool of potential defense-led energy innovators. It would also hamper the ability to transfer any defense-led energy innovations to the commercial market, because successful military innovations have special design features and extra costs built into their value chain.

In addition to their core competency in understanding the military customer, defense firms, like most other companies, also have technological core competencies. In the 1990s and 2000s, it was fashionable in some circles to call the prime contractors’ core competency “systems integration,” as if that task could be performed entirely independently from a particular domain of technological expertise. In one of the more extreme examples, Raytheon won the contract as systems integrator for the LPD-17 class of amphibious ships, despite its lack of experience as a shipbuilder. Although Raytheon had for years led programs to develop highly sophisticated shipboard electronics systems, the company’s efforts to lead the team building the entire ship contributed to an extremely troubled program. In this example, company and customer both got carried away with their technological optimism and their emphasis on contractor responsiveness. In reality, the customer-supplier relationship works best when it calls for the company to develop innovative products that follow an established trajectory of technological performance, where the supplier has experience and core technical capability. Defense companies are likely to struggle if they try to contribute to technological trajectories related to energy efficiency or reduced greenhouse gas emissions, trajectories that have not previously been important in defense acquisition.

Jamey Stillings #11060, 4 September 2013. Fine art archival print. View north of Solar Fields 2 and 3.

That is not to say that the military cannot introduce new technological trajectories into its acquisition plans. In fact, the military’s emphasis on its technological edge has explicitly called for disruptive innovation from time to time, and the defense industry has responded. For example, the electronics revolution involved huge changes in technology, shifting from mechanical to electrical devices and from analog to digital logic, requiring support from companies with very different technical core competencies. Startup companies defined by their intellectual property, though, had little insight (or desire) to figure out the complex world of defense contracting—the military jargon, the trusted relationships, the bureaucratic red tape, and the political byways—so they partnered with established prime contractors. Disruptive innovators became subcontractors, formed joint ventures, or sold themselves to the primes. The trick is for established primes to serve as interfaces and brokers to link the military’s demand pull with the right entrepreneurial companies with skills and processes for the new performance metrics. Recently, some traditional aerospace prime contractors, led by Boeing and Northrop Grumman, have used this approach to compete in the market for unmanned aerial vehicles. Perhaps they could do the same in the area of energy innovation.

What the military customer wants

Given the pattern of customer-driven innovation in defense, the task confronting advocates of defense-driven energy innovation seems relatively simple: Inject energy concerns into the military requirements process. If they succeed, then the military innovation route might directly address key barriers that hamper the normal commercial process of developing energy technologies. With the military’s interest, energy innovations might find markets that promise a high enough rate of return to justify the investment, and energy companies might convince financiers to stick with projects through many lean years and false starts before they reach technological maturity, commercial acceptance, and sufficient scale to earn profits.

The first step is to understand the customers’ priorities. From the perspective of firms that actually develop and sell new defense technologies, potential customers include the military services with their various components, each with a somewhat different level of interest in energy innovation.

Military organizations decide the emphasis in the acquisition budget. They make the case, ideally based on military professional judgment, for the kinds of equipment the military needs most. They also determine the systems’ more detailed requirements, such as the speed needed by a front-line fighter aircraft and the type(s) of fuel that aircraft should use. They may, of course, turn out to be wrong: Strategic threats may suddenly change, some technological advantages may not have the operational benefits that military leaders expected, or other problems could emerge in their forecasts or judgments. Nevertheless, these judgments are extremely influential in defining acquisition requirements. Admitting uncertainty about requirements often delays a program: Projects that address a “known” strategic need get higher priority from military leaders and justify congressional spending more easily.

Not surprisingly, military buyers almost always want a lot of things. When they set the initial requirements, before the budget and technological constraints of actual program execution, the list of specifications can grow very long. Even though the process in principle recognizes the need for tradeoffs, there is little to force hard choices early in the development of a new military technology. Adding an energy-related requirement would not dramatically change the length of the list. But when the real spending starts and programs come up for evaluation milestones, the Services inevitably need to drop some of the features that they genuinely desired. Relevance to the organizations’ critical tasks ultimately determines the emphasis placed on different performance standards during those difficult decisions. Even performance parameters that formally cannot be waived, like those specified in statute, may face informal pressure for weak enforcement. Programs can sometimes get a “Gentleman’s C” that allows them to proceed, subordinating a goal that the buying organization thinks is less important.

Energy technology policy advocates looking for a wealthy investor to transform the global economy probably ask too much of the DOD.

For example, concerns about affordability and interoperability with allies’ systems have traditionally received much more rhetorical emphasis early in programs’ lives than actual emphasis in program execution. When faced with the question of whether to put the marginal dollar into making the F-22 stealthy and fast or into giving the F-22 extensive capability to communicate, especially with allies, the program office not surprisingly emphasized the former key performance parameters rather than the latter nice feature.

Given that military leaders naturally emphasize performance that responds directly to strategic threats, and that they are simultaneously being encouraged by budget austerity to raise the relative importance of affordability in defense acquisition decisions, energy performance seems more likely to end up like interoperability than like stealth in the coming tradeoff deliberations. In a few cases, the energy-related improvements will directly improve combat performance or affordability, too, but those true “win-win” solutions are not very common. If they were, there would be no appeals for special priority for energy innovation.

The recent case of the ADVENT jet engine program shows the difficulty. As the military begins procurement of the F-35 fighter for the Air Force, Navy, and Marine Corps as well as for international sales, everyone agrees that having two options for the engine would be nice. If Pratt & Whitney’s F135 engine runs into unexpected production or operational problems, a second engine would be available as a backup, and competition between the two engines would presumably help control costs and might stimulate further product improvement. However, the military decided that the fixed cost of paying GE to develop and manufacture a second engine would be too high to be justified even for a market as enormous as the F-35 program. The unofficial political compromise was to start a public-private partnership with GE and Rolls-Royce called ADVENT, which would develop the next generation of fighter engine that might compete to get onto F-35 deliveries after 2020. ADVENT’s headline target for performance improvement is a 25% reduction in specific fuel consumption, which would reduce operating costs and, more important, would increase the F-35’s range and its ability to loiter over targets, directly contributing to its warfighting capabilities, especially in the Pacific theater, where distances between bases and potential targets are long. Although this increase in capability seems particularly sensible, given the announced U.S. strategy of “rebalancing” our military toward Asia, the Air Force has struggled to come up with its share of funding for the public-private partnership and has hesitated to prepare for a post-2020 competition between the new engine and the now-established F135. The Air Force may have enough to worry about trying to get the first engine through test and evaluation, and paying the fixed costs of a future competitor still seems like a luxury in a time of budget constraint. Countless potential energy innovations have much weaker strategic logic than the ADVENT engine, and if ADVENT has trouble finding a receptive buyer, the others are likely to have much more trouble.
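
To see why specific fuel consumption matters so much for range and loiter time, a rough illustration (not from the article) based on the standard Breguet range equation helps, holding all other parameters fixed:

\[
R = \frac{V}{c}\,\frac{L}{D}\,\ln\!\left(\frac{W_i}{W_f}\right) \quad\Longrightarrow\quad R \propto \frac{1}{c},
\]

so cutting specific fuel consumption c by 25% (to 0.75c) would stretch range on the same fuel load by roughly 1/0.75 − 1 ≈ 33%. The exact gain for any aircraft depends on its mission profile, but the back-of-the-envelope arithmetic shows why a more efficient engine translates directly into combat reach over Pacific distances.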

Of course, military culture also offers some hopeful points for the energy innovation agenda. For example, even if energy innovation adds complexity to military logistics in managing a mix of biofuels, or generating and storing distributed power rather than using standardized large-capacity diesel generators, the military is actually good at dealing with complexity. The Army has always moved tons of consumables and countless spare parts to the front to feed a vast organization of many different communities (infantry, armor, artillery, aviation, etc.). The Navy’s power projection capability is built on a combination of carefully planning what ships need to take with them with flexible purchasing overseas and underway replenishment. The old saw that the Army would rather plan than fight may be an exaggeration, but it holds more than a grain of truth, because the Army is genuinely good at planning. More than most organizations, the U.S. military is well prepared to deal with the complexity that energy innovation and field experimentation will inject into its routines. Even if the logistics system seems Byzantine and inefficient, the military’s organizational culture does not have antibodies against the complexity that energy innovation might bring.

Jamey Stillings #11590, 5 September 2013. Fine art archival print. Solar flux testing, Solar Field 3.

Who will support military-led innovation?

The potential for linking energy innovation to the DOD’s core mission seems especially important and exciting right now, because of the recent experience at war, and even more than that, because the recent wars happen to have involved a type of fighting with troops deployed to isolated outposts far from their home bases, in an extreme geography that stressed the logistics system. But as the U.S. effort in Afghanistan draws down, energy consumption in operations will account for less of total energy consumption, meaning that operational energy innovations will have less effect on energy security. More important, operational energy innovations will be of less interest to the military customers, who, according to the 2012 Strategic Guidance, are not planning for a repeat of such an extreme situation as the war in Afghanistan. Even if reality belies their expectations (after all, they did not expect to deploy to Afghanistan in 2001, either), acquisition investments follow the ex ante plans, not the ex post reality.

Specific military organizations that have an interest in preparing to fight with a light footprint in austere conditions may well continue the operational energy emphasis of the past few years. The good news for advocates of military demand pull for energy innovation is that special operations forces are viewed as the heroes of the recent wars, making them politically popular. They also have their own budget lines that are less likely to be swallowed by more prosaic needs such as paying for infrastructure at a time of declining defense budgets. While the conventional military’s attention moves to preparation against a rising near-peer competitor in China (a possible future, if not the only one, for U.S. strategic planning), special operations may still want lightweight powerful batteries and solar panels to bring power far off the grid. Even if a lot of special operations procurement buys custom jobs for highly unusual missions, the underlying research to make special operations equipment may also contribute to wider commercial uses such as electric cars and distributed electricity generation, if not to other challenges like infrastructure-scale energy storage and grid integration of small-scale generators.

Jamey Stillings #9395, 21 March 2013. Fine art archival print. Sunrise, view to the southeast of Solar Fields 3, 2, and 1.

Working with industry for defense-led energy innovation requires treading a fine line. Advocates need to understand the critical tasks facing specific military organizations, meaning that they have to live in the world of military jargon, strategic thinking, and budget politics. At the same time, the advocates need to be able to reach nontraditional suppliers who have no interest in military culture but are developing technologies that follow performance trajectories totally different from those of the established military systems. More likely, it will not be the advocates who will develop the knowledge to bridge the two groups, their understandings of their critical tasks, and the ways they communicate and contract. It will be the DOD’s prime contractors, if their military customers want them to respond to a demand for energy innovation.

Defense really does need some new energy technologies, ranging from fuel-efficient jet engines to easily rechargeable lightweight batteries, and the DOD is likely to find some money for particular technologies. Those technologies may also make a difference for the broader energy economy. But energy technology policy advocates looking for a wealthy investor to transform the global economy probably ask too much of the DOD. Military innovations that turn out to have huge commercial implications—innovations such as the Internet and the Global Positioning System—do not come along very often, and it takes decades before their civilian relatives are well understood and widely available. The military develops these products because of its own internal needs, driven by military judgment, congressional budget politics, and the core competencies of defense-oriented industry.

In a 2014 report, the Pew Project on National Security, Energy and Climate Change blithely discussed the need to “chang[e] the [military] culture surrounding how energy is generated and used….” Trying to change the way the military works drives into the teeth of military and political resistance to defense-led energy innovation. Changing the culture might also undermine the DOD’s ability to innovate; after all, one of the key reasons why Pew and others are interested in using the defense acquisition apparatus for energy innovation is that mission-focused technology development at the DOD has been so successful in the past. Better to focus defense-led energy innovation efforts on projects that actually align with military missions rather than stretching the boundaries of the concept and weakening the overall effort.

Recommended reading

Thomas P. Erhard, Air Force UAVs: The Secret History (Arlington, VA: Mitchell Institute for Airpower Studies, July 2010).

Eugene Gholz, “Eisenhower versus the Spinoff Story: Did the Rise of the Military-Industrial Complex Hurt or Help America’s Commercial Competitiveness?” Enterprise and Society 12, no. 1 (March 2011).

Dwight R. Lee, “Public Goods, Politics, and Two Cheers for the Military-Industrial Complex,” in Robert Higgs, ed., Arms, Politics, and the Economy: Historical and Contemporary Perspectives (New York, NY: Holmes & Meier, 1990), pp. 22–36.

Thomas L. McNaugher, New Weapons, Old Politics: America’s Military Procurement Muddle (Washington, DC: Brookings Institution, 1989).

David C. Mowery, “Defense-related R&D as a model for ‘Grand Challenges’ technology policies,” Research Policy 41, no. 10 (December 2012).

Report of the Defense Science Board Task Force on DoD Energy Strategy: “More Fight–Less Fuel” (Washington, DC: Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics, February 2008).

Harvey M. Sapolsky, Eugene Gholz, and Caitlin Talmadge, US Defense Politics: The Origins of Security Policy (London, UK: Routledge, Revised and Expanded 2nd edition, 2013).

Eugene Gholz (egholz@alum.mit.edu) is an associate professor at the LBJ School of Public Affairs of The University of Texas at Austin.

No Time for Pessimism about Electric Cars

JOHN D. GRAHAM
JOSHUA CISNEY
SANYA CARLEY
JOHN RUPP

The national push to adopt electric cars should be sustained until at least 2017, when a review of federal auto policies is scheduled.

A distinctive feature of U.S. energy and environmental policy is a strong push to commercialize electric vehicles (EVs). The push began in the 1990s with California’s Zero Emission Vehicle (ZEV) program, but in 2008 Congress took the push nationwide through the creation of a $7,500 consumer tax credit for qualified EVs. In 2009 the Obama administration transformed a presidential campaign pledge into an official national goal: putting one million plug-in electric vehicles on the road by 2015.

A variety of efforts has promoted commercialization of EVs. Through a joint rulemaking, the Department of Transportation and the Environmental Protection Agency are compelling automakers to surpass a fleet-wide average of 54 miles per gallon for new passenger cars and light trucks by model year 2025. Individual manufacturers, which are considered unlikely to meet the standards without EV offerings, are allowed to count each qualified EV as two vehicles instead of one in near-term compliance calculations.
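
To illustrate how the two-for-one counting helps, consider a stylized fleet-average calculation with assumed numbers (the actual CAFE and greenhouse gas rules involve additional adjustments and credits). Fleet fuel-economy averages are sales-weighted harmonic means, so a manufacturer selling 95 gasoline cars rated at 40 mpg and 5 EVs rated at 120 mpg-equivalent would report

\[
\text{EVs counted once: } \frac{95+5}{\frac{95}{40}+\frac{5}{120}} \approx 41.4 \text{ mpg}, \qquad
\text{EVs counted twice: } \frac{95+10}{\frac{95}{40}+\frac{10}{120}} \approx 42.7 \text{ mpg}.
\]

Doubling the EV count nudges the reported fleet average upward without any change in the vehicles actually sold.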

The U.S. Department of Energy (DOE) is actively funding research, development, and demonstration programs to improve EV-related systems. Loan guarantees and grants are also being used to support the production of battery packs, electric drive-trains, chargers, and the start-up of new plug-in vehicle assembly plants. The absence of a viable business model has slowed the growth of recharging infrastructure, but governments and companies are subsidizing a growing number of public recharging stations in key urban locations and along some major interstate highways. Some states and cities have gone further by offering EV owners additional cash incentives, HOV-lane access, and low-cost city parking.

Private industry has responded to the national EV agenda. Automakers are offering a growing number of plug-in EV models (three in 2010; seventeen in 2014), some that are fueled entirely by electricity (battery-operated electric vehicles, or BEVs) and others that are fueled partly by electricity and partly by a back-up gasoline engine (plug-in hybrids, or PHEVs). Coalitions of automakers, car dealers, electric utilities, and local governments are working together in some cities to make it easy for consumers to purchase or lease an EV, to access recharging infrastructure at home, in their office or in their community, and to obtain proper service of their vehicle when problems occur. Government and corporate fleet purchasers are considering EVs while cities as diverse as Indianapolis and San Diego are looking into EV sharing programs for daily vehicle use. Among city planners and utilities, EVs are now seen as playing a central role in “smart” transportation and grid systems.

The recent push for EVs is hardly the market-oriented approach to innovation that would have thrilled Milton Friedman. It somewhat resembles the bold industrial policies of the post-World War II era that achieved significant successes in South Korea, Japan, and China. Although the U.S. is a market-oriented economy, it is difficult to imagine that the U.S. successes in aerospace, information technology, nuclear power, or even shale gas would have occurred without a supportive hand from government. In this article, we make a pragmatic case for stability in federal EV policies until 2017, when a large body of real-world experience will have been generated and when a midterm review of federal auto policies is scheduled.

Laurence Gartel and Tesla Motors

Digital artist Laurence Gartel collaborated with Tesla Motors to transform the electric Tesla Roadster into a work of art by wrapping the car’s body panels in bold colorful vinyl designed by the artist. The Roadster was displayed and toured around Miami Beach during Miami’s annual Art Basel festival in 2010.

Gartel, an artist who has experimented with digital art since the 1970s, was a logical collaborator with Tesla given his creative uses of technology. He graduated from the School of Visual Arts, New York, in 1977, and has pursued a graphic style of digital art ever since. His experiments with computers, starting in 1975, involved the use of some of the earliest special effects synthesizers and early video paint programs. Since then, his work has been exhibited at the Museum of Modern Art; Long Beach Museum of Art; Princeton University Art Museum; MoMA PS 1, New York City; and the Norton Museum of Art, West Palm Beach, Florida. His work is in the collections of the Smithsonian Institution’s National Museum of American History and the Bibliotheque Nationale, Paris.

Image courtesy of the artist.

Governmental interest in EVs

The federal government’s interest in electric transportation technology is rooted in two key advantages that EVs have over the gasoline- or diesel-powered internal combustion engine. Since the advantages are backed by an extensive literature, we summarize them only briefly here.

First, electrification of transport enhances U.S. energy security by replacing dependence on petroleum with a flexible mixture of electricity sources that can be generated within the United States (e.g. natural gas, coal, nuclear power, and renewables). The U.S. is making rapid progress as an oil producer, which enhances security, but electrification can further advance energy security by curbing the nation’s high rate of consumption in the world oil market. The result: less global dependence on energy from OPEC producers, unstable regimes in the Middle East, and Russia.

Second, electrification of transport is more sustainable on a life-cycle basis because it causes a net reduction in local air pollution and greenhouse gas emissions, an advantage that is expected to grow over time as the U.S. electricity mix shifts toward more climate-friendly sources such as gas, nuclear, and renewables. Contrary to popular belief, an electric car that is powered by coal-fired electricity is still modestly cleaner from a greenhouse gas perspective than a typical gasoline-powered car. And EVs are much cleaner if the coal plant is equipped with modern pollution controls for localized pollutants and if carbon capture and storage (CCS) technology is applied to reduce carbon dioxide emissions. Since the EPA is already moving to require CCS and other environmental controls on coal-fired power plants, the environmental case for plug-in vehicles will only become stronger over time.

Although the national push to commercialize EVs is less than six years old, there have been widespread claims in the mainstream press, on drive-time radio, and on the Internet that the EV is a commercial failure. Some prominent commentators, including Charles Krauthammer, have suggested that the governmental push for EVs should be reconsidered.

It is true that many mainstream car buyers are unfamiliar with EVs and are not currently inclined to consider them for their next vehicle purchase. Sales of Tesla’s impressive (and pricey) Model S have been better than the industry expected, but ambitious early sales goals for the Nissan Leaf (a BEV) and the Chevrolet Volt (a PHEV) have not been met. General Electric backed away from an original pledge to purchase 25,000 EVs. Several companies with commercial stakes in batteries, EVs, or chargers have gone bankrupt, despite assistance from the federal government.

“Early adopters” of plug-in vehicles are generally quite enthusiastic about their experiences, but mainstream car buyers remain hesitant. There is much skepticism in the industry about whether EVs will penetrate the mainstream new-vehicle market or simply serve as “compliance cars” for California regulators or become niche products for taxi and urban delivery fleets.

One of the disadvantages of EVs is that they are currently more costly to produce than comparably sized gasoline- and diesel-powered vehicles. The cost premium today is about $10,000-$15,000 per vehicle, primarily due to the high price of lithium-ion battery packs. The cost disadvantage has been declining over time due to cost-saving innovations in battery-pack design and production techniques, but there is disagreement among experts about how much and how fast production costs will decline in the future.

On the favorable side of the affordability equation, EVs have a large advantage in operating costs: electricity is about 65% cheaper than gasoline on an energy-equivalent basis, and most analysts project that the price of gasoline in the United States will rise more rapidly over time than the price of electricity. Additionally, repair and maintenance costs are projected to be significantly lower for plug-in vehicles than for gasoline vehicles. When all of the private financial factors are taken into account, the total cost of ownership over the lifetime of the EV is comparable to—or even lower than—that of a gasoline vehicle, and that advantage can be expected to grow as EV technology matures.
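
The scale of the operating-cost advantage is easy to see with illustrative figures (the fuel price, electricity rate, and efficiencies below are assumptions, not data from this article). A gasoline car rated at 30 mpg with fuel at $3.50 per gallon costs about

\[
\frac{\$3.50}{30\ \text{miles}} \approx 11.7\ \text{cents per mile},
\]

while an EV using about 0.3 kWh per mile with electricity at 12 cents per kWh costs about

\[
0.3\ \text{kWh/mile} \times \$0.12/\text{kWh} \approx 3.6\ \text{cents per mile},
\]

a reduction of roughly 70% per mile. Over 100,000 miles, that difference amounts to about $8,000, which, together with the federal tax credit and lower maintenance costs, offsets much of the production cost premium.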

Trends in EV sales

Despite the financial, environmental, and security advantages of the EV, early sales have not matched initial hopes. Nissan and General Motors led the high-volume manufacturers with EV offerings but have had difficulty generating sales, even though auto sales in the United States were steadily improving from 2010 through 2013, the period when the first EVs were offered. In 2013 EVs accounted for only about 0.2% of the 16 million new passenger vehicles sold in the U.S.

Nissan-Renault has been a leader. At the 2007 Tokyo Motor Show, Nissan shocked the industry with a plan to leapfrog the gasoline-electric hybrid with a new mass-market BEV, called the Fluence in France and the Leaf in the U.S. Nissan’s business plan called for EV sales of 100,000 per year in the U.S. by 2012, and Nissan was awarded a $1.6 billion loan guarantee by DOE to build a new facility in Smyrna, Tennessee to produce batteries and assemble EVs. The company had plans to sell 1.5 million EVs on a global basis by 2016 but, as of late 2013, had sold only 120,000 and acknowledged that it will fall short of its 2016 global goal by more than 1 million vehicles.

General Motors was more cautious than Nissan, planning U.S. production of 10,000 Volts in 2011 and 60,000 in 2012. However, neither target was met. GM did “re-launch” the Volt in early 2012 after addressing a fire-safety concern, obtaining HOV-lane access in California for Volt owners, cutting the base price, and offering a generous leasing arrangement of $350 per month for 36 months. Volt sales rose from 7,700 in 2011 to 23,461 in 2012 and 23,094 in 2013.

The most recent full-year U.S. sales data (2013) reveal that the Volt is now the top-selling plug-in vehicle in the U.S., followed by the Leaf (22,610), the Tesla Model S (18,000), and the Toyota Prius Plug-In (12,088). In the first six months of 2014, EV sales were up 33% over the same period in 2013, led by the Nissan Leaf and an impressive start from the Ford Fusion Energi PHEV. Although sales at Tesla have slowed a bit, the company has announced plans for a new $5 billion battery plant in the southwestern U.S. to supply batteries for the production of up to 500,000 vehicles per year for worldwide distribution.

President Obama, in 2009 and again in his January 2011 State of the Union address, set the ambitious goal of putting one million plug-in vehicles on the road by 2015. Two years after the address, DOE and the administration dropped the national 2015 goal, recognizing that it was overly ambitious and would take longer to achieve. But does this refinement of a federal goal really prove that EVs are a commercial failure? We argue that it does not, pointing to two primary lines of evidence: a historical comparison of EV sales with conventional hybrid sales; and a cross-country comparison of U.S. EV sales with German EV sales.

Comparison with the conventional hybrid

A conventional hybrid, defined as a gasoline-electric vehicle such as the Toyota Prius, is different from a plug-in vehicle. Hybrids vary in their design, but they generally recharge their batteries during the process of braking (“regenerative braking”) or, if the brakes are not in use during highway cruising, from the power of the gasoline engine. Thus, a conventional hybrid cannot be plugged in for charging and does not draw electricity from the grid.

Cars with hybrid engines are also more expensive to produce than gasoline cars, primarily because they have two propulsion systems. For a comparably sized vehicle, the full hybrid costs $4,000 to $7,500 more to produce than a gasoline version. But the hybrid buyer can expect 30% better fuel economy and lower maintenance and repair costs than with a gasoline-only engine. According to recent life-cycle and cost-benefit analyses, conventional hybrids compare favorably to the current generation of EVs.
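
A back-of-the-envelope calculation with assumed figures shows how that tradeoff plays out. At 12,000 miles per year, a 30-mpg gasoline car burns 400 gallons, while a comparable hybrid with 30% better fuel economy (39 mpg) burns about 308 gallons; at $3.50 per gallon, the hybrid saves roughly

\[
(400 - 308)\ \text{gallons} \times \$3.50 \approx \$320\ \text{per year}
\]

in fuel, before any savings on maintenance and repairs, set against a $4,000 to $7,500 production cost premium. How favorable the hybrid looks therefore depends heavily on fuel prices, annual mileage, and how long the buyer keeps the car.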

Toyota is currently the top seller of hybrids, offering 22 models globally that feature the gasoline-electric combination. To date, more than 3 million Priuses have been sold worldwide, and the Prius has recently expanded into an entire family of models. In 2013, U.S. sales of the Prius were 234,228, of which 30% were registered in California, where the Prius was the top-selling vehicle line in both 2012 and 2013.

The success of the Prius did not occur immediately after introduction. Toyota and Honda built on more than a decade of engineering research funded by DOE and industry. Honda was actually the first company to offer a conventional hybrid in the U.S.—the Insight Hybrid in 1999—but Toyota soon followed in 2000 with the more successful Prius. Ford followed with the Escape Hybrid SUV. The experience with conventional hybrids underscores the long lead times in the auto industry, the multiyear process of commercialization, and the conservative nature of the mainstream U.S. car purchaser.

Fifteen years ago, critics of conventional hybrids argued that the fuel savings would not be enough to justify the cost premium of two propulsion systems, that the batteries would deteriorate rapidly and require expensive replacement, that resale values for hybrids would be discounted, that the batteries might overheat and create safety problems, and that hybrids were practical only for small, lightweight cars. “Early adopters” of the Prius, which carried a hefty price premium for a small car, were often wealthy, highly educated buyers who were attracted to the latest technology or wanted to make a pro-environment statement with their purchase. The process of expanding hybrid sales from early adopters to mainstream consumers took many years, and it continues today, fifteen years later.

When the EV and the conventional hybrid are compared according to the pace of market penetration in the United States, the EV appears to be more successful (so far). Figure 1 illustrates this comparison by plotting the cumulative number of vehicles sold—conventional hybrids versus EVs—during the first 43 months of market introduction. At month 25, EV sales were about double the number of hybrid sales; at month 40 the ratio of cumulative EV sales to cumulative hybrid sales was about 2.2. The overall size of the new passenger-vehicle market was roughly equal in the two time periods.

When comparing the penetration rates of hybrids and EVs, it is useful to highlight some of the differences in the technologies, policies, and economic environments. The plug-in aspect of the EV calls for a much larger change in the routine behavior of motorists (e.g., nighttime and community charging) than does the conventional hybrid. The early installations of 220-volt home charging stations, which reduce recharging time from 12-18 hours to 3-4 hours, were overly expensive, time-consuming to set up with proper permits, and an irritation to early adopters of EVs. Moreover, the EV owner is more dependent on the decisions of other actors (e.g., whether employers or shopping malls supply charging stations and whether the local utility offers low electricity rates for nighttime charging) than is the hybrid owner.

The success of the conventional hybrid helped EVs get started by creating an identifiable population of potential early adopters that marketers of the EV have exploited. Now, one of the significant predictors of EV ownership is prior ownership of a conventional hybrid. Some of the early EV owners immediately gained HOV access in California, but Prius owners were not granted HOV lane access until 2004, several years after market introduction. California phased out HOV access for hybrids from 2007 to 2011 and now awards the privilege to qualified EV owners.

From a financial perspective, purchases of conventional hybrids and EVs were not equally subsidized by the government. EV purchasers were enticed by a $7,500 federal tax credit; the tax deduction (and later credit) for conventional hybrid ownership was much smaller, at less than $3,200. Some states (e.g., California and Colorado) supplemented the $7,500 federal tax credit with $1,000 to $2,500 credits (or rebates) of their own for qualified EVs; few conventional hybrid purchasers were provided a state-level purchase incentive. Nominal fuel prices were around $2 per gallon but rising rapidly in 2000-2003, the period when the hybrid was introduced to the U.S. market; fuel prices were volatile and in the $3-$4 per gallon range from 2010-2013, when EVs were initially offered. The roughly $2,000 cost of a Level 2 (220-volt) home recharging station (equipment plus labor for installation) was for several years subsidized by some employers, utilities, government grants, or tax credits. Overall, financial inducements to purchase an EV from 2010 to 2013 were stronger than the inducements for a conventional hybrid from 2000 to 2003, possibly helping explain why the take-up of EVs has been faster.
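A rough tally, sketched below, illustrates the size of that incentive gap; the particular state-credit and charger-subsidy amounts are illustrative picks from the ranges cited above, not official program values.

# Rough tally of purchase incentives, using figures cited in the text.
# The state credit and the charger-subsidy share are illustrative picks
# from the cited ranges, not precise program values.

ev_incentives = {
    "federal tax credit (2010-2013)": 7500,
    "state credit or rebate (e.g., CA or CO)": 2000,   # within the cited $1,000-$2,500 range
    "assumed partial subsidy of the ~$2,000 home charger": 1000,
}
hybrid_incentives = {
    "federal deduction, later credit (2000-2003)": 3200,  # upper bound cited above
}

print("EV purchase incentives, 2010-2013:     $", sum(ev_incentives.values()))
print("Hybrid purchase incentives, 2000-2003: $", sum(hybrid_incentives.values()))

Even before counting any fuel-cost advantage, the nominal incentive gap is on the order of $7,000 per vehicle under these assumptions.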

Comparison with Germany

Another way to assess the success of EV sales in the United States since 2010 is to compare them with sales in a country where EV policies are different. Germany is an interesting comparator because it is a prosperous country with a strong "pro-environment" tradition, a large and competitive car industry, and relatively high fuel prices of $6-$8 per gallon due to taxation. Electricity prices are also much higher in Germany than in the U.S. due to an aggressive renewables policy.

Like President Barack Obama, German Chancellor Angela Merkel has set a goal of putting one million plug-in vehicles on the road, but the target date in Germany is 2020 rather than 2015. Germany has also made a large public investment in R&D to enhance battery technology and a more modest investment in community-based demonstrations of EV technology and recharging infrastructure.

On the other hand, Germany has decided against instituting a large consumer tax credit similar to the €10,000 “superbonus” for EVs that is available in France. Small breaks for EV purchasers in Germany are offered on vehicle sales taxes and registration fees. Nothing equivalent to HOV-lane access is offered to German EV users yet. Germany also offers few subsidies for production of batteries and electric drivetrains and no loan guarantees for new plants to assemble EVs.

Since the German car manufacturers are leaders in the diesel engine market, the incentive for German companies to explore radical alternatives to the internal combustion engine may be tempered. Also, German engineers appear to be more confident in the long-term promise of the hydrogen fuel cell than in cars powered by lithium ion battery packs. Even the conventional hybrid engine has been slow to penetrate the German market, though there is some recent interest in diesel-electric hybrid technology. Daimler and Volkswagen have recently begun to offer EVs in small volumes, but the advanced EV technology in BMW's "i" line is particularly impressive.

FIGURE 1. Cumulative U.S. sales of conventional hybrids versus plug-in electric vehicles during the first 43 months after each technology's market introduction.

Another key difference between Germany and the U.S. is that Germany has no regulation similar to California's Zero Emission Vehicle (ZEV) program. The latest version of the ZEV mandate requires each high-volume manufacturer doing business in California to offer at least 15% of its vehicles as EVs or fuel cell vehicles by 2025. Some other states (including New York), which account for almost a quarter of the auto market, have joined the ZEV program. The ZEV program is a key driver of EV offerings in the U.S. In fact, some global vehicle manufacturers have stated publicly that, were it not for the ZEV program, they might not be offering plug-in vehicles to consumers. Since the EU's policies are less generous to EVs, some big global manufacturers are focusing their EV marketing on the West Coast of the U.S. and giving less emphasis to Europe.

Overall, from 2010 to 2013 Germany experienced less than half of the market-share growth in EV sales that occurred in the U.S. The difference is consistent with the view that the policy push in the U.S. has made a difference. The countries in Europe where EVs are spreading rapidly (Norway and the Netherlands) have enacted large financial incentives for consumers coupled with coordinated municipal and utility policies that favor EV purchase and use.

Addressing barriers to adoption of EVs

The EV is not a static technology but a rapidly evolving technological system that links cars with utilities and the electrical grid. Automakers and utilities are addressing many of the barriers to more widespread market diffusion, guided by the reactions of early adopters.

Acquisition cost. The price premium for an EV is declining due to savings in production costs and price competition within the industry. Starting in February 2013, Nissan dropped the base price of the Leaf from $35,200 to $28,800 with only modest decrements to base features (e.g., loss of the telematics system). Ford and General Motors responded by dropping the prices of the Focus Electric and Volt by $4,000 and $5,000, respectively. Toyota chipped in with a $4,620 price cut on the plug-in version of the Toyota Prius (now priced under $30,000), but it is eligible for only a $2,500 federal tax credit. And industry analysts report that the transaction prices for EVs are running even lower than the diminished list prices, in part due to dealer incentives and attractive financing deals.

Dealers now emphasize affordable leasing arrangements, with a majority of EVs in the U.S. acquired under leasing deals. Leasing allays consumer concerns that the batteries may not hold up to wear and tear, that resale values of EVs may plummet after purchase (a legitimate concern), and that the next generation of EVs may be vastly improved compared to current offerings. Leasing deals for under $200 per month are available for the Chevy Spark EV, the Fiat 500e, the Leaf, and Daimler's Smart ForTwo EV; lease rates for the Honda Fit EV, the Volt, and the Ford Focus EV are between $200 and $300 per month. Some car dealers offer better deals than the nationwide leasing plans provided by vehicle manufacturers.

Driving range. Consumer concerns about limited driving range—80-100 miles for most EVs, though the Tesla achieves 200-300 miles per charge—are being addressed in a variety of ways. PHEVs typically have maximum driving ranges equal to or better than those of a comparable gasoline car, and a growing body of evidence suggests that PHEVs may attract more retail customers than BEVs. For consumers interested in BEVs, some dealers are also offering free short-term use of gasoline vehicles for long trips when the BEV has insufficient range. The upscale BMW i3 EV is offered with an optional gasoline engine for $3,850 that replenishes the battery as it runs low; the effective driving range of the i3 is thus extended from 80-100 miles to 160-180 miles.

Recharging time. Some consumers believe that the 3-4 hour recharging time with a Level 2 charger is too long. Use of super-fast Level 3 chargers can accomplish an 80% charge in about 30 minutes, although inappropriate use of Level 3 chargers can potentially damage the battery. In the crucial West Coast market, where consumer interest in EVs is the highest, Nissan is subsidizing dealers to make Level 3 chargers available for Leaf owners. BMW is also offering an affordable Level 3 charger. State agencies in California, Oregon, and Washington are expanding the number of Level 2 and Level 3 chargers available along interstate highways, especially Interstate 5, which runs from the Canadian to the Mexican borders.
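The differences in recharging time follow directly from charger power. The sketch below works through the arithmetic for a small battery-electric car; the pack size, power ratings, and charging efficiency are assumptions chosen for illustration (roughly Leaf-class values), not specifications drawn from this article.

# Approximate recharging times implied by charger power. The 24-kWh pack,
# the power ratings, and the 90% charging efficiency are assumptions,
# not figures taken from the article.

def hours_to_charge(energy_kwh, power_kw, efficiency=0.90):
    """Hours needed to deliver energy_kwh to the battery at a given power."""
    return energy_kwh / (power_kw * efficiency)

battery_kwh = 24.0  # assumed usable pack size for a small BEV

scenarios = [
    ("Level 1 (120 V, ~1.4 kW), full charge", 1.4, 1.0),
    ("Level 2 (220/240 V, ~6.6 kW), full charge", 6.6, 1.0),
    ("Level 3 DC fast (~50 kW), 80% charge", 50.0, 0.8),
]
for label, power_kw, fraction in scenarios:
    hours = hours_to_charge(battery_kwh * fraction, power_kw)
    print(f"{label}: about {hours:.1f} hours")

Under these assumptions the output lines up roughly with the times quoted above: overnight charging on household current, a few hours on a Level 2 station, and about half an hour for an 80% fast charge.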

As of 2013, a total of 6,500 Level 2 and 155 Level 3 charging stations were available to the U.S. public. Some station owners require users to be a member of a paid subscription plan. Tesla has installed 103 proprietary “superchargers” for its Model S that allow drivers to travel across the country or up and down both coasts with only modest recharging times. America’s recharging infrastructure is tiny compared to the 170,000 gasoline stations, but charging opportunities are concentrated in areas where EVs are more prevalent, such as southern California, San Francisco, Seattle, Dallas-Fort Worth, Houston, Phoenix, Chicago, Atlanta, Nashville, Chattanooga, and Knoxville.

Advanced battery and grid systems. R&D efforts to find improved battery systems have intensified. DOE recently set a goal of reducing the costs of battery packs and electric drive systems by 75% by 2022, with an associated 50% reduction in the current size and weight of battery packs. Whether DOE's goals are realistic is questionable. Toyota's engineers believe that by 2025 improved solid-state and lithium air batteries will replace lithium ion batteries for EV applications. The result would be a three- to five-fold rise in power at a significantly lower cost of production due to the use of fewer expensive rare earths. Lithium-sulfur batteries may also deliver more miles per charge and better longevity than lithium ion batteries.

Researchers are also exploring demand-side management of the electrical grid with "vehicle-to-grid" (V2G) technology. This innovation could enable electric car owners to make money by storing power in their vehicles for later use by utilities on the grid. It might cost an extra $1,500 to fit a V2G-enabled battery and charging system to a vehicle, but the owner might recoup $3,000 per year from a load-balancing contract with the electric utility. It is costly for utilities to add storage capacity, and the motorist already needs the battery for times when the vehicle is in use, so a V2G contract might allow for optimal use of the battery.
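Taken at face value, the figures quoted above imply a very short payback period for the V2G equipment; the sketch below does the arithmetic, ignoring battery degradation and contract risk, which is an optimistic simplification.

# Simple V2G payback sketch using the figures quoted above: roughly $1,500
# of extra equipment versus roughly $3,000 per year from a load-balancing
# contract. Battery degradation and contract risk are ignored (optimistic).

v2g_equipment_cost = 1500       # incremental vehicle cost cited in the text
annual_contract_revenue = 3000  # potential annual utility payment cited in the text

print(f"Payback period: about {v2g_equipment_cost / annual_contract_revenue:.1f} years")

for year in range(1, 6):
    net = year * annual_contract_revenue - v2g_equipment_cost
    print(f"Cumulative net benefit after year {year}: ${net:,}")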

Low-price electricity and EV sharing. Utilities and state regulators are also experimenting with innovative charging schemes that will favor EV owners who charge their vehicles at times when electricity demand is low. Mandatory time-of-use pricing has triggered adverse public reactions but utilities are making progress with more modest, incentive-based pricing schemes that favor nighttime and weekend charging. Atlanta is rapidly becoming the EV capital of the southern United States, in part because Georgia’s utilities offer ultra-low electricity prices to EV owners.

A French company has launched electric-car sharing programs in Paris and Indianapolis. Modeled after bicycle sharing, the programs let consumers rent an e-car for several hours or an entire day if they need a vehicle for multiple short trips in the city. The vehicle can be accessed with a credit card and returned at any of multiple points in the city. The commercial success of EV sharing is not yet demonstrated, but sharing schemes may play an important role in raising public awareness of the advancing technology.

The EV’s competitors

The future of the EV would be easier to forecast if the only competitor were the current version of the gasoline engine. History suggests, however, that unexpected competitors can emerge that change the direction of consumer purchases.

The EV is certainly not a new idea. In the 1920s, the United States was the world's largest user of electric cars, and electric cars outsold gasoline-powered cars. Actually, steam-powered cars were among the most popular offerings in that era.

EVs and steam-powered cars lost out to the internal combustion engine for a variety of reasons. Discovery of vast oil supplies made gasoline more affordable. Mass production techniques championed by Henry Ford dropped the price of a gasoline car more rapidly than the price of an electric car. Public investments in new highways connected cities, increased consumer demand for vehicles with long driving range, and therefore reduced the relative appeal of range-limited electric cars, whose value was highest for short trips inside cities. And car engineers devised more convenient ways to start a gasoline-powered vehicle, making it more appealing to female as well as male drivers. By the 1930s, the electric car had lost its place in the market and did not return for many decades.

Looking to the future, it is apparent that EVs will confront intensified competition in the global automotive market. The vehicles described in Table 1 are simply an illustration of the competitive environment.

Vehicle manufacturers are already marketing cleaner gasoline engines (e.g., Ford’s “EcoBoost” engines with turbochargers and direct-fuel injection) that raise fuel economy significantly at a price premium that is much less than the price premium for a conventional hybrid or EV. Clean diesel-powered cars, which have already captured 50% of the new-car market in Europe, are beginning to penetrate the U.S. market for cars and pick-up trucks. Toyota argues that an unforeseen breakthrough in battery technology will be required to enable a plug-in vehicle to match the improving cost-effectiveness of a conventional hybrid.

Meanwhile, the significant reduction in natural gas prices due to the North American shale-gas revolution is causing some automakers to offer vehicles that can run on compressed natural gas or gasoline. Proponents of biofuels are also exploring alternatives to corn-based ethanol that can meet environmental goals at a lower cost than an EV. Making ethanol from natural gas is one of the options under consideration. And some automakers believe that hydrogen fuel cells are the most attractive long-term solution, as the cost of producing fuel cell vehicles is declining rapidly.

TABLE 1. Illustrative examples of vehicles competing with the EV in the global automotive market.

As attractive as some of the EV's competitors may be, it is unlikely that regulators in California and other states will lose interest in EVs. (In theory, the ZEV mandate also gives manufacturers credit for cars with hydrogen fuel cells, but the refueling infrastructure for hydrogen is developing even more slowly than it is for EVs.) A coalition of eight states, including California, recently signed a Memorandum of Understanding aimed at putting 3.3 million EVs on the road by 2025. The states, which account for 23% of the national passenger vehicle market, have agreed to extend California's ZEV mandate, ideally in ways that will allow for compliance flexibility as to exactly where EVs are sold.

ZEV requirements do not necessarily reduce pollution or oil consumption in the near term, since they are not coordinated with national mileage and pollution caps. Thus, when more ZEVs are sold in California and other ZEV states, automakers are freed to sell more fuel-inefficient and polluting vehicles in non-ZEV states. Without better coordination between state and federal policies, the laudable goals of the ZEV mandate could be frustrated.

All things considered, America's push toward transport electrification is off to a modestly successful start, even though some of the early goals for market penetration were overly ambitious. Automakers were certainly losing money on their early EV models, but that was true of conventional hybrids as well. The second generation of EVs now arriving in showrooms is likely to be more attractive to consumers, since the vehicles have been refined based on the experiences of early adopters. And as more recharging infrastructure is added, cautious consumers with "range anxiety" may become more likely to consider a BEV, or at least a PHEV.

Vehicle manufacturers and dealers are also beginning to focus on how to market the unique performance characteristics of an EV. Instead of touting primarily fuel savings or environmental virtue, marketers are beginning to echo a common sentiment of early adopters: EVs are enjoyable to drive because, with their relatively high torque and quiet yet powerful acceleration, they are a unique driving experience.

Now is not the right time to redo national EV policies. EVs and their charging infrastructure have not been available long enough to draw definitive conclusions. Vehicle manufacturers, suppliers, utilities, and local governments have made large EV investments with an understanding that federal auto-related policies will be stable until 2017, when a national mid-term policy review is scheduled.

It is not too early to frame some of the key issues that will need to be considered between now and 2017. First, are adequate public R&D investments being made in the behavioral as well as technological aspects of transport electrification? We believe that DOE needs to reaffirm the commitment to better battery technology while giving more priority to understanding the behavioral obstacles to all forms of green vehicles. Second, we question whether national policy should continue a primary focus on EVs. It may be advisable to stimulate a more diverse array of green vehicle technologies, including cars fueled by natural gas, hydrogen, advanced ethanol, and clean diesel fuel. Third, federal mileage and carbon standards may need to be refined to ensure cost-effectiveness and to provide a level playing field for the different propulsion systems. Fourth, highway-funding schemes need to shift from gasoline taxes to mileage-based road user fees in order to ensure that adequate funds are raised for road repairs and that owners of green vehicles pay their fair share. Fifth, California’s policies need to be better coordinated with federal policies in ways that accomplish environmental and security objectives and allow vehicle manufacturers some sensible compliance flexibility. Finally, on an international basis, policy makers in the European Union, Japan, Korea, China, California and the United States should work together to accomplish more regulatory cooperation in this field, since manufacturers of batteries, chargers, and vehicles are moving toward global platforms that can efficiently provide affordable technology to consumers around the world.

Coming to a policy consensus in 2017 will not be easy. In light of the fast pace of change and the many unresolved issues, we recommend that universities and think tanks begin to sponsor conferences, workshops, and white papers on these and related policy issues, with the goal of analyzing the available information to create well-grounded recommendations for action come 2017.

John D. Graham (grahamjd@indiana.edu) is dean, Joshua Cisney is a graduate student, Sanya Carley is an associate professor, and John Rupp is a senior research scientist at the School of Public and Environmental Affairs at Indiana University.

Streamlining the Visa and Immigration Systems for Scientists and Engineers

ALBERT H. TEICH

Current visa policies and regulations pose hurdles for the nation’s scientific and education enterprise. This set of proposals may offer an effective, achievable, and secure way forward.

Alena Shkumatava leads a research group at the Curie Institute in Paris studying how an unusual class of genetic material called noncoding RNA affects embryonic development, using zebrafish as a model system. She began this promising line of research as a postdoctoral fellow at the Massachusetts Institute of Technology’s Whitehead Institute. She might still be pursuing it there or at another institution in the United States had it not been for her desire to visit her family in Belarus in late 2008. What should have been a short and routine trip “turned into a three-month nightmare of bureaucratic snafus, lost documents and frustrating encounters with embassy employees,” she told the New York Times. Discouraged by the difficulties she encountered in leaving and reentering the United States, she left MIT at the end of her appointment to take a position at the Curie Institute.

Shkumatava’s experience, along with numerous variations, has become increasingly familiar—and troublesome for the nation. For the past 60 years, the United States has been a magnet for top science and engineering talent from every corner of the world. The contributions of hundreds of thousands of international students and immigrants have helped the country build a uniquely powerful, productive, and creative science and technology enterprise that leads the world in many fields and is responsible for much of the growth of the U.S. economy and the creation of millions of high-value jobs. A few statistics suggest just how important foreign-born talent is to U.S. science and technology:

  • More than 30% of all Nobel laureates who have won their prizes while working in the United States were foreign-born.
  • Between 1995 and 2005, a quarter of all U.S. high-tech startups included an immigrant among their founders.
  • Roughly 40% of Fortune 500 firms—Google, Intel, Yahoo, eBay, and Apple, among them—were started by immigrants or their children.
  • At the 10 U.S. universities that have produced the most patents, more than three out of every four of those patents involved at least one foreign-born inventor.
  • More than five out of six patents in information technology (IT) in the United States in 2010 listed a foreign national among the inventors.

But the world is changing. The United States today is in a worldwide competition for the best scientific and engineering talent. Countries that were minor players in science and technology a few years ago are rapidly entering the major leagues and actively pursuing scientific and technical talent in the global marketplace. The advent of rapid, inexpensive global communication and of air travel within easy reach of researchers in many countries has fostered the growth of global networks of collaboration and is changing the way research is done. The U.S. visa and immigration systems need to change, too. Regulations and procedures have failed to keep pace with today's increasingly globalized science and technology. Rather than facilitating international commerce in talent and ideas, they too often inhibit it, discouraging talented scientific visitors, students, and potential immigrants from coming to and remaining in the United States. They cost the nation the goodwill of friends and allies and the competitive advantage it could gain from their participation in the U.S. research system and from increased international collaboration in cutting-edge research efforts.

It is easy to blame the problems that foreign scientists, engineers, and STEM (science, technology, engineering, and mathematics) students encounter in navigating the U.S. visa and immigration system on the more intense scrutiny imposed on visitors and immigrants in the aftermath of 9/11. Indeed, there is no question that the reaction to the attacks of 9/11 caused serious problems for foreign students and scientific visitors and major disruptions to many universities and other scientific institutions. But many of the security-related issues have been remedied in the past several years. Yet hurdles remain, derived from a more fundamental structural mismatch between current visa and immigration policies and procedures and today's global patterns of science and engineering education, research, and collaboration. If the United States is going to fix the visa and immigration system for scientists, engineers, and STEM students, it must address these underlying issues as well as those left over from the enhanced security regime of the post-9/11 era.

Many elements of the system need attention. Some of them involve visa categories developed years ago that do not apply easily to today’s researchers. Others derive from obsolescent immigration policies aimed at determining the true intent of foreigners seeking to enter the United States. Still others are tied to concerns about security and terrorism, both pre- and post-9/11. And many arise from the pace at which bureaucracies and legislative bodies adapt to changing circumstances. Here I offer a set of proposals to address these issues. Implementing some of the proposals would necessitate legislative action. Others could be implemented administratively. Most would not require additional resources. All are achievable without compromising U.S. security. Major components of these proposals include:

Simplify complex J-1 exchange visitor visa regulations and remove impediments to bona fide exchange. The J-1 visa is the most widely used type for visitors coming temporarily to the United States to conduct research or teach at U.S. institutions. Their stays may be as brief as a few weeks or as long as five years. The regulations governing the J-1 visa and its various subcategories, however, are complex and often pose significant problems for universities, research laboratories, and the scientific community, as illustrated by the following examples.


A young German researcher, having earned a Ph.D. in civil and environmental engineering in his home country, accepted an invitation to spend 17 months as a postdoctoral associate in J-1 Research Scholar status at a prestigious U.S. research university. He subsequently returned to Germany. A year later, he applied for and was awarded a two-year fellowship from the German government to further his research. Although he had a U.S. university eager to host him for the postdoctoral fellowship, a stipulation in the J-1 exchange visitor regulations that disallows returns within 24 months prevented the university from bringing him back in the Research Scholar category. There was no other visa for such a stay, and the researcher ultimately took his talent and his fellowship elsewhere.

A tenured professor in an Asian country was granted a nine-month sabbatical, which he spent at a U.S. university, facilitated by a J-1 visa in the Professor category. He subsequently returned to his country of residence, his family, and his position. An outstanding scholar, described by a colleague as a future Nobel laureate, he was appointed a permanent visiting professor at the U.S. university the following year. Because of the J-1 regulations, however, unless he comes for periods of six months or less when he visits, he cannot return on the J-1 exchange visitor visa. And if he does return for six months or less multiple times, he must seek a new J-1 program document, be assigned a new ID number in the Student and Exchange Visitor Information System (SEVIS), pay a $180 SEVIS fee, and seek a new entry visa at a U.S. consulate before each individual visit. The current J-1 regulations also stipulate that he must be entering the United States for a new “purpose” each time, which could pose additional problems.

The J-1 is one of three visa categories used by most STEM students and professional visitors in scientific and engineering fields coming to the United States: F-1 (nonimmigrant student), J-1 (cultural or educational exchange visitor), or H-1B (temporary worker in a specialty occupation). B-1/B-2 visas (visits for business, including conferences, or pleasure, or a combination of the two) are also used in some instances. Each of these categories applies to a broad range of applicants. The F-1 visa, for example, is required not just for STEM students but for full-time university and college students in all fields, elementary and secondary school students, seminary students, and students in a conservatory, as well as in a language school (but not a vocational school). Similarly, the J-1 covers exchange visitors ranging from au pairs, corporate trainees, student "interns," and camp counselors to physicians and teachers as well as professors and research scholars. Another J-1 category is for college and university students who are financed by the United States or their own governments or those participating in true "exchange" programs. The J-1 exchange visitor visa for research scholars and professors is, however, entangled in a maze of rules and regulations that impede rather than facilitate exchange.

In 2006, the maximum period of participation for J-1 exchange visitors in the Professor and Researcher categories was raised from three years to five years. That regulatory change was welcomed by the research community, in which grant funding for a research project or a foreign fellowship might exceed three years but there had previously been no way to extend the researcher's J-1 visa accordingly.

However, the new regulations simultaneously instituted new prohibitions on repeat exchange visitor program participation. In particular, the regulations prohibit an exchange visitor student who came to the United States to do research toward a Ph.D. (and any member of his family who accompanied him) from going home and then returning to the United States for postdoctoral training or other teaching or research in the Professor or Research Scholar category until 12 months have passed since the end of the previous J program.

A 24-month bar prohibits a former Professor or Researcher (and any member of her family who accompanied her) from engaging in another program in the Professor or Researcher category until 24 months have passed since the end date of the J-1 program. The exception to the bars is for professors or researchers who are hosted by their J program sponsor in the Short-Term Scholar category. This category has a limit of six months with no possibility of extension. The regulations governing this category indicate that such a visitor cannot participate in another stay as a Short-Term Scholar unless it is for a different purpose than the previous visit.

There are valid reasons for rules and regulations intended to prevent exchange visitors from completing one program and immediately applying for another. In other words, the rules should ensure that exchanges are really exchanges and not just a mechanism for the recruitment of temporary or permanent workers. It appears that the regulation was initially conceived to count J-1 program participation toward the five-year maximum in the aggregate. However, as written, the current regulations have had the effect of imposing the 24-month bar on visitors in the Professor and Researcher categories who have spent any period of participation (one month, seven months, or two years), most far shorter than the five-year maximum. Unless such a visitor is brought in under the Short-Term Scholar category (the category exempt from the bars) for six months or less only, the 24-month bar applies. Similarly, spouses of former J-1 exchange visitors in the Professor or Researcher categories who are also researchers in their own right and have spent any period as a J-2 “dependent” while accompanying a J-1 spouse are also barred from returning to the United States to engage in their own J-1 program as a Professor or Researcher until 24 months have passed. This applies whether or not that person worked while in the United States as a J-2. In addition, spouses subject to the two-year home residency requirement (a different, statutory bar based on a reciprocal agreement between the United States and foreign governments) cannot change to J-1 status inside the United States or seek a future J-1 program on their own.


U.S. universities are increasingly engaging in longer-term international research projects with dedicated resources from foreign governments, private industry, and international consortia, and are helping to build capacity at foreign universities, innovation centers, and tech hubs around the world. International researchers travel to the United States to consult, conduct research, observe, and teach the next generation of STEM students. The concept of “exchange,” born in the shadow of the Cold War, must be expanded to include the contemporary realities of worldwide collaboration and facilitate rather than inhibit frequent and repeat stays for varying periods.

In practice, this means rationalizing and simplifying J-1 exchange visitor regulations. Although an immigration reform bill developed in the Senate (S.744) makes several changes in the J-1 program that are primarily aimed at reducing abuses by employers who bring in international students for summer jobs, it does not address issues affecting research scholars or professors.

It may be possible, however, to make the needed changes by administrative means. In December 2008, the Department of State released a draft of revised regulations governing the J-1 exchange visitor visa with a request for comment. Included in the draft rule were changes to program administration, insurance requirements, SEVIS reporting requirements, and other proposed modifications. Although many comments were submitted, until recently there did not appear to be any movement on the provisions of most concern to the research community. However, the department is reported to have taken up the issue again, and a new version of the regulations is anticipated. This may prove to be a particularly opportune time to craft a regulatory fix to the impediments inherent in the 12- and 24-month bars.

Reconsider the requirement that STEM students demonstrate intent to return home. Under current immigration law, all persons applying for a U.S. visa are presumed to be intending to immigrate. Section 214(b) of the Immigration and Nationality Act, which has survived unchanged since the act was passed in 1952, states, “Every alien shall be presumed to be an immigrant until he establishes to the satisfaction of the consular officer, at the time of application for admission, that he is entitled to a nonimmigrant status…”

In practice, this provision means that a person being interviewed for a nonimmigrant visa, such as a student (F-1) visa, must persuade the consular officer that he or she does not intend to remain permanently in the United States. Simply stating the intent to return home after completion of one’s educational program is not enough. The applicant must present evidence to support that assertion, generally by showing strong ties to the home country. Such evidence may include connections to family members, a bank account, a job or other steady source of income, or a house or other property. For students, especially those from developing nations, this is often not a straightforward matter, and even though U.S. consular officers are instructed to take a realistic view of these young people’s future plans and ties, many visa applicants fail to meet this subjective standard. It is not surprising, therefore, that the vast majority of visa denials, including denials of student visas, are issued under 214(b) for failure to overcome the presumption of immigrant intent.

The Immigration and Nationality Act was written in an era when foreign students in the United States were relatively rare. In 1954–1955, for example, according to the Institute of International Education, there were about 34,000 foreign students studying in higher education institutions in the United States. In contrast, in 2012–2013 there were more than 819,000 international students in U.S. higher education institutions, nearly two-thirds of them at doctorate-granting universities. In the early post–World War II years, the presence of foreign students was regarded as a form of international cultural exchange. Today, especially in STEM fields, foreign graduate students and postdocs make up a large and increasingly essential element of U.S. higher education. According to recent (2010) data from the National Science Foundation, over 70% of full-time graduate students (master’s and Ph.D.) in electrical engineering and 63% in computer science in U.S. universities are international students. In addition, non-U.S. citizens (not including legal permanent residents) make up a majority of graduate students nationwide in chemical, materials, and mechanical engineering.

In the sense that it prevents prospective immigrants from using student visas as a “back door” for entering the United States (that is, if permanent immigrant status is the main, but unstated, purpose of seeking a student visa), it might be argued that 214(b) is serving its intended purpose. The problem, however, is the dilemma it creates for legitimate students who must demonstrate the intent to return home despite a real and understandable uncertainty about their future plans.

Interestingly, despite the obstacles that the U.S. immigration system poses, many students, especially those who complete a Ph.D. in a STEM field, do manage to remain in the country legally after finishing their degrees. This is possible because employment-based visa categories are often available to them and permanent residence, if they qualify, is also a viable option. The regulations allow F-1 visa holders a 60-day grace period after graduation. In addition, graduating students may receive a one-year extension for what is termed Optional Practical Training (OPT), so long as they obtain a job, which may be a paying position or an unpaid internship. Those who receive a bachelor’s, master’s, or doctorate in a STEM field at a U.S. institution may be granted a one-time 17-month extension of their OPT status if they remain employed.

While on F-1 OPT status, an individual may change status to an H-1B (temporary worker) visa. Unlike the F-1 visa, the H-1B visa does allow for dual intent. This means that the holder of an H-1B visa may apply for permanent resident status—that is, a green card—if highly qualified. This path from student status to a green card, circuitous though it may be, is evidently a popular one, especially among those who receive doctorates, as is shown by the data on “stay rates” for foreign doctorate recipients from U.S. universities.

Michael G. Finn of the Oak Ridge Institute for Science and Education has long tracked stay rates of foreign citizens who receive STEM doctorates in the United States. His 2009 report (the most recent available) indicates that of 9,223 foreign nationals who received science and engineering doctorates at U.S. universities in 1999, two-thirds were still in the United States 10 years later. Indeed, among those whose degrees were in physical and life sciences, the proportion remaining in the United States was about three-quarters.

Reform of 214(b) poses something of a dilemma. Although State Department officials understandably prefer not to discuss it in these terms, they evidently value the broad discretion it provides consular officers to exclude individuals who they suspect, based on their application or demeanor, pose a serious risk of absconding and/or overstaying their visa, but without having to provide specific reasons. One might argue that it is important to give consular officers such discretion, since they are, in most cases, the only officials from either the federal government or the relevant academic institution who actually meet the applicant face-to-face.

On the other hand, 214(b) may also serve to deter many otherwise well-qualified potential students from applying, especially those from developing nations, who could become valuable assets for the United States or their home countries with a U.S. STEM education.

What is needed is a more flexible policy that provides the opportunity for qualified international students who graduate with bachelor’s, master’s, or Ph.D. STEM degrees to remain in the United States if they choose to do so, without allowing the student visa to become an easy way to subvert regulations on permanent immigration. It makes little sense to try to draw such distinctions by denying that someone who is applying to study in the United States may be uncertain about his or her plans four (or more) years later.

Because 214(b) is part of the Immigration and Nationality Act, this problem requires a legislative fix. The immigration reform bill that passed the Senate in June 2013 (S.744) contains a provision that would allow dual intent for nonimmigrant students seeking bachelor’s or graduate degrees. [The provision applies to students in all fields, not just STEM fields. A related bill under consideration in the House of Representatives (H.R.2131) provides dual intent only for STEM students. However, no action has been taken on it to date.] Some version of this approach, which provides for discretion on the part of the consular officer without forcing the student visa applicant to make a choice that he or she is not really capable of making, is a more rational way to deal with this difficult problem.

Speed up the Visas Mantis clearance process and make it more transparent. A major irritant in the visa and immigration system for scientists, engineers, and STEM students over the past decade has been the delays in visa processing for some applicants. A key reason for these delays is the security review process known as Visas Mantis, which the federal government put in place in 1998 and which applies to all categories of nonimmigrant visas. Although reforms over the past several years have eased the situation, additional reforms could further improve the process.

Initially intended to prevent transfers of sensitive technologies to hostile nations or groups, Visas Mantis was used at first in a relatively small number of cases. It gained new prominence, however, in the wake of 9/11 and the heightened concern over terrorism and homeland security that followed. The number of visa applicants in scientific and engineering fields subject to Mantis reviews took a sudden jump in 2002 and 2003, causing a logjam of applications and no end of headaches for the science, engineering, and higher education communities. The number of Mantis reviews leapt from 1,000 cases per year in 2000 to 14,000 in 2002 and an estimated 20,000 in 2003. The State Department and the other federal agencies involved were generally unprepared for the increased workload and were slow to expand their processing capacity. The result was a huge backlog of visa applications and lengthy delays for many foreign students and scientists and engineers seeking to come to the United States. The situation has improved since then, although there have been occasional slowdowns, most likely resulting from variations in workload or staffing issues.

The Mantis process is triggered when a consular officer believes that an applicant might not be eligible for a visa for reasons related to security. If the consular officer determines that security concerns exist, he or she then requests a “security advisory opinion” (SAO), a process coordinated through an office in the State Department in which a number of federal agencies review the application. (The federal government does not provide the names of the agencies involved in an SAO, but the MIT International Scholars Office lists the FBI, CIA, Drug Enforcement Administration, Department of Commerce, Office of Foreign Assets Control, the State Department Bureau of International Security and Nonproliferation, and others, which seems like a plausible list.) Consideration of the application is held up pending approval by all of the agencies. The applicant is not informed of the details of the process, only that the application is undergoing “administrative processing.”

In most cases, the decision to refer an application for an SAO is not mandatory but is a matter of judgment on the part of the consular officer. Because most consular officers do not have scientific or technical training, they generally refer to the State Department’s Technology Alert List (TAL) to determine whether an application raises security concerns. The current TAL is classified, but the 2002 version is believed to be similar and is widely available on the Internet (for example, at http://www.bu.edu/isso/forms/tal.pdf). It contains such obviously sensitive areas as nuclear technology and ballistic missile systems, as well as “dual-use” areas such as fermentation technology and pharmacology, the applications of which are generally regarded as benign but can also raise security concerns. According to the department’s Foreign Affairs Manual, “Officers are not expected to be versed in all the fields on the list. Rather, [they] should shoot for familiarization and listen for key words or phrases from the list in applicants’ answers to interview questions.” It is also suggested that the officers consult with the Defense and Homeland Security attachés at their station. The manual notes that an SAO “is mandatory in all cases of applicants bearing passports of or employed by states designated as state sponsors of terrorism” (currently Cuba, Iran, Sudan, and Syria) engaged in commercial or academic activities in one of the fields included in the TAL. As an aside, it is worth noting that although there are few if any students from Cuba, Sudan, and Syria in the United States, Iran is 15th among countries of origin of international students, ahead of such countries as France, Spain, and Indonesia, and a majority of Iranian students (55%) are majoring in engineering fields.

In the near-term aftermath of 9/11, there were months when the average time to clear a Mantis SAO reached nearly 80 days. Within a year, however, it had declined to less than 21 days, and more recently, despite the fact that the percentage of F-1, J-1, and H-1B applications subject to Mantis SAO processing reached 10% in 2010, according to State Department data, the average processing time is two to three weeks. Nevertheless, cases in which visas are reported to be in “administrative processing” for several months or even longer are not uncommon. In fact, the State Department tells applicants to wait at least 60 days from the date of their interview or submission of supplementary documents to inquire about the status of an application under administrative processing.

In most cases, Mantis clearances for students traveling under F visas are valid for the length of their educational programs up to four years, as long as they do not change programs. However, students from certain countries (e.g., Iran) require new clearances whenever they leave the United States and seek to reenter. Visas Mantis clearances for students and exchange visitors under J visas and temporary workers under H visas are valid for up to two years, unless the nature of their activity in the United States changes. And B visa clearances are good for a year with similar restrictions.

The lack of technical expertise among consular officers is a concern often expressed among scientists who deal with visa and immigration issues. The fact that most such officers are limited in their ability to make independent judgments (for example, on the need for a Mantis review of a researcher applying for a J-1 exchange visitor visa) may well increase the cost of processing the visa as well as lead to unnecessary delays. The National Academy of Sciences report Beyond Fortress America, released in 2009, suggested that the State Department “include expert vouching by qualified U.S. scientists in the non-immigrant visa process for well-known scholars and researchers.” This idea, attractive as it sounds to the science community, seems unlikely to be acceptable to the State Department. Although “qualified U.S. scientists” could attest to the scientific qualifications and reputations of the applicants, they would not be able to make informed judgments on potential security risks and therefore could not substitute for Mantis reviews.

An alternative that might be more acceptable would be to use scientifically trained staff within the State Department—for example, current and former American Association for the Advancement of Science (AAAS) Science and Technology Policy Fellows or Jefferson Science Fellows sponsored by the National Academies—as advisers to consular officers. Since 1980, AAAS has placed over 250 Ph.D. scientists and engineers from a wide range of backgrounds in the State Department as S&T Policy Fellows. Over 100 are still working there. In the 2013–2014 fellowship year, there were 31. In addition, there were 13 Jefferson Science Fellows—tenured senior faculty in science, engineering, or medicine—at the State Department or the Agency for International Development, a number that has grown steadily each year since the program was started in 2004. These highly qualified individuals, a few of whom are already stationed at embassies and consulates, should be available on an occasional basis to augment consular officers’ resources. They, and other Foreign Service Officers with technical backgrounds, would be especially useful in countries that send large numbers of STEM students and visitors to the United States, such as China, India, and South Korea.

Measures that enhance the capacity of the State Department to make technical judgments could be implemented administratively, without the need for legislative action. A policy that would limit the time available for the agencies involved in an SAO to review an application could also be helpful. Improving the transparency of the Mantis process poses a dilemma. If a visa applicant poses a potential security risk, the government can hardly be expected to inform the applicant about the details of the review process. Nevertheless, since the vast majority of Mantis reviews result in clearing the applicant, it might be beneficial to both the applicant and the government to provide periodic updates on the status of the review without providing details, making the process at least seem a little less Kafkaesque.

Allow scientists and scholars to apply to renew their visas in the United States. Many students, scholars, and scientists are in the United States on long-term programs of study, research, or teaching that may keep them in the country beyond the period of validity of their visas. Although U.S. Citizenship and Immigration Services (USCIS) is able to extend immigration status as necessary to cover these programs, approval of status extension from USCIS is not the same thing as a valid visa that would enable international travel. Often, due to the need to attend international conferences, attend to personal business, or just visit family, students and scholars can find themselves in a situation where they have temporarily departed the United States but are unable to return without extensive delays for processing a visa renewal abroad. As consular sections may be uncomfortable positively adjudicating visa applications for those outside of their home country, it is not uncommon for applicants to be asked to travel from a third country back to their country of origin for visa processing, resulting in even greater expense and delay.

Until June 2004, the Department of State allowed many holders of E, H-1B, L, and O visas to apply for visa renewal by mail. This program was discontinued in the wake of 9/11 because of a mixture of concerns over security, resource availability, and the implementation of the then-new biometric visa program. Now, however, every nonimmigrant visa holder in the United States has already had electronic fingerprints collected as part of their visa record. Security screening measures have been greatly improved in the past decade. In addition, the Omnibus Spending Bill passed in early 2014 included language directing the State Department to implement a pilot program for the use of videoconferencing technology to conduct visa interviews. The time is right not only to reinstitute the practice of allowing applications for visa renewal inside the United States for those categories previously allowed, but also to expand the pool of those eligible for domestic renewal to include F-1 students and J-1 academic exchange visitors.


Reform the H-1B visa to distinguish R&D scientists and engineers from IT outsourcers. Discussion of scientists, engineers, and STEM students has received relatively little attention in the current debate on immigration policy, with one significant exception: the H-1B visa category. This category covers temporary workers in specialty occupations, including scientists and engineers in R&D (as well as, interestingly enough, fashion models of “distinguished merit and ability”). An H-1B visa is valid for three years, extendable for another three. The program is capped at 65,000 each fiscal year, but an additional 20,000 foreign nationals with advanced degrees from U.S. universities are exempt from this ceiling, and all H-1B visa holders who work at universities and at university- and government-affiliated nonprofits, including national laboratories, are also exempt.

Controversy has swirled about the H-1B program for the past several years as advocates of the program, citing shortages of domestic talent in several fields, have sought to expand it, while critics, denying the existence of shortages, express concern instead about unemployment and underemployment among domestically trained technical personnel and have fought expansion. Moreover, although the H-1B visa is often discussed as if it were a means of strengthening U.S. innovation by bringing more scientists and engineers to the United States or retaining foreign scientists and engineers who have gained a degree in this country, the program increasingly seems to serve a rather different purpose. Currently, the overwhelming majority of H-1B recipients work in computer programming, software, and IT. In fact, the top H-1B visa job title submitted by U.S. employers in fiscal 2013 was programmer analyst, followed by software engineer, computer programmer, and systems analyst. At least 21 of the top 50 job titles were in the fields of computer programming, software development, and related areas. The top three company sponsors of H-1B visa recipients were IT firms (Infosys Limited, Wipro, and Tata Consultancy Services, all based in India) as were a majority of the top 25. Many of these firms provide outsourcing of IT capabilities to U.S. firms with foreign (mainly Indian) staff working under H-1Bs. This practice has come under increasing scrutiny recently as the largest H-1B sponsor, Infosys, paid a record $34 million to settle claims of visa abuse brought by the federal government. Visa abuse aside, it is difficult to see how these firms and the H-1B recipients they sponsor contribute to strengthening innovation in the United States.

Reform of the H-1B program has been proposed for years, and although little action has been taken so far, this may change soon as the program is under active discussion as part of the current immigration debate. Modifications included in the Senate bill (S.744) would affect several important provisions of the program. The annual cap on H-1B visas would be increased from 65,000 to a minimum of 115,000, which could be raised to 180,000. The exemption for advanced degree graduates would be increased from 20,000 to 25,000 and would be limited to STEM graduates only. Even more important, the bill would create a new merit-based point system for awarding permanent residency permits (green cards). Under it, applicants would receive points for education, the number increasing from bachelor’s to doctoral degrees. Although there would be a quota for these green cards, advanced degree recipients from U.S. universities would be exempt, provided the recipient received his or her degree from an institution with a Carnegie classification of “very high” or “high” research activity, has an employment offer from a U.S. employer, and received the degree no more than five years before applying. This would be tantamount to “stapling a green card to the diploma”—terminology suggested by some advocates—and would bypass the H-1B program entirely.

The Senate bill retains the exemption of visa holders who work at universities and university- and government-affiliated nonprofits from the H-1B cap. Expanding this exemption to include all Ph.D. scientists and engineers engaged in R&D is also worth considering, although it does not appear to be part of either the Senate or the House bills. This would put Ph.D. researchers and their employers in a separate class from the firms that use the program for outsourcing of IT personnel. It would remove the issues relating to H-1B scientists and engineers from the debate over outsourcing and allow them to be discussed on their own merits—namely, their contribution to strengthening R&D and innovation in the United States.

Expand the Visa Waiver Program to additional countries. The Visa Waiver Program (VWP) allows citizens of a limited number of countries (currently 37) to travel to the United States for certain purposes without visas. Although it does not apply to students and exchange visitors under F and J visas, it does include scientists and engineers attending conferences and conventions who would otherwise travel under a B visa, as well as individuals participating in short-term training (less than 90 days) and consulting with business associates.

There is little doubt that the ability to travel without going through the visa process—application, interview, security check—greatly facilitates a visit to the United States for those eligible. The eligible countries include mainly the European Union nations plus Australia, New Zealand, South Korea, Singapore, and Taiwan. Advocates of reforming visa policy make a convincing argument that expanding the program to other countries would increase U.S. security. Edward Alden and Liam Schwartz of the Council on Foreign Relations suggest just that in a 2012 paper on modernizing the U.S. visa system. They note that travelers under the VWP are still subject to the Electronic System for Travel Authorization (ESTA), a security screening system that vets individuals planning to come to the United States with the same intelligence information that is used in visa screening. Security would be enhanced rather than diminished by expanding the VWP, they argue, because governments of the countries that participate in the program are required to share security and criminal intelligence information with the U.S. government.

Visa-free travel to conferences and for short-term professional visits by scientific and engineering researchers from the 37 countries in the VWP makes collaboration with U.S. colleagues much easier than it would otherwise be. And it would undoubtedly be welcomed by those in countries that are likely candidates for admission to the program. Complicating matters, however, is legislation that requires the Department of Homeland Security (DHS) to implement a biometric exit system (i.e., one based on taking fingerprints of visitors as they leave the country and matching them with those taken on entry) before it can expand the VWP. The federal government currently has a “biographic” system that matches names on outbound manifests provided by the airlines with passport information obtained by U.S. Customs and Border Protection on a person’s entry. A biometric exit system would provide enhanced security, but the several-billion-dollar cost and the logistics of implementing a control system pose formidable barriers. Congress and the Executive Branch have engaged in a tug of war over the planning and development of such a system for over a decade. (The Intelligence Reform and Terrorism Prevention Act of 2004 called for DHS to develop plans for accelerating implementation of such a system, but the department has missed several deadlines and stated in mid-2013 that it was intending to incorporate these plans in its budget for fiscal year 2016.) Should DHS get to the point of actually implementing a biometric exit system, it could pave the way for expanding the VWP. In the meantime, a better solution would be to decouple the two initiatives. S.744 does just that by authorizing the Secretary of Homeland Security to designate any country as a member of the VWP so long as it meets certain conditions. Expansion of the VWP is also included in the House immigration reform bill known as the JOLT Act. These are hopeful signs, although the comprehensive immigration reform logjam continues to block further action.

Action in several other areas can also help to improve the visa process. The federal government, for example, can encourage consulates to use their recently expanded authority to waive personal interviews. In response to an executive order issued by President Obama in January 2012, the State Department initiated a two-year visa interview waiver pilot program. Under the program, visa-processing posts in 28 countries were authorized to waive interviews with certain visa applicants, especially repeat visitors in a number of visa classes. Brazil and China, which have large numbers of visa applicants, were among the initial countries involved in this experimental program. U.S. consulates in India joined the program a few months later. The initiative was welcomed in these countries and regarded as successful by the Departments of State and Homeland Security. The program was made permanent in January 2014. Currently, consular officers can waive interviews for applicants for renewal of any nonimmigrant visa as long as they are applying for a visa in the same classification within 12 months of the expiration of the initial visa (48 months in some visa classes).

Although the interview waiver program was not specifically aimed at scientists, and statistics regarding their participation in the program are not available, it seems likely that they were and will continue to be among the beneficiaries now that the program has been made permanent. The initiative employs a risk-based approach, focusing more attention on individuals who are judged to be high-risk travelers and less on low-risk persons. Since it allows for considerable discretion on the part of the consulate, its ultimate value to the scientific and educational communities will depend on how that discretion is used.

The government can also step up its efforts to increase visa-processing capacity. In response to the 2012 executive order, the State Department and DHS launched an initiative to increase visa-processing capacity in high-demand countries and reduce interview wait times. In a report issued in August 2012 on progress during the first 180 days of activity under the initiative, the two agencies projected that by the end of 2012, “State will have created 50 new visa adjudicator positions in China and 60 in Brazil.” Furthermore, the State Department deployed 220 consular officers to Brazil on temporary duty and 48 to China. The consulates also increased working hours, and in Brazil they remained open on occasional Saturdays and holidays. These moves resulted in sharp decreases in processing time.

These initiatives have been bright spots in an otherwise difficult budget environment for the State Department. That budget environment, exacerbated by sequestration, increases the difficulty of making these gains permanent and extending them to consular posts in other countries with high visa demand. This is a relatively easy area to neglect, but one in which modest investments, especially in personnel and training, could significantly improve the face that the United States presents to the world, including the global scientific, engineering, and educational communities.

Looking at U.S. universities and laboratories today, one might well ask whether there really is a problem with the nation’s visa and immigration policies. After all, the diversity of nationalities among scientists, engineers, and students in U.S. scientific institutions is striking. At the National Institutes of Health, over 60% of the approximately 4,000 postdocs are neither U.S. citizens nor permanent residents. They come from China, India, Korea, and Japan, as well as Europe and many other countries around the world. The Massachusetts Institute of Technology had over 3,100 international students in 2013, about 85% of them graduate students, representing some 90 countries. The numbers are similar at Stanford, Berkeley, and other top research universities.

So how serious are the obstacles for international scientists and students who really want to come to the United States? Does the system really need to be streamlined? How urgent are the fixes that I have proposed here?

The answers to these questions lie not in the present and within the United States, but in the future and in the initiatives of the nations with which we compete and cooperate. Whereas the U.S. system creates barriers, other countries, many with R&D expenditures rising much more rapidly than in the United States, are creating incentives to attract talented scientists to their universities and laboratories. China, India, Korea, and other countries with substantial scientific diasporas have developed programs to encourage engagement with their expatriate scientists and potentially draw them back home.

In the long run, the reputations of U.S. institutions alone will not be sufficient to maintain the nation’s current advantage. The decline in enrollments among international students after 9/11 shows how visa delays and immigration restrictions can affect students and researchers. As long as the United States continues to make international travel difficult for promising young scholars such as Alena Shkumatava, it is handicapping the future of U.S. science and the participation of U.S. researchers in international collaborations. Streamlining visa and immigration policies can make a vital contribution to ensuring the continued preeminence of U.S. science and technology in a globalized world. We should not allow that preeminence to be held hostage to the nation’s inability to enact comprehensive immigration reform.

Albert H. Teich (ateich@gwu.edu) is research professor of science, technology, and international affairs at the Elliott School of International Affairs at George Washington University, Washington, DC. Notes and acknowledgements are available at http://alteich.com/visas/Notes.htm.

Forum

Climate deadlock

In “Breaking the Climate Deadlock” (Issues, Summer 2014), David Garman, Kerry Emanuel, and Bruce Phillips present a thoughtful proposal for greatly expanded public- and private-sector R&D aimed at reducing the costs, increasing the reliability, managing the risks, and expanding the potential to rapidly scale up deployment of a broad suite of low- and zero-carbon energy technologies, from renewables to advanced nuclear reactor technologies to carbon capture and storage. They also encourage dedicated funding of research into potential geoengineering technologies for forced cooling of the climate system. Such an “all-of-the-above” investment strategy, they say, might be accepted across the political spectrum as a pragmatic hedge against uncertain and potentially severe climate risks and hence be not only sensible but feasible to achieve in our nation’s highly polarized climate policy environment.

It is a strong proposal as far as it goes. Even as the costs of wind and solar photovoltaics are declining, and conservative states such as Texas and Kansas are embracing renewable energy technologies and policies, greater investment in research aimed at expanding the portfolio of commercially feasible and socially acceptable low-carbon electricity is needed to accelerate the transition to a fully decarbonized energy economy. And managing the risks of a warming planet requires contingency planning for climate emergencies. As challenging as it may be to contemplate the deployment of most currently proposed geoengineering schemes, our nation has a responsibility to better understand their technical and policy risks and prospects should they ultimately need to be considered.

But it does not go far enough. Garman et al.’s focus on R&D aimed primarily at driving down the “cost premium” of low-carbon energy technologies relative to fossil fuels neglects the practical need and opportunity to also incorporate into the political calculus the economic risks and costs of unmitigated climate change. Yet these risks and costs are substantial and are becoming increasingly apparent to local civic and political leaders in red and blue states alike as they are faced with more extensive storm surges and coastal flooding, more frequent and severe episodes of extreme summer heat, and other climate-related damages.

The growing state and local experience of this “cost of inaction premium” for continued reliance on fossil fuels is now running in parallel with the experience of economic benefits resulting from renewable electricity standards and energy efficiency standards in several red states. Together, these state and local experiences may do as much as or more than expanding essential investments in low-carbon energy R&D to break the climate deadlock and rebuild bipartisan support for sensible federal climate policies.

PETER C. FRUMHOFF
Director of Science and Policy Union of Concerned Scientists Cambridge, Massachusetts
pfrumhoff@ucsusa.org

 

We need a new era of environmentalism to overcome the polarization surrounding climate change issues, one that takes conservative ideas and concerns seriously and ultimately engages ideological conservatives as full partners in efforts to reduce carbon emissions.

Having recently founded a conservative animal and environmental advocacy group called Earth Stewardship Alliance (esalliance.org), I applaud “Breaking the Climate Deadlock.” The authors describe a compelling policy framework for expanding low-carbon technology options in a way that maintains flexibility to manage uncertainties.

The article also demonstrates the most effective approach to begin building conservative support for climate policies in general. The basic elements are to respect conservative concerns about climate science and to promote solutions that are consistent with conservative principles. Although many climate policy advocates see conservatives as a lost cause, relatively little effort has been made to try this approach.

Thoughtful conservatives generally agree that carbon emissions from human activities are increasing global carbon dioxide levels, but they question how serious the effects will be. These conservatives are often criticized for denying the science even though, as noted by “Breaking the Climate Deadlock,” there is considerable scientific uncertainty surrounding the potential effects. This article, however, addresses this legitimate conservative skepticism by describing how a proper risk assessment justifies action to avoid potentially catastrophic impacts even if there is significant uncertainty.

The major climate policies that have been advanced thus far in the United States are also contrary to conservative principles. All of the cap-and-trade bills that Congress seriously considered during the 2000s would have given away emissions allowances, making the legislation equivalent to a tax increase. The rise in prices caused by a cap-and-trade program’s requirement to obtain emissions allowances is comparable to a tax. Giving away the allowances foregoes revenue that could be used to reduce other taxes and thus offset the cap-and-trade tax. Many climate policy advocates wanted the allowances to be auctioned, but that approach could not gain traction in Congress, because the free allowances were needed to secure business support.

After the failure of cap-and-trade, efforts turned to issuing Environmental Protection Agency (EPA) regulations that reduce greenhouse gas emissions. The EPA’s legal authority for the regulations is justified by some very general provisions of the Clean Air Act. Although the courts will probably uphold many of these regulations, the policy decisions involved are too big to be properly made by the administration without more explicit congressional authorization.

Despite the polarization surrounding climate change, there continues to be support in the conservative intelligentsia for carbon policies consistent with their principles: primarily ramping up investment in low-carbon technology research, development, and demonstration and a “revenue-neutral” carbon tax in which the increased revenues are offset by cutting other taxes.

Earth Stewardship Alliance believes the best way to build strong conservative support for these policies is by making the moral case for carbon emissions reductions, emphasizing our obligation to be good stewards. We are hopeful that conservatives will ultimately decide it is the right thing to do.

JIM PRESSWOOD
Executive Director Earth Stewardship Alliance Arlington, Virginia
info@esalliance.org

 

David Garman, Kerry Emanuel, and Bruce Phillips lay out a convincing case for the development of real low-carbon technology options. This is not just a theoretical strategy. There are some real opportunities before us right now to do this, ones that may well appeal across the political spectrum:

The newly formed National Enhanced Oil Recovery Initiative (a coalition of environmental groups, utilities, labor, oil companies, coal companies, and environmental and utility regulators) has proposed a way to bring carbon capture and storage projects to scale, spurring in-use innovation and driving costs down. Carbon dioxide captured from power plants has a value—as much as $40 per ton in the Gulf region—because it can be used to recover more oil from existing fields. Capturing carbon dioxide, however, costs about $80 per ton. A tax credit that would cover the difference could spur a substantial number of innovative projects. Although oil recovery is not the long-term plan for carbon capture, it will pay for much capital investment and the early innovation that follows in its wake. The initiative’s analysis suggests that the net impact on the U.S. Treasury is likely to be neutral, because tax revenue from domestic oil that displaces imports can equal or exceed the cost of the tax credit.

There are dozens of U.S.-originated designs for advanced nuclear power reactors that could dramatically improve safety, lower costs, and shrink wastes while making them less harmful. The cost of pushing these designs forward to demonstration is modest, likely in the range of $1 billion to $2 billion per year, or about half a percent of the nation’s electric bill. The United States remains the world’s center of nuclear innovation, but many companies, frustrated by the lack of U.S. government support, are looking to demonstrate their first-of-a-kind designs in Russia and China. This is a growth-generating industry that the United States can recapture.

The production tax credit for conventional wind power has expired, due in part to criticisms that the tax credit was simply subsidizing current technology that has reached the point of diminishing cost reductions. But we can replace that policy with a focused set of incentives for truly innovative wind energy designs that increase capacity and provide grid support, thus enhancing the value of wind energy and bringing it closer to market parity.

Gridlock over climate science needn’t prevent practical movement forward to hedge our risks. A time-limited set of policies such as those above would drive low-carbon technology closer to parity with conventional coal and gas, not subsidize above-market technologies indefinitely. Garman and his colleagues have offered an important bridge-building concept; it is time for policymakers to take notice and act.

ARMOND COHEN
Executive Director Clean Air Task Force Boston, Massachusetts
armond@catf.us

Archives

Twister

To create his self-portrait, Twister, Dan Collins, a professor of intermedia in the Herberger Institute School of Art at Arizona State University (ASU), spun on a turntable while being digitally scanned. The data were recorded in 1995, but he had to wait more than five years before he could find a computer with the ability to do what he wanted. He used a customized computer to generate a model based on the data. Collins initially produced a high-density foam prototype of the sculpture, and later created an edition of three bonded marble versions of the work, one of which is in the collection of ASU’s Art Museum.

DAN COLLINS, Twister, 3D laser-scanned figure, Castable bonded marble, 84′′ high, 1995–2012.

Natural Histories

400 Years of Scientific Illustration from the Museum’s Library

In a time of the internet, social media networks, and smart phones, when miraculous devices demand our attention with beeps, buzzes, and spiffy animations, it’s hard to imagine a time when something as quiet and unassuming as a book illustration was considered cutting-edge technology. Yet, since the early 1500s, illustration has been essential to scientists for sharing their work with colleagues and with the public.

Young Hippo. This image from the Zoological Society of London provides two views of a young hippo in Egypt before being transported to the London Zoo. Joseph Wolf (1820–1899) based the image on a sketch made by the British Consul on site in Cairo.

Rhino by Dürer. This depiction of a rhino in Historia animalium, based on a woodcut by German artist Albrecht Dürer, inaccurately features ornate armor, scaly skin, and odd protrusions.

Mandrill. This mandrill (Mandrillus sphinx), with its delicate hands, cheerful expression, and almost upright posture, seems oddly human. While many images in Johann Christian Daniel von Schreber’s Mammals Illustrated (1774–1846) were quite accurate, those of primates generally were not.

Darwin’s Rhea. John Gould drew this image of a Rhea pennata, a flightless bird native to South America, for The zoology of the voyage of H.M.S. Beagle (1839–1843), a five-volume work edited by Charles Darwin. The specimens Darwin collected during his travels on H.M.S. Beagle became a foundation for his theory of evolution by natural selection.

A variety of printing techniques, ranging from woodcuts to engraving to lithography, proved highly effective for spreading new knowledge about nature and human culture to a growing audience. Illustrated books allowed the lay public to share in the excitement of discoveries, from Antarctica to the Amazon, from the largest life forms to the microscopic.

Two-toed Sloth. The four-volume Thesaurus (after Thesaurus of animal specimens) of Albertus Seba (1665–1736) illustrated the Dutch apothecary’s enormous collection of animal and plant specimens, amassed over the years. Using preserved specimens, Seba’s artists could depict anatomy accurately—but not behavior. For example, this two-toed sloth is shown climbing upright, even though in nature, sloths hang upside down.

The early impact of illustration in research, education, and communication arguably formed the foundation for how current illustration and imaging techniques are utilized today. Now scientists have a vast number of imaging tools that are harnessed in a variety of ways: infrared photography, scanning electron microscopes, computed tomography scanners and more. But there is still a role for illustration in making the invisible visible. How else can we depict extinct species such as dinosaurs?

Egg Collection. In his major encyclopedia of nature, Allgemeine Naturgeschichte für alle Stände (A general natural history for everyone), German naturalist Lorenz Oken (1779–1851) grouped animals based not on science but on philosophy. Nevertheless, his encyclopedia proved to be a popular and enduring work. Here Oken illustrates the variation in egg color and markings found among water birds.

Paper Nautilus. Italian naturalist Giuseppe Saverio Poli (1746–1825) is considered the father of malacology—the study of mollusks. In his landmark work Testacea utriusque Siciliae…(Shelled animals of the Two Sicilies…), Poli was the first to categorize mollusks by their internal structure, and not just their shells, as seen in his detailed illustration of a female paper nautilus (Argonauta argo).

Octopus. From Conrad Gessner’s Historia animalium (1551–1558), this octopus engraving is a remarkably good likeness, except for the round rather than slit-shaped pupils, a detail indicating that the artist did not draw from a live specimen.

Siphonophores. German biologist Ernst Haeckel illustrated and described thousands of deep-sea specimens collected during the 1873–1876 H.M.S. Challenger expedition and used many of those images to create Kunstformen der Natur (Art forms of nature). Haeckel used a microscope to capture the intricate structure of these siphonophores—colonies of tiny, tightly packed and highly specialized organisms—that look (and sting!) like sea jellies.

Angry Puffer Fish and Others. Louis Renard’s artists embellished their work to satisfy Europeans’ thirst for the unusual. Some illustrations in Poissons, écrevisses et crabes, de diverses couleurs et figures extraordinaires…, like this one, include fish with imaginative colors and patterns and strange, un-fishlike expressions.

Illustration is also used to clearly represent complex structures, color gradations, and other essential details. The nearly forgotten books stored away quietly in libraries contain the ancestral ideas of current practices and methodologies of illustration.

“In the days before photography and printing, original art was the only way to capture the likeness of organisms, people, and places, and therefore the only way to share this information with others,” said Tom Baione, the Harold Boeschenstein Director of the Department of Library Services at the American Museum of Natural History. “Printed reproductions of art about natural history enabled many who’d never seen an octopus, for example, to try to begin to understand what an octopus looked like and how its unusual features might function.”

Frog Dissection. A female green frog (Pelophylax kl. esculentus) with egg masses is shown in dissection above a view of the frog’s skeleton in the book Historia naturalis ranarum nostratium…(Natural history of the native frogs…) from 1758. Shadows and dissecting pins add to the realism.

Pineapple with Caterpillar. In Metamorphosis insectorum Surinamensium…(1719), German naturalist and artist Maria Sibylla Merian documented the flora and fauna she encountered during her two-year trip to Surinam, in South America, with her daughter. Here she creatively depicts a pineapple hosting a butterfly and a red-winged insect, both shown in various stages of life.

The impact and appeal of printing technologies are at the heart of the 2012 book edited by Tom Baione, Natural Histories: Extraordinary Rare Book Selections from the American Museum of Natural History Library. Inspired by the book, the current exhibit at the museum, Natural Histories: 400 Years of Scientific Illustration from the Museum’s Library, explores the integral role illustration has played in scientific discovery through 50 large-format reproductions from seminal holdings in the Museum Library’s Rare Book collection. The exhibition is on view through October 12, 2014, at the American Museum of Natural History in New York City. All images © AMNH/D. Finnin.

Tasmanian Tiger. English ornithologist and taxidermist John Gould’s images and descriptions for the three-volume work The mammals of Australia (1863) remain an invaluable record of Australian animals that became increasingly rare with European settlement. The “Tasmanian tiger” pictured here was actually a thylacine (Thylacinus cynocephalus), the world’s largest meat-eating marsupial until it went extinct in 1936.

JUSTINE SEREBRIN, Gateway, Digital painting, 40 × 25 inches, 2013.

What Fish Oil Pills Are Hiding

DAVID SCHLEIFER

ALISON FAIRBROTHER

One Woman’s Quest to Save the Chesapeake Bay from the Dietary Supplement Industry

Julie Vanderslice thought fish were disgusting. She didn’t like to look at them. She didn’t like to smell them. Julie lived with her mother, Pat, on Cobb Island, a small Maryland community an hour south and a world away from Washington, D.C. Her neighbors practically lived on their boats in warm weather, fishing for stripers in the Chesapeake Bay or gizzard shad in shallow creeks shaded by sycamores. Julie had grown up on five acres of woodland in Accokeek, Maryland, across the Potomac River from Mount Vernon, George Washington’s plantation home. The Potomac River wetlands in Piscataway Park were a five-minute bike ride away, on land the federal government had kept wild to preserve the view from Washington’s estate. Her four brothers and three sisters kept chickens, guinea pigs, dogs, cats, and a tame raccoon. They went fishing in the Bay as often as they could. But Julie preferred interacting with the natural world from inside, on a comfortable couch in her living room, where she read with the windows open so she could catch the briny smell of the Bay. “No books about anything slimy or smelly, thank you!” she told her family at holidays.

So it was with some playfulness that Pat’s friend Ray showed up on Julie’s doorstep one afternoon in the summer of 2010 to present her with a book called The Most Important Fish in the Sea. Ray was an avid recreational fisherman, who lived ten miles up the coast on one of the countless tiny inlets of the Chesapeake. The Chesapeake Bay has 11,684 miles of shoreline—more than the entire west coast of the United States; the watershed comprises 64,000 square miles.

“It’s about menhaden, small forage fish that grow up in the Chesapeake and migrate along the Atlantic coast. You’ll love it,” he told her, chuckling as he handed over the book. “But seriously, maybe you’ll be moved by it,” he said, his tone changing. “It says that when John Smith came here in the seventeenth century, there were so many menhaden in the Bay that he could catch them with a frying pan.”

Julie shuddered at the image of so many slippery little fish.

“Now the menhaden are vanishing,” Ray said. “I want you to read this book. I want Delegate Murphy to read this book. And I want the two of you to do something about it.”

Julie was the district liaison for Delegate Peter Murphy, a Democrat representing Charles County in the Maryland House of Delegates. She had started working for Murphy in February 2009 as a photographic intern, tasked with documenting his speeches and meetings with constituents. In her early fifties, Julie was older than the average intern. For ten years, she had sold women’s cosmetics and men’s fragrances at a Washington, D.C. branch of Woodward & Lothrop, until the legendary southern department store chain liquidated in 1995. She had moved to Texas to take a job at another department store in Houston, but it hadn’t felt right. Julie was a Marylander. She needed to live by the Chesapeake Bay. Working in local politics reconnected her to her community, and it wasn’t long before Murphy asked her to join his staff full-time. Now, she worked in his office in La Plata, the county seat, and attended events in the delegate’s stead—like the dedication of a new volunteer firehouse on Cobb Island or the La Plata Warriors high school softball games.

Julie picked up the menhaden book one summer afternoon, pretty sure she wouldn’t make it past the first chapter. She examined the cover, which featured a photo of a small silvery fish with a wide, gaping mouth and a distinctive circular mark behind its eye. “This is the most important fish in the sea?” Julie muttered to herself. She settled back into her sofa and sighed. Her mother was out at a church event, probably chattering away with Ray. Connected to the mainland by a narrow steel-girder bridge, Cobb Island was a tiny spit of land less than a mile long where the Potomac River meets the Wicomico. The island’s population was barely over 1,100. What else was there to do? She turned to the first page and began to read.

For the next few days, The Most Important Fish in the Sea followed Julie wherever she went. She read it out on the porch while listening to the gently rolling waters of Neale Sound, which separated Cobb Island from the mainland. She read it in bed, struggling to keep her eyes open so she could fit in just one more chapter. She finished the book one afternoon just as Pat came through the screen door, arms laden with a bag full of groceries. Pat found Julie standing in the middle of the living room, angrily clutching the book. Pat was dumbfounded. “You don’t like to pick crab meat out of a crab!” she said. “You wear water-shoes at the beach! Here you are all worked up over menhaden!”

Menhaden are a critical link in the Atlantic food chain, and the Chesapeake Bay is essential to the fish’s lifecycle. Menhaden eggs hatch year round in the open ocean, and the young fish swim into the Chesapeake to grow in the warm, brackish waters. Also known colloquially as bunker, pogies, or alewives, they are the staple food for many commercially important predator fish, including striped bass, bluefish, and weakfish, which are harvested along the coast in a dozen different states, as well as for sharks, dolphins, and blue whales. Ospreys, loons, and seagulls scoop menhaden from the top of the water column, where the fish ball together in tight rust-colored schools. As schools of menhaden swim, they eat tiny plankton and algae. As a result of their diet, menhaden are full of nutrient-rich oils. They are so oily that when ravaged by a school of bluefish, for example, menhaden will leave a sheen of oil in their wake.

Wayne Levin

Imagine seeing what you think is a coral reef, only to realize that there is movement within the shape and that it is actually a massive school of fish. That is what happened to Wayne Levin as he swam in Hawaii’s Kealakekua Bay on his way to photograph dolphins. The fish he encountered were akule, the Hawaiian name for big-eyed scad. In the years that followed he developed a fascination with the beauty and synchronicity of these schools of akule, and he spent a decade capturing them in thousands of photographs.

Akule have been bountiful in Hawaii for centuries. Easy to see when gathering in the shallows, the dense schools form patterns, like unfurling scrolls, then suddenly contract into a vortex before unfurling again and moving on. In his introduction to Akule (2010, Editions Limited), a collection of Levin’s photos, Thomas Farber describes a photo session: “What transpired was a dance, dialogue, or courtship of and with the akule….Sometimes, for instance, he faced away from them, then slowly turned, and instead of moving away the school would…come towards him. Or, as he advanced, the school would open, forming a tunnel for him. Entering, he’d be engulfed in thousands of fish.”

Wayne Levin has photographed numerous aspects of the underwater world: sea life, surfers, canoe paddlers, divers, swimmers, shipwrecks, seascapes, and aquariums. After a decade of photographing fish schools, he turned from sea to sky, and flocks of birds have been his recent subject. His photographs are in the collections of the Museum of Modern Art, New York; the Museum of Photographic Arts, San Diego; The Honolulu Museum of Art; the Hawaii State Foundation on Culture and the Arts, Honolulu; and the Mariners’ Museum, Newport News, Virginia. His work has been published in Aperture, American Photographer, Camera Arts, Day in the Life of Hawaii, Photo Japan, and most recently LensWork. His books include Through a Liquid Mirror (1997, Editions Limited), and Other Oceans (2001, University of Hawaii Press). Visit his website at waynelevinimages.com.

Alana Quinn

WAYNE LEVIN, Column of Akule, 2000.

WAYNE LEVIN, Filming Akule, 2006.

For hundreds of years, people living along the Atlantic Coast caught menhaden for their oils. Some scholars say the word menhaden likely derives from an Algonquian word for fertilizer. Pre-colonial Native Americans buried whole menhaden in their cornfields to nourish their crops. They may have taught the Pilgrims to do so, too.

The colonists took things a step further. Beginning in the eighteenth century, factories along the East Coast specialized in cooking menhaden in giant vats to separate their nutrient-rich oil from their protein—the former for use as fertilizer and the latter for animal feed. Dozens of menhaden “reduction” factories once dotted the shoreline from Maine to Florida, belching a foul, fishy smell into the air.

Until the middle of the twentieth century, menhaden fishermen hauled thousands of pounds of net by hand from small boats, coordinating their movements with call-and-response songs derived from African-American spirituals. But everything changed in the 1950s with the introduction of hydraulic vacuum pumps, which enabled many millions of menhaden to be sucked out of the ocean each day—so many fish that companies had to purchase carrier ships with giant holds below deck to ferry the menhaden to shore. According to National Oceanic and Atmospheric Administration records, in the past sixty years, the reduction industry has fished 47 billion pounds of menhaden out of the Atlantic and 70 billion pounds out of the Gulf of Mexico.

Reduction factories that couldn’t keep up went out of business, eliminating the factory noises and fishy smells, much to the relief of the growing number of wealthy homeowners purchasing seaside homes. By 2006, every last company had been bought out, consolidated, or pushed out of business—except for a single conglomerate called Omega Protein, which operates a factory in Reedville, a tiny Virginia town halfway up the length of the Chesapeake Bay. A former petroleum company headquartered in Houston and once owned by the Bush family, Omega Protein continues to sell protein-rich fishmeal for aquaculture, animal feed for factory farms, menhaden oil for fertilizer, and purified menhaden oil, which is full of omega-3 fatty acids, as a nutritional supplement. For the majority of the last thirty years, the Reedville port has landed more fish than any other port in the continental United States by volume.

The company also owns two factories on the shores of the Gulf of Mexico, which grind up and process Gulf menhaden, the Atlantic menhaden’s faster-growing cousin. But Hurricane Katrina in 2005, followed by the 2010 Deepwater Horizon oil disaster in the Gulf of Mexico, forced Omega Protein to rely increasingly on Atlantic menhaden to make up for their damaged factories and shortened fishing seasons in the Gulf—much to the dismay of fishermen and residents along the Atlantic coast.

These days, on a normal morning in Reedville, Virginia, a spotter pilot climbs into his plane just after sunrise to scour the Chesapeake and Atlantic coastal waters, searching for reddish-brown splotches of menhaden. When he spots them, the pilot signals to ship captains, who surround the school with a net, draw it close, and vacuum the entire school into the ship’s hold.

Julie Vanderslice had never seen the menhaden boats or spotter planes, but she was horrified by the description of the ocean carnage documented in The Most Important Fish in the Sea. The author, H. Bruce Franklin, is an acclaimed scholar of American history and culture at Rutgers University, who has written treatises on everything from Herman Melville to the Vietnam War. But he is also a former deckhand who fishes several times a week in Raritan Bay, between New Jersey and Staten Island.

Julie was riveted by a passage in which Franklin describes going fishing one day for weakfish in his neighbor’s boat. Weakfish are long, floppy fish that feed lower in the water column than bluefish, which thrash about on top. Franklin’s neighbor angled his boat toward a chaotic flock of gulls screaming and pounding the air with their wings. The birds were diving into the water and fighting off muscular bluefish to be the first to reach a school of menhaden. The two men had a feeling that weakfish would be lurking below the school of menhaden, attempting to pick off fish from the bottom. But before Franklin and his neighbor could reach the school, one of Omega Protein’s ships sped past, set a purse seine around the menhaden, and used a vacuum pump to suck up hundreds of thousands of fish and all the bluefish and weakfish that had been feeding on them. For days afterward, Franklin observed, there were hardly any fish at all in Raritan Bay.

That moment compelled Franklin to uncover the damage Omega Protein was doing up and down the coast. The company’s annual harvest of between a quarter and a half billion pounds of menhaden had effects far beyond depleting the once-plentiful schools of little fish. Scientists and environmental advocates contended that by vacuuming up menhaden for fishmeal and fertilizer, Omega Protein was pulling the linchpin out of the Atlantic ecosystem: starving predator fish, marine mammals, and birds; suffocating sea plants on the ocean floor; and pushing an entire ocean to the brink of collapse. Despite being published by a small environmental press, The Most Important Fish in the Sea was lauded in the Washington Post, The Philadelphia Inquirer, The Baltimore Sun, and the journal Science. The New York Times discussed it on its opinion pages, citing dead zones in the Chesapeake Bay and Long Island Sound where too few menhaden were left to filter algae out of the water.

After finishing the book, Julie couldn’t get menhaden out of her head. She had to get the book into Delegate Murphy’s hands. She bought a second copy, prepared a two-page summary, and plotted her strategy.

Julie didn’t see the delegate every day because she worked in his district office rather than in Annapolis, home of the country’s oldest state capitol in continuous legislative use. But that summer, Delegate Murphy was campaigning for re-election and was often closer to home. He was scheduled to make an appearance at a local farmers market in Waldorf a few weeks after Julie had finished the book. Waldorf was at the northern edge of Murphy’s district, close enough to Washington that the weekly farmers market would be crowded with an evening rush of commuters on their way home from D.C. But in the late afternoon, the delegate’s staff, decked out in yellow Peter Murphy T-shirts, nearly outnumbered the shoppers browsing for flowers and honey.

Delegate Murphy was at ease chatting with neighbors and shaking hands with constituents. He was tall and thin, with salt-and-pepper hair and lively eyes. Julie recognized his trademark campaign uniform: a blue polo shirt tucked neatly into slacks. He had been a science teacher before entering state politics, and he had a deep, calming voice. As a grandfather to two young children, he knew how to captivate a skeptical audience with a good story. Julie recalled the day she first met him, at a sparsely attended town hall meeting at Piccowaxen Middle School. He struck her immediately as a genuine, thoughtful man on the right side of the issues she cared about. Several months later, she heard that Delegate Murphy was speaking at the Democratic Club on Cobb Island and made a point to attend. Afterward, she waited for him in the receiving line. When it was her turn to speak, Julie asked if he was hiring.

WAYNE LEVIN, Ring of Akule, 2000.

Just a few short years later, Julie felt comfortable enough with Delegate Murphy to propose a mission. Mustering her courage as a band warmed up at the other end of the market, Julie seized her moment. “Delegate Murphy, you have to read this!” she said, pushing the book into his hands. “There’s this fish called menhaden that you’ve never heard of. One company in Virginia is vacuuming millions of them out of the Chesapeake Bay, taking menhaden out of the mouths of striped bass and osprey and bluefish and dolphins and all the other fish and animals that rely on them for nutrients. This is why recreational fishermen are always complaining about how hungry the striped bass are! This is why our Bay ecosystem is so unhinged! One company is taking away all our menhaden,” she declared. “We have to stop them.”

Delegate Murphy peered at her with a trace of a smile. “I’ll read it, Julie,” he said.

For months afterward, Julie stayed late at the office, reading everything she could find about menhaden. She learned that every state along the Atlantic Coast had banned menhaden fishing in state waters—except Virginia, where Omega Protein’s Reedville plant was based, and North Carolina, where a reduction factory had recently closed. (North Carolina would ban menhaden reduction fishing in 2012.) The largest slice of Omega Protein’s catch came from Virginia’s ocean waters and from the state’s portion of the Chesapeake Bay, preventing those fish from swimming north into Maryland’s section of the Bay and south into the Atlantic to populate the shores of fourteen other states along the coast.

Beyond the Chesapeake, Omega Protein’s Virginia-based fleet could pull as many menhaden as it wanted from federal waters, designated as everything between three and two hundred miles offshore, from Maine to Florida. Virginia was a voting member of the Atlantic States Marine Fisheries Commission (ASMFC), the agency that governs East Coast fisheries. But the ASMFC had never taken any steps to limit the amount of fish Omega Protein could lawfully catch along the Atlantic coast. Virginia’s legislators happened to be flush with campaign contributions from Omega Protein.

Julie clicked through articles on fishermen’s forums and coastal newspapers from every eastern state. She read testimony from citizens who described how the decimation of the menhaden population in the Chesapeake and in federal waters had affected the entire Atlantic seaboard. Bird watchers claimed that seabirds were suffering from lack of menhaden. Recreational fishermen cited scrawny bass and bluefish, and wondered whether they were lacking protein-packed menhaden meals. Biologists cut open the stomachs of gamefish and found fewer and fewer menhaden inside. Whale watchers drove their boats farther out to sea in search of blue whales, which used to breach near the shore, surfacing open-mouthed upon oily schools of menhaden. The dead zones in the Chesapeake Bay grew larger, and some environmentalists connected the dots: menhaden were no longer plentiful enough to filter the water as they had in the past. In 2010, the ASMFC estimated that the menhaden population had declined to a record low, and was nearly 90 percent smaller than it had been twenty-five years earlier.

Of course, Omega Protein had its own experts on staff, whose estimates better suited the company’s business interests. At a public hearing in Virginia about the menhaden fishery, Omega Protein spotter pilot Cecil Dameron said, “I’ve flown 42,000 miles looking at menhaden…. I’m here to tell you that the menhaden stock is in better shape than it was twenty years ago, thirty years ago. There’s more fish.”

One humid evening at the end of August, Delegate Murphy held a pre-election fundraiser and rally in his backyard, a grassy spot that sloped down toward the Potomac River. Former Senator Paul Sarbanes stopped by, and campaign staffers brought homemade noodle salads, cheeses, and a country ham. With the election less than two months away, the staff was working overtime, but they had hit their fundraising goal for the day. At the end of the event, as constituents headed to their cars, Delegate Murphy found Julie sitting at one of the collapsible tables littered with used napkins and glasses of melting ice. Julie was accustomed to standing for hours when she worked in the department store, but there was something about fundraising that made her feel like putting her feet up.

WAYNE LEVIN, Circling Akule, 2000.

WAYNE LEVIN, Rainbow Runners Hunting Akule, 2001.

“Great work tonight, Peter,” she said, wearily raising her glass to him. Julie always called him Delegate Murphy in public. But between the two of them, at the end of a long summer afternoon, it was just Peter.

He toasted and sat down beside her. Campaign staffers were clearing wilted chrysanthemums from the tables and stripping off plastic tablecloths. Peter and Julie looked across the lawn at the blue-gray Potomac as the sun began to dip in the sky.

“Listen,” he said. “I think I have an idea for a bill we could do.”

“On?”

“On menhaden.”

Julie put her drink down so quickly it sloshed onto the sticky tablecloth. She leaned forward in disbelief.

“We’ve got to try doing something about this,” Peter said.

Julie put her hand to her mouth and shook her head. “Menhaden reduction fishing has been banned in Maryland since 1931. Omega Protein is in Virginia. How could a bill in Maryland affect fishing there?”

“We don’t have any control over Virginia’s fishing industry, but we can control what’s sold in our state. I got to thinking: what if we introduced a bill that would stop the sale of products made with menhaden?”

“Do you think it would ever pass?” Julie asked.

“If we did a bill, it would first come before the Environmental Matters Committee. I think the chair of the committee would be amenable. At least we can put it out there and let people talk about it.”

Julie was overcome. He didn’t have to tell her how unusual this was. The impetus for new legislation didn’t often come from former interns—or from their fishermen neighbors.

“But I don’t know if we can win this on the environmental issues alone. What about the sport fishermen? Can we get them to come to the hearing?” Peter asked.

Julie began jotting notes on a napkin.

“Can you find out how many tourism dollars Maryland is losing because the striped bass are going hungry?”

“I’ll get in touch with the sport fishermen’s association and see if I can look up the numbers. And I’ll try to find out which companies are distributing products made from menhaden. It’s mostly fertilizer and animal feed. A little of it goes into fish oil pills, too.”

“The funny thing is, my own doctor told me to take fish oil pills a few years ago,” Peter said. He patted Julie’s shoulder and stood to wave to the last of his constituents as they disappeared down the driveway.

Doctors like Peter’s wouldn’t have recommended fish oil if a Danish doctor named Jörn Dyerberg hadn’t taken a trip across Greenland in 1970. Dyerberg and his colleagues, Hans Olaf Bang and Aase Brondum Nielsen, traveled from village to village by dogsled, poking inhabitants with syringes. They were trying to figure out why the Inuit had such a low incidence of heart disease despite eating mostly seal meat and fatty fish. Dyerberg and his team concluded that Inuit blood had a remarkably high concentration of certain types of polyunsaturated fatty acids, a finding that turned heads in the scientific community when it was published in The Lancet in 1971. The researchers argued that those polyunsaturated fatty acids originated in the fish that the Inuit ate and hypothesized that the fatty acids protected against cardiovascular disease. Those polyunsaturated fatty acids eventually came to be known as omega-3 fatty acids.

Other therapeutic properties of fish oil had been recognized long before Dyerberg’s expedition. During World War I, Edward and May Mellanby, a husband-and-wife team of nutrition scientists, found that cod liver oil cured rickets, a crippling disease that had left generations of European and American children incapacitated, with soft bones, weak joints, and seizures. (The Mellanbys’ research was an improvement on the earlier work of Dr. Francis Glisson of Cambridge University, who, in 1650, advised that children with rickets should be tied up and hung from the ceiling to straighten their crooked limbs and improve their short statures.)

The Mellanbys tested their theories on animals instead of children. In their lab at King’s College for Women in London, in 1914, they raised a litter of puppies on nothing but oat porridge and watched each one come down with rickets. Several daily spoonfuls of cod liver oil reversed the rickets in a matter of weeks. Edward Mellanby was awarded a knighthood for their discovery. Although May had been an equal partner in the research, she wasn’t accorded the equivalent honor. A biochemist at the University of Wisconsin named Elmer McCollum read the Mellanbys’ research and isolated the anti-rachitic substance in the oil, which eventually came to be called vitamin D. McCollum had already isolated vitamin A in cod-liver oil, as well as vitamin B, which he later figured out was, in fact, a group of several substances. McCollum actually preferred the term “accessory food factor” rather than “vitamin.” He initially used letters instead of names because he hadn’t quite figured out the structures of the molecules he had isolated.

WAYNE LEVIN, School of Hellers Barracuda, 1999.

Soon, mothers were dosing their children daily with cod liver oil, a practice that continued for decades. Peter Murphy, who grew up in the 1950s, remembered being forced to swallow the stuff. The pale brown liquid stank like rotten fish, and he would struggle not to gag. Oil-filled capsules eventually supplanted the thick, foul liquid, and cheap menhaden replaced dwindling cod as the source of the oil. Meanwhile, following Dyerberg’s research into the Inuit diet, studies proliferated about the effects of omega-3 fatty acids—which originate in algae and travel up the food chain to forage fish like menhaden and on into the predator fish that eat them.

In 2002, the American Heart Association reviewed 119 of these studies and concluded that omega-3s could reduce the incidence of heart attack, stroke, and death in patients with heart disease. The AHA insisted omega-3s probably had no benefit for healthy people and suggested that eating fish, flax, walnuts, or other foods containing omega-3s was “preferable” to taking supplements. They warned that fish and fish oil pills could contain mercury, PCBs, dioxins, and other environmental contaminants. Nonetheless, they cautiously suggested that patients with heart disease “could consider supplements” in consultation with their doctors.

Americans did more than just “consider” supplements. In 2001, sales of fish oil pills were only $100 million. A 2009 Forbes story called fish oil “one supplement that works.” By 2011, sales topped $1.1 billion. Studies piled up suggesting that omega-3s and fish oil could do everything from reducing blood pressure and systemic inflammation to improving cognition, relieving depression, and even helping autistic children. Omega Protein was making most of its money turning menhaden into fertilizer and livestock feed for tasteless tilapia and factory-farmed chicken. But dietary supplements made for better public relations than animal feed. They put a friendlier, human face on the business, a face Peter and Julie were about to meet.

On a warm afternoon in March 2011, the twenty-four members of the Maryland House of Delegates Environmental Matters Committee filed into the legislature and took their seats. Delegate Murphy sat at the front of the room next to H. Bruce Franklin, author of The Most Important Fish in the Sea, who had traveled from New Jersey to testify at the hearing. Julie Vanderslice chose a spot in the packed gallery, with her neighbor Ray, who brought his copy of Franklin’s book in hopes of getting it signed. Julie brought her own copy, which she had bought already signed, but which she hoped Franklin would inscribe with a more personal message.

The Environmental Matters Committee was the first stop for Delegate Murphy’s legislation. The committee would either endorse the bill for review by the full House of Delegates, strike it down immediately, or send the bill limping back to Peter Murphy’s desk for further review—in which case, it might take years for menhaden to receive another audience with Maryland legislators. If the bill made it to the full House of Delegates, however, it might quickly be taken up for a vote before summer recess. If it passed the House, it was on to the Maryland Senate and, finally, to the Governor’s desk for signature before it became law. It could be voted down at any step along the way, and Julie knew there was a real chance the bill would never make it out of committee.

Julie had heard that Omega Protein’s lobbyists had been swarming the State House, taking dozens of meetings with delegates, and that the lobbyists had brought Omega Protein’s unionized fishermen with them. There was nothing like the threat of job loss to derail an environmental bill. Julie bit her thumb and surveyed the gallery.

To her right sat a few recreational anglers, conservationists, and scientists, whom Delegate Murphy's legislative aide had invited to the hearing, but Julie didn't see representatives from any of the region's environmental organizations, like the Chesapeake Bay Foundation or the League of Conservation Voters. Delegate Murphy had called Julie on a Sunday to ask her to urge those organizations to submit letters in support of the bill. That type of outreach was not part of her job as district liaison, but she was happy to do it. While the organizations did support the bill in writing, none of them sent anyone to the hearing in person.

Instead, the seats were filled with fishermen from Omega Protein, who wore matching yellow shirts and sat quietly while the vice president of their local union, in a pinstripe suit, leaned over a row of chairs and spoke to them in a hushed voice. At the far side of the room, Candy Thomson, outdoors reporter at The Baltimore Sun, began jotting notes into her pad.

“We’re now going to move to House Bill 1142,” said Democratic Delegate Maggie McIntosh, chair of the Environmental Matters Committee.

As Delegate Murphy spoke, Julie shifted nervously in her seat. The legislators looked confused. She thought she saw one of them riffle through the stack of papers in front of him, as if to remind himself what a menhaden was. Julie wondered how many had even bothered to read the bill before the hearing. But Delegate Murphy knew the talking points backward and forward: the menhaden reduction industry had taken 47 billion pounds of menhaden out of the Atlantic Ocean since 1950. Omega Protein landed more fish, pound for pound, than any other operation in the continental United States. There had never been a limit on the amount of menhaden Omega Protein could legally fish using the pumps that vacuumed entire schools from the sea.

“This bill simply comes out and says that we as a state will no longer participate, regardless of the reason, in the decline of this fish,” he told the committee.

After Peter Murphy finished his opening statement, he and Bruce Franklin began taking questions. One of the delegates held up a letter from the Virginia State Senate. “It says that this industry goes back to the nineteenth century and that the plant this bill targets has been in operation for nearly a hundred years and that some employees are fourth-generation menhaden harvesters.” As she spoke, she paged through letters from the union that represented some of those harvesters and a list of products made from menhaden. “I don’t understand why we would interrupt an industry that has this kind of history, that will affect so many people. In this economy, I think this is the wrong time to take such a drastic approach to this issue.”

Delegate Murphy nodded. “We in Maryland, and particularly in Southern Maryland, grew tobacco for a lot longer than a hundred years,” he said, “but when we realized it was the wrong crop, and that it was killing people, we switched over to other alternatives. And we’re doing that to this day. What we’re saying with this is there are alternatives. You don’t have to fish this fish. This particular company, which happens to be in Virginia, does have alternatives to produce the same products.” He continued, “We have a company here in Maryland that produces the same omega-3 proteins and vitamins, and it uses algae. It grows and harvests algae. And that’s a sustainable resource.”

33. WAYNE LEVIN, Amberjacks Under a School of Akule, 2007.

34. WAYNE LEVIN, Great Barracuda Surrounded by Akule, 2002.

Another delegate, his hands clasped in front of him, addressed the chamber. “I'm sympathetic to saving this resource and to managing this resource appropriately,” he said. But, he explained, he had been contacted by one of his constituents, a grandmother whose grandson Austin suffered from what she called “a rare life-threatening illness.” Glancing down at his laptop, he began reading a letter from this worried grandmother. “There is a bill due to be discussed regarding the menhaden fish. These fish supply the omega oils so vital to the Omegaven product that supplies children like Austin with necessary fats through their IV lines. Many children would have died due to liver failure from traditional soy-based fats had these omega-3s in these fish not been discovered. Can you please contact someone from the powers that be in the Maryland government and tell them not to put an end to the use of these fish and their life-sustaining oils.” The delegate closed his laptop. “This is a question from one of my constituents on a life-threatening issue. Can one of the experts address that issue?”

Bruce Franklin tried to explain that there are other sources of omega-3 besides menhaden. Delegate Murphy stepped in and offered to amend the bill to exempt pharmaceutical-grade products. But it was too late. Less than an hour after it had begun, the hearing was over. Delegate Murphy withdrew the bill for “summer study” rather than see it voted down—a likely indicator that the bill would not resurface before the legislature anytime soon, if ever. Delegate McIntosh turned to the next bill on the day’s schedule, and Omega Protein’s spokesperson and lead scientist left the gallery, smiling.

Julie turned to Ray, who was sitting beside her, angrily gripping his copy of The Most Important Fish in the Sea. She wanted to console him but wasn't sure how to begin. “Your fishermen buddies seem ready to riot in the streets,” Julie said, gesturing at the anglers who were huddled together as they walked stiffly toward the foyer. “That story about the kid who'd die without his menhaden oil—that came out of nowhere.”

She looked again at the text of the bill. “A person may not manufacture, sell, or distribute a product or product component obtained from the reduction of an Atlantic menhaden.” It was exactly the kind of forward-thinking bill Maryland needed, and it would have sent a message to the other Atlantic states that menhaden were important enough to fight for. It had been her first real step toward making policy, but now she felt crushed by the legislature’s complete lack of will to preserve one of Maryland’s most significant natural resources. It seemed to her the delegates had acted without any attempt to understand the magnitude of the problem or the benefits of the proposed solution.

All Julie wanted to do was head back down to Cobb Island, stand on the dock, and feel the evening breeze on her face. Instead, she had to drive into the humid chaos of Washington, D.C., to spend two days sightseeing with her sister and her nephews. All weekend long, as her family traipsed from the Lincoln Memorial to the National Gallery to Ford’s Theater, she thought about what had gone wrong at the hearing. Had she and Delegate Murphy aimed too high with their bill? Did the committee members understand the complexity of the ecosystem that menhaden sustained? Even when the facts and figures are clear, sometimes a good story is too compelling. What politician could choose an oily fish over a sick child?

Barely a year after the Environmental Matters Committee hearing in Annapolis, the luster of fish oil pills began to fade. In 2010, environmental advocates Benson Chiles and Chris Manthey had tested for toxic contaminants in fish oil supplements from a variety of manufacturers and found polychlorinated biphenyls, or PCBs, in many of the pills. PCBs, a group of compounds once widely used in coolant fluids and industrial lubricants, were banned in the 1970s because they impaired human liver function and caused skin ailments and liver and bile duct cancers. PCBs don't easily break down in the environment; they remain in waterways like those that empty into the Chesapeake Bay, where they get absorbed by the algae and plankton eaten by fish like menhaden.

35. WAYNE LEVIN, Pattern of Akule, 2002.

36. WAYNE LEVIN, Akule Tornado, 2000.

The test results led Chiles and Manthey to file a lawsuit, under California’s Proposition 65, that named supplement manufacturers Omega Protein and Solgar as well as retailers like CVS and GNC for failing to provide adequate warnings to consumers that the fish oil pills they were swallowing with their morning coffee contained unsafe levels of PCBs. In February 2012, Chiles and Manthey reached a settlement with some manufacturers and the trade association that represents them, called the Global Organization for EPA and DHA Omega-3s (GOED), which agreed on higher safety standards for contaminants in fish oil pills.

Meanwhile, in July 2012, The New England Journal of Medicine published a study that assessed whether fish oil pills could help prevent cardiovascular disease in people with diabetes. The 6,281 diabetics in the study who took the pills had the same number of heart attacks and strokes as those in the placebo group, and nearly as many died. Were all those fish-scented burps for naught? A Forbes story asked: “Fish oil or snake oil?”

In September 2012, The Journal of the American Medical Association published even worse news. A team of Greek researchers had analyzed every previous study of omega-3 supplements and cardiovascular disease, and found that omega-3 supplementation did not save lives, prevent heart attacks, or prevent strokes. GOED, the fish oil trade association, was predictably displeased. Its executive director told a supplement industry trade journal, “Given the flawed design of this meta-analysis…, GOED disputes the findings and urges consumers to continue taking omega-3 products.” But the scientific evidence was mounting: not only were fish oil pills full of dangerous chemicals, but they probably weren’t doing much to prevent heart disease, either.

Why did these pills look so promising in 2001 and so unimpressive by 2012? The American Heart Association had always favored dietary sources of omega-3s, like fish and nuts, over pills. Jackie Bosch, a scientist at McMaster University and an author of The New England Journal of Medicine study, speculated that because people with diabetes and heart disease now take so many other medicines—statins, diuretics, ACE inhibitors, and handfuls of other pills—the effect of fish oil may be too marginal to show any measurable benefit.

Julie wasn’t surprised when she heard about the lawsuit. She knew menhaden could soak up chemical contaminants in the waterways. She read news reports about the recent studies on fish oil pills with interest and wondered whether they would give her and Delegate Murphy any ammunition for future efforts to limit the sale of menhaden products in their state. Neither had forgotten about the lowly menhaden.

Delegate Murphy had developed the habit of searching the dietary supplements aisle each time he went to the drugstore, turning the heavy bottles of fish oil capsules in his hands and reading the ingredients. None of the bottles ever listed menhaden. Despite the settlement in the California lawsuit, fish oil manufacturers were not required—and are still not required—to label the types of fish included in supplements, making it difficult for consumers to know whether the pills contained menhaden oil. But Delegate Murphy had made it clear he wasn't ready to take up the menhaden issue again without a reasonable chance of success. Julie didn't press him on his decision.

Then in December 2012, increasing public pressure about the decline of menhaden finally led to a change. The Atlantic States Marine Fisheries Commission voted to reduce the harvest of menhaden by 20 percent from previous levels, a regulation that would go into effect during the 2013 fishing season. It was the first time any restriction had been placed on the menhaden industry’s operations in the Atlantic, although the cut was far less severe than independent scientists had recommended. To safeguard the menhaden’s ability to spawn without undue pressure from the industry’s pumps and nets, scientists had advised reducing the harvest by 50 to 75 percent of current catch levels. Delegate Murphy and Julie knew 20 percent wasn’t nearly enough to bring the menhaden stocks back up to support the health of the Bay. But it was a start. They liked to think their bill had moved the conversation forward a little bit.

That Christmas, down on Cobb Island, Julie was putting stamps on envelopes for her family’s annual holiday recipe exchange. She addressed one to her brother Jerry in Arkansas. He didn’t usually come back east for the holidays, preferring to fly home in the summer when his sons could fish for croakers off the dock that ran out into the Wicomico River behind Julie’s house. Jerry worked for Tyson Foods, selling chicken to restaurant chains. Julie had asked him once if Tyson fed their chickens with menhaden meal, and Jerry had admitted he wasn’t sure. Whatever the factory-farmed chickens ate, Julie wasn’t taking any chances. After the hearing on the menhaden bill, she became a vegetarian. For Christmas, she was sending her family recipes for eggless egg salad and an easy bean soup.

When she finished sealing the last envelope, Julie pulled on a turtleneck sweater and grabbed her winter coat for the short walk up to the post office. The sky was a pale, dull gray, and it smelled of snow. She had recently read Omega Protein’s latest report to its investors, and as she trudged slowly toward Cobb Island Road, a word from the text popped into her mind. Company executives had repeatedly made the point that Omega Protein was “diversifying.” They had purchased a California-based dietary supplement supplier that sourced pills that didn’t use fish products. They had begun talking about proteins that could be extracted from dairy and turned into nutritional capsules. Could it be that Omega Protein had begun to see the writing on the wall? Maybe they were starting to realize that the menhaden supply was not unlimited—and that advocates like Julie wouldn’t let them take every last one.

As she passed the Cobb Island pier, a few seagulls were circling mesh crab traps that had been abandoned on the dock—traps that brimmed with blue crabs in the summertime. Julie pulled her coat closer around her against the chill. She thought ahead to the summer months, when the traps would be baited with menhaden and checked every few hours by local families, and the ice cream parlor would open to serve the seasonal tourists. By the end of summer, Omega Protein would be winding down its fishing season, and the company would likely have 20 percent less fish in its industrial-sized cookers than it did the year before. Would that be enough to help the striped bass and the osprey and the humpback whales? Julie wondered. And the thousands of fishermen whose livelihoods depended upon pulling healthy fish from the Chesapeake Bay? And the families up and down the coast who brought those fish home to eat?

Julie had done a lot of waiting in her time. She had waited her whole life to find a job like the one she had with Delegate Murphy. She had waited for the delegate to get excited about the menhaden. When their bill failed, she had waited for the ASMFC to pass regulations protecting the menhaden. Now she would have to wait a little longer to find out whether the ASMFC’s first effort at limiting the fishery would enable the menhaden population to recover. But there are two kinds of waiting, Julie thought. There’s the kind where you have no agency, and then there’s the kind where you are at the edge of your seat, ready to act at a moment’s notice. Julie felt she could act. And so could Ray, Delegate Murphy, Bruce Franklin, and the sport fishermen, who now cared even more about the oily little menhaden. For now, at least until the end of the fishing season, that had to be enough. They would just have to wait and see.

David Schleifer (david.schleifer@gmail.com) is a senior research associate at Public Agenda, a nonpartisan, nonprofit research and engagement organization. Alison Fairbrother (alison@publictrustproject.org) is the executive director of the Public Trust Project.

Editor’s Journal: Telling Stories

KEVIN FINNERAN

“The universe is made of stories, not of atoms,” Muriel Rukeyser wrote in her poem “The Speed of Darkness.” Good stories are not merely a collection of individual events; they are a means of expressing ideas in concrete terms at human scale. They have the ability to accomplish the apparently simple but rarely achieved task of seamlessly linking the general with the specific, of giving ideas flesh and blood.

This edition of Issues includes three articles that use narrative structure to address important science and technology policy topics. They are the product of a program at Arizona State University that was directed by writer and teacher Lee Gutkind and funded by the National Science Foundation. Launched in 2010, the Think, Write, Publish program began by assembling two dozen young writers and scientists/engineers to work in teams to prepare articles that use a narrative approach to engage readers in an S&T topic. Lee organized a training program that included several workshops and opportunities to meet with editors from major magazines and book publishers. Several of the writer/expert teams prepared articles that were published in Issues: Mary Lee Sethi and Adam Briggle on the federal Bioethics Commission, Jennifer Liu and Deborah Gardner on the global dimension of medicine and ethics, Gwen Ottinger and Rachel Zurer on environmental monitoring, and Ross Carper and Sonja Schmid on small modular nuclear reactors.

Encouraged by the enthusiasm for the initial experiment, the program's organizers decided to do it again. A second cohort, again composed of 12 scholars and 12 writers, was selected in 2013. They participated in two week-long workshops. At the first meeting, teams were formed, guest editors and writers offered advice, Lee and his team provided training, and the teams began their work. Six months later the teams returned for a second week-long workshop during which they worked intensively on revising and refining the drafts they had prepared. They also received advice from some of the participants from the first cohort.

They learned that policy debates do not lend themselves easily to narrative treatments, that collaborative writing is difficult, that professional writers and scholars approach the task of writing very differently and have sometimes conflicting criteria for good writing. But they persisted, and now we are proud to present three of the articles that emerged from the effort. Additional articles written by the teams can be found at http://thinkwritepublish.org/.

These young authors are trailblazers in the quest to find a way to make the public more informed and more engaged participants in science, technology, and health policy debates. They recognize narrative as a way to ground and humanize discussions that are too often conducted in abstract and erudite terms. We know that the outcomes of these debates are anything but abstract, and that it is essential that people from all corners and levels of society participate. Effective stories that inform and engage readers can be a valuable means of expanding participation in science policy development. If you want to see how, you can begin reading on the next page.

From the Hill

Details of administration’s proposed FY2015 budget

Officially released March 4, President Obama’s FY2015 budget makes clear the challenges for R&D support currently posed by the Budget Control Act spending caps. With hardly any additional room available in the discretionary budget above FY 2014 levels, and with three-quarters of the post-sequester spending reductions still in place overall, many agency R&D budgets remain essentially constant. Some R&D areas such as climate research and support for fundamental science that have been featured in past budgets did not make much fiscal headway in this year’s request. Nevertheless, the administration has managed to shift some additional funding to select programs such as renewable energy and energy efficiency, advanced manufacturing, and technology for infrastructure and transportation.

An added twist, however, is the inclusion of $5.3 billion in additional R&D spending above and beyond the current discretionary caps that is part of what the administration calls the Opportunity, Growth, and Security Initiative (OGSI). This extra funding would make a significant difference for science and innovation funding throughout government. Congress, however, has shown little interest in embracing it.

Without the OGSI, the president's proposed FY2015 budget includes a small reduction in R&D funding in constant dollars. Current AAAS estimates place R&D in the president's request at $136.5 billion (see Table 1). This represents a 0.7% increase above FY 2014 levels but is actually a slight decrease when the 1.7% inflation rate is considered. It also represents a 3.8% increase above FY 2013 post-sequester funding levels, but with inflation the total R&D budget is almost unchanged from FY 2013.
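
To make the nominal-versus-real arithmetic above concrete, here is a minimal sketch in Python. The dollar figure and percentages are the AAAS estimates quoted in the text; the implied FY 2014 base is derived from them rather than taken from the budget documents.

    # Nominal vs. inflation-adjusted change in the FY 2015 R&D request.
    fy2015_request = 136.5   # billions of dollars, president's FY 2015 R&D request
    nominal_growth = 0.007   # 0.7% increase over FY 2014 in current dollars
    inflation = 0.017        # projected 1.7% GDP inflation, FY 2014 to FY 2015

    fy2014_level = fy2015_request / (1 + nominal_growth)      # implied base, about $135.6 billion
    real_change = (1 + nominal_growth) / (1 + inflation) - 1  # about -1.0%

    print(f"Implied FY 2014 R&D level: ${fy2014_level:.1f} billion")
    print(f"Real change, FY 2014 to FY 2015: {real_change:.1%}")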

Defense R&D, which includes both the Department of Defense (DOD) and the National Nuclear Security Administration (NNSA), is proposed at $70.8 billion, or 0.3% above FY 2014 levels; boosts in NNSA R&D offset cuts in DOD R&D programs. Nondefense R&D is proposed at $65.7 billion, a 1.2% increase above FY 2014 levels.

Total research funding, which includes basic and applied research, would fall to $65.9 billion, a cut of $1.1 billion or 1.7% below FY 2014 levels, and only about 1.1% above FY 2013 post-sequester levels after inflation. This is in large part due to cuts in defense and National Aeronautics and Space Administration (NASA) research activities, though some NASA research has also been reclassified as development, which pushes the number lower without necessarily reflecting a change in the actual work.

Conversely, development activities would increase by $2.1 billion or 3.2%, due to increases in these activities at DOD, NASA, and the Department of Energy (DOE).

The $56-billion OGSI would include $5.3 billion for R&D, which would mean a 4.6% increase in total R&D from FY 2014.

R&D spending should be understood in the larger context of the federal budget. The discretionary share of the budget (everything except mandatory programs such as Medicare, Medicaid, and Social Security, and interest on the debt) has shrunk to 30.4% and is projected to reach 24.6% in 2019. R&D outlays as a share of the budget would drop to 3.4%, a 50-year low.

Under the president's proposal, only a few agency R&D budgets, including those at DOE, the U.S. Geological Survey (USGS), the National Institute of Standards and Technology (NIST), and the Department of Transportation (DOT), stay ahead of inflation, but many remain above sequester levels, and the total R&D request grows more than the average for discretionary spending.

“From the Hill” is adapted from the newsletter Science and Technology in Congress, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

TABLE 1

R&D in the FY 2015 budget by agency (budget authority in millions of dollars)

Source: OMB R&D data, agency budget justifications, and agency budget documents. Does not include Opportunity, Growth, and Security Initiative funding (see Table II-20). Note: The projected GDP inflation rate between FY 2014 and FY 2015 is 1.7 percent. All figures are rounded to the nearest million. Changes calculated from unrounded figures.

At DOE, the energy efficiency, renewable energy, and grid technology programs are marked for significant increases, as is the Advanced Research Projects Agency-Energy (ARPA-E); the Office of Science is essentially the same; and nuclear and fossil energy technology programs are reduced.

The proposed budget includes an increase of more than 20% for NASA’s Space Technology Directorate, which seeks rapid public-private technology development. Cuts are proposed in development funding for the next-generation crew vehicle and launch system.

Department of Agriculture extramural research would receive a large increase even as the agency’s intramural research funding is trimmed, though significantly more funding for both is contained within the OGSI.

The DOD science & technology budget, which includes basic and applied research, advanced technology development, and medical research funded through the Defense Health Program, would be cut by $1.4 billion or 10.3% below FY 2014 levels. A 57.8% cut in medical research is proposed, but Congress is likely to restore much of this funding, as it has in the past. The Defense Advanced Research Projects Agency is slated for a small increase.

The National Institutes of Health (NIH) would continue on a downward course. The president's request would leave the NIH budget about $4.1 billion in constant dollars, or 12.5%, below the FY 2004 peak. Some of the few areas seeing increased funding at NIH would include translational science, neuroscience and the BRAIN Initiative, and mental health. The additional OGSI funding would nearly, but not quite, return the NIH budget to pre-sequestration levels.

The apparently large cut in Department of Homeland Security (DHS) R&D funding is primarily explained by the reduction in funding for construction of the National Bio and Agro-Defense Facility, a Biosafety Level 4 facility in Kansas. Other DHS R&D activities would be cut a little, and the Domestic Nuclear Detection Office would receive a funding increase.

One bright note in this constrained fiscal environment is that R&D spending fared better than average in the discretionary budget. Looking ahead, however, there is much more cause for concern. Unless Congress takes action, the overall discretionary budget will return to sequester levels in FY2016 and remain there for the rest of the decade.

In brief

  • On April 24, the National Science Board issued a statement articulating concerns over some portions of the Frontiers in Innovation, Research, Science, and Technology Act (FIRST Act; H.R. 4186), which would reauthorize funding for NSF, among other things. The board expressed its “greatest” concern that “the bill's specification of budget allocations to each NSF Directorate would significantly impede NSF's flexibility to deploy its funds to support the best ideas in fulfillment of its mission.”
  • On April 28, the House passed the Digital Accountability and Transparency Act (S. 994; also known as the DATA Act), sending the bill to President Obama for signature. The bill seeks to improve the “availability, accuracy, and usefulness” of federal spending information by setting standards for reporting government spending on contracts, grants, etc. The legislation would also require that the Office of Management and Budget develop a two-year pilot program to evaluate reporting by recipients of federal grants and contracts and to reduce duplicative reporting requirements.
  • On April 22, the U.S. Supreme Court ruled to uphold the state of Michigan’s ban on using race as a factor in admissions for higher education institutions. In a 6-2 ruling, the Court determined that it is not in violation of the U.S. Constitution for states to prohibit public colleges and universities from using forms of racial preferences in admissions. In his opinion, Justice Anthony M. Kennedy stated: “This case is not about how the debate about racial preferences should be resolved. It is about who may resolve it. There is no authority in the Constitution of the United States or in this court’s precedents for the judiciary to set aside Michigan laws that commit this policy determination to the voters.”
  • On March 27, Senate Judiciary Committee Chairman Patrick Leahy (D-VT) and Senator John Cornyn (R-TX) introduced legislation on forensic science. The Criminal Justice and Forensic Science Reform Act (S. 2177) “promotes national accreditation and certification standards and stronger oversight for forensic labs and practitioners, as well as the development of best practices and a national forensic science research strategy.” The bill would create an Office of Forensic Science within the Office of the Deputy Attorney General at the Department of Justice and would also require that the office coordinate with NIST. It would require that forensic science personnel who work in laboratories that receive federal funding be certified in their fields and that all forensic science labs that receive federal funding be accredited according to standards set by a Forensic Science Board.

How Hurricane Sandy Tamed the Bureaucracy

ADAM PARRIS

A practical story of making science useful for society, with lessons destined to grow in importance.

Remember Hurricane Irene? It pushed across New England in August 2011, leaving a trail of at least 45 deaths and $7 billion in damages. But just over a year later, even before the last rural bridge had been rebuilt, Hurricane Sandy plowed into the New Jersey–New York coast, grabbing the national spotlight with its even greater toll of death and destruction. And once again, the region—and the nation—swung into rebuild mode.

Certainly, some rebuilding after such storms will always be necessary. However, this one-two punch underscored a pervasive and corrosive aspect of our society: We have rarely taken the time to reflect on how best to rebuild developed areas before the next crisis occurs, instead committing to a disaster-by-disaster approach to rebuilding.

Yet Sandy seems to have been enough of a shock to stimulate some creative thinking at both the federal and regional levels about how to break the cycle of response and recovery that developed communities have adopted as their default survival strategy. I have witnessed this firsthand as part of a team that designed a decision tool called the Sea Level Rise Tool for Sandy Recovery, to support not just recovery from Sandy but preparedness for future events. The story that has emerged from this experience may contain some useful lessons about how science and research can best support important social decisions about our built environment. Such lessons are likely to grow in importance as predicted climate change makes extreme weather events all but inevitable.

A story of cooperation

In the wake of Sandy, pressure mounted at all levels, from local to federal, to address one question: How would we rebuild? This question obviously has many dimensions, but one policy context cuts across them all. The National Flood Insurance Program provides information on flood risk that developers, property owners, and city and state governments are required to use in determining how to build and rebuild. Run by the Federal Emergency Management Agency (FEMA), the program provides information on the height of floodwaters, known as flood elevations, that can be used to delineate on a map where it is more or less risky to build. Flood elevations are calculated based on analysis of how water moves over land during storms of varying intensity, essentially comparing the expected elevation of the water surface to that of dry land. FEMA then uses this information to create flood insurance rate maps, and insurers use the maps to determine the cost of insurance in flood-prone areas. The cost of insurance and the risk of flooding are major factors for individuals and communities in determining how high to build structures and where to locate them to avoid serious damage during floods.

But here’s the challenge that our team faced after Sandy. The flood insurance program provided information on flood risk based only on conditions in past events, and not on conditions that may occur tomorrow. Yet coastlines are dynamic. Beaches, wetlands, and barrier islands all change in response to waves and tides. These natural features shift, even as the seawalls and levees that society builds to keep communities safe are designed to stay in place. In fact, seawalls and levees add to the complexity of the coastal environment and lead to new and different changes in coastal features. The U.S. Army Corps of Engineers implements major capital works, including flood protection and beach nourishment, to manage these dynamic features. The National Oceanic and Atmospheric Administration (NOAA) helps communities manage the coastal zone to preserve the amenities we have come to value on the coast: commerce, transportation, recreation, and healthy ecosystems, among others. And both agencies have long been doing research on another major factor of change for coastlines around the world: sea-level rise.

Any amount of sea-level rise, even an inch or two, increases the elevation of floodwaters for a given storm. Estimates of future sea-level rise are therefore a critical area of research. As Sandy approached, experts from NOAA and the Army Corps, other federal agencies, and several universities were completing a report synthesizing the state of the science on historic and future sea-level rise. The report, produced as part of a periodic updating of the National Climate Assessment, identified scenarios (plausible estimates) of global sea-level rise by the end of this century. Coupled with the best available flood elevations, the sea-level rise scenarios could help those responsible for planning and developing in coastal communities factor future risks into their decisions. This scenario-planning approach underscores a very practical element of risk management: If there’s a strong possibility of additional risk in the future, factor that into decisions today.

Few people would argue with taking steps to avoid future risk. But making this happen is not as easy as it sounds. FEMA has to gradually incorporate future flood risk information into the regulatory program even as the agency modernizes existing flood elevations and maps. The program dates back to 1968, and much of the information on flood elevations is well over 10 years old. We now have newer information on past events, more precise measurements on the elevation of land surfaces, and better understanding of how to model and map the behavior of floodwaters. We also have new technologies for providing the information via the Internet in a more visually compelling and user-specific manner. Flood elevations and flood insurance rate maps have to be updated for thousands of communities across the nation. When events like Sandy happen, FEMA issues “advisory” flood elevations to provide updated and improved information to the affected areas even if the regulatory maps are not finalized. However, neither the updated maps nor the advisory elevations have traditionally incorporated sea-level rise.

Only in 2012 did Congress pass legislation—the Biggert-Waters Flood Insurance Reform Act—authorizing FEMA to factor sea-level rise into flood elevations provided by the flood insurance program, so the agency has had little opportunity to accomplish this for most of the nation. Right now, people could be rebuilding structures with substantially more near-term risk of coastal flooding because they are using flood elevations that do not account for sea-level rise.

Of course, reacting to any additional flood risk resulting from higher sea levels might entail the immediate costs of building higher, stronger, or in a different location altogether. But such short-term costs are counterbalanced by the long-term benefits of health and safety and a smaller investment in maintenance, repair, and rebuilding in the wake of a disaster. So how does the federal government provide legitimate science—science that decisionmakers regard as reliable and credible—regarding future flood risk to affected communities? And how might it create incentives, financial and otherwise, for factoring in additional risks that may mean up-front costs in return for major long-term gains?

After Sandy, leaders of government locally and nationally were quick to recognize these challenges. President Barack Obama established a Hurricane Sandy Rebuilding Task Force. Governor Andrew Cuomo of New York established several expert committees to help develop statewide plans for recovery and rebuilding. Governor Chris Christie of New Jersey was quick to encourage higher minimum standards for rebuilding by adding 1 foot to FEMA's advisory flood elevations. And New York City Mayor Michael Bloomberg created the Special Initiative on Risk and Resilience, connected directly to the city's long-term planning efforts and to an expert panel on climate change, to build the scientific foundation for local recovery strategies.


The leadership and composition of the groups established by the president and the mayor were particularly notable and distinct from conventional efforts. They brought expertise and an emphasis that focused as strongly on preparedness for a future likely to look different from the present as on responding to the disaster itself. For example, the president's choice of Shaun Donovan, secretary of the Department of Housing and Urban Development (HUD), to chair the federal task force implicitly signaled a new focus on ensuring that urban systems will be resilient in the face of future risks.

New York City's efforts have been exemplary in this regard. The organizational details are complex, but there is one especially crucial part of the story that I want to tell. When Mayor Bloomberg created the initiative on risk and resilience, he also reconvened the New York City Panel on Climate Change (known locally as the NPCC), which had first been convened in 2008 to support the formulation of a long-term comprehensive development and sustainability plan, called PlaNYC. All of these efforts, which were connected directly to the Mayor's Office of Long-term Planning and Sustainability, were meant to be forward-looking and to integrate contributions from experts in planning, science, management, and response.

Tying the response to Sandy to the city’s varied efforts signaled a new approach to post-disaster development that embraced long-term resilience: the capacity to be prepared for an uncertain future. In particular, the NPCC’s role was to ensure that the evolving vulnerabilities presented by climate change would play an integral part in thinking about New York in the post-Sandy era. To this end, in September 2012, the City Council of New York codified the operations of the NPCC into the city’s charter, calling for periodic updates of the climate science information. Of course, science-based groups such as the climate panel should be valuable for communities and decisionmakers thinking about resilience and preparedness, but often they are ignored. Thus, another essential aspect of New York’s approach was that the climate panel was not just a bunch of experts speaking from a pulpit of scientific authority, but it also had members representing local and state government working as full partners.

Within NOAA, there are programs designed to improve decisions on how to build resilience into society, given the complex and uncertain interactions of a changing society and a changing environment. These programs routinely encourage engagement among different scales and sectors of government and resource management. For example, NOAA's Regional Integrated Sciences and Assessments (RISA) program provides funding for experts to participate in New York's climate panel to develop risk information that informs both the response to Sandy and the conceptual framework for adaptively managing long-term risk within PlaNYC. Through its Coastal Services Center, NOAA also provides scientific tools and planning support for coastal communities facing real-time challenges. When Sandy occurred, the center offered staff support to FEMA's field offices, the local hubs for emergency management and disaster relief. Such collaboration among the RISA experts, the center staff, and the FEMA field offices fostered the working relationships that allowed for coordination in developing the Sea Level Rise Tool for Sandy Recovery.

In still other efforts, representatives of the president's Hurricane Sandy Rebuilding Task Force and the Council on Environmental Quality were working with state and local leaders, including staff from New York City's risk and resilience initiative. The leaders of the New York initiative were working with representatives of NOAA's RISA program, as well as with experts on the NPCC who had participated in producing the latest sea-level rise scenarios for the National Climate Assessment. The Army Corps participated in the president's Task Force and also contributed to the sea-level rise scenarios report. This complex organizational ecology also helped create a social network among professionals in science, policy, and management charged with building a tool that could bring the best available science on sea-level rise and coastal flooding to bear on the region's recovery.


Before moving on to the sea-level rise tool itself, I want to point out important dimensions of this social network and the context that facilitated such complex organizational coordination. Sandy presented a problem that motivated people in various communities of practice to work with each other. We all knew each other, wanted to help recovery efforts, and understood the limitations of the flood insurance program. In the absence of events such as Sandy, it is difficult to find such motivating factors; everyone is busy with his or her day-to-day responsibilities. Disaster drew people out of their daily routines with a common and urgent purpose. Moreover, programs such as RISA have been doing research not just to provide information on current and future risks associated with climate, but also to understand and improve the processes by which scientific research can generate knowledge that is both useful and actually used. Research on integrated problems and management across institutions and sectors is undervalued; how best to organize and manage such research is poorly understood in the federal government. Those working on this problem themselves constitute a growing community of practice.

Communities need to be able to develop long-term planning initiatives, such as New York’s PlaNYC, that are supported by bodies such as the city’s climate change panel. In order to do so, they have to establish networks of experts with whom they can develop, discuss, and jointly produce knowledge that draws on relevant and usable scientific information. But not all communities have the resources of New York City or the political capacity to embrace climate hazards. If the federal government wishes to support other communities in better preparing people for future disasters, it will have to support the appropriate organizational arrangements—especially those that can bridge boundaries between science, planning, and management.

Rising to the challenges

For more than two decades, the scientific evidence has been strong enough to enable estimates of sea-level rise to be factored into planning and management decisions. For example, NOAA maintains water-level stations (often referred to as tide gages) that document sea-level change, and over the past 30 years, 88% of the 128 stations in operation have recorded a rise in sea level. Based on such information, the National Research Council published a report in 1987 estimating that sea level would rise between 0.5 and 1.5 meters by 2100. More recent estimates suggest it could be even higher.

Of course, many coastal communities have long been acutely aware of the gradual encroachment of the sea on beaches and estuaries, and the ways in which hurricanes and tropical storms can remake the coastal landscape. So, why is it so hard to decide on a scientific basis for incorporating future flood risk into coastal management and development?

For one thing, sea-level rise is different from coastal flooding, and the science pertaining to each is evolving somewhat independently. Researchers worldwide are analyzing the different processes that contribute to sea-level rise. They are thinking about, among other things, how the oceans will expand as they absorb heat from the atmosphere; about how quickly ice sheets will melt and disintegrate in response to increasing global temperature, thereby adding volume to the oceans; and about regional and local processes that cause changes in the elevation of the land surface independent of changes in ocean volume. Scientists are experimenting, and they cannot always experiment together. They have to isolate questions about the different components of the Earth system to be able to test different assumptions, and it is not an easy task to put the information back together again. This task of synthesizing knowledge from various disciplines and even within closely related disciplines requires interdisciplinary assessments.

The sea-level rise scenarios that our team used in designing the Sandy tool derived from the National Climate Assessment, which is prepared for Congress every four years to synthesize and summarize the state of the climate and its impacts on society, and they varied greatly. The scenarios were based on expert judgments from the scientific literature by a diverse team drawn from the fields of climate science, oceanography, geology, engineering, political science, and coastal management, and representing six federal agencies, four universities, and one local resource management organization. The scenarios report provided a definitive range of 8 inches to 6.6 feet by the end of the century. (One main reason for such different projections is the current inadequate understanding of the rate at which the ice sheets in Greenland and Antarctica are melting and disintegrating in response to increasing air temperature.) The scenarios were aimed at two audiences: regional and local experts who are charged with addressing variations in sea-level change at specific locations, and national policymakers who are considering potential impacts beyond any individual community, city, or even state.

But didn't the experts' decision to present such a broad range of sea-level rise estimates simply add to policymakers' uncertainty about the future? The authors addressed this possible concern by associating risk tolerance—the amount of risk one would be willing to accept for a particular decision—with each scenario. For example, they said that anyone choosing to use the lowest scenario is accepting a lot of risk, because there is a wealth of evidence and agreement among experts that sea-level rise will exceed this estimate by the end of the century unless (and possibly even if) aggressive global emissions reduction measures are taken immediately. On the other hand, they said that anyone choosing to use the highest scenario is using great caution, because there is currently less evidence to support sea-level rise of this magnitude by the end of the century (although it may rise to such levels in the more distant future).

Thus, urban planners may want to consider higher scenarios of sea-level rise, even if they are less likely, because this approach will enable them to analyze and prepare for risks in an uncertain future. High sea-level rise scenarios may even provide additional factors of safety, particularly where the consequences of coastal flood events threaten human health, human safety, or critical infrastructure—or perhaps all three. The most likely answer might not always be the best answer for minimizing, preparing for, or avoiding risk. Framing the scenarios in this fashion helps avoid any misperception that risk is being exaggerated. More importantly, it supports deliberation in planning and policymaking about the basis for setting standards and for designing new projects in the coastal zone. The emphasis shifts to choices about how much or how little risk to accept.

In contrast to the scenarios developed for the National Climate Assessment, the estimates made by the New York City climate panel addressed regional and local variations in sea-level rise and were customized to support design and rebuilding decisions in the city that respond to risks over the next 25 to 45 years. They were developed after Sandy by integrating scientific findings published just the previous year—after the national scenarios report was released. The estimates were created using a combination of 24 state-of-the-art global climate models, observed local data, and expert judgment. Each climate model can be thought of as an experiment that includes different assumptions about global-scale processes in the Earth system (such as changes in the atmosphere). As with the national scenarios report, then, the collection of models provides a range of estimates of sea-level rise that together convey a sense of the uncertainties. The New York City climate panel held numerous meetings throughout the spring of 2013 to discuss the model projections and to frame its own statements about the implications of the results for future risks to the city arising from sea-level rise (e.g., changes in the frequency of coastal flooding due to sea-level rise). These meetings were attended not only by physical and social scientists but also by decisionmakers facing choices at all stages of the Sandy rebuilding process, from planning to design to engineering and construction.
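
The panel's actual method combined two dozen climate models with local observations and expert judgment, which goes well beyond a few lines of code. As a minimal sketch of the simpler underlying idea, though, an ensemble of model projections can be summarized as a low-middle-high range; the values below are hypothetical placeholders, not NPCC or National Climate Assessment output.

    import statistics

    # Hypothetical 2050s sea-level rise projections for one location, in inches,
    # one value per model run (placeholder values for illustration only).
    model_projections = [9, 11, 12, 14, 15, 16, 17, 18, 19, 21, 22, 24, 25, 27, 30]

    deciles = statistics.quantiles(model_projections, n=10)  # 10th through 90th percentiles
    low, high = deciles[0], deciles[-1]
    middle = statistics.median(model_projections)

    print(f"Low estimate (about the 10th percentile): {low:.0f} inches")
    print(f"Middle estimate (median): {middle:.0f} inches")
    print(f"High estimate (about the 90th percentile): {high:.0f} inches")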

As our team developed the sea-level rise tool, we found minimal difference between the models used by the New York climate panel and the nationally produced scenarios. At most, the extreme national scenarios and the high-end New York projections were separated by 3 inches, and the intermediate scenarios and the mean model values were separated by 2 inches. This discrepancy is well within the limits of accuracy reflected in current knowledge of future sea-level rise. But small discrepancies can make a big difference in planning and policymaking.

New York State regulators evaluating projects proposed by organizations that manage critical infrastructure, such as power plants and wastewater treatment facilities, look to science vetted by the federal government as a basis for approving new or rebuilt infrastructure. Might the discrepancies between the scenarios produced for the National Climate Assessment and the projections made by the NPCC, however small, cause regulators to question the scientific and engineering basis for including future sea-level rise in their project evaluations? Concerned about this prospect, the New York City Mayor’s Office wanted the tool to use only the projections of its own climate panel.

The complications didn’t stop there. In April 2013, HUD Secretary Donovan announced a Federal Flood Risk Reduction Standard, developed by the Hurricane Sandy Rebuilding Task Force, for federal agencies to use in their rebuilding and recovery efforts in the regions affected by Sandy. The standard added 1 foot to the advisory flood elevations provided by the flood insurance program. Up to that point, our development team had been working in fairly confidential settings, but now we had to consider additional questions. Would the tool be used to address regulatory requirements of the flood insurance program? Why use the tool instead of the advisory elevations or the Federal Flood Risk Reduction Standard? How should decisionmakers deal with any differences between the 1-foot advisory elevation and the information conveyed by the tool? We spent the next two months addressing these questions and potential confusion over different sets of information about current and future flood risk.

Our team—drawn from NOAA, the Army Corps, FEMA, and the U.S. Global Change Research Program—released the tool in June 2013. It provides both interactive maps depicting flood-prone areas and calculators for estimating future flood elevations, all under different scenarios of sea-level rise. Between the time of Secretary Donovan’s announcement and the release of the tool, the team worked extensively with representatives from FEMA field offices, the New York City climate panel, the New York City Mayor’s Office, and the New York and New Jersey governors’ offices to ensure that the choices about the underlying scientific information were well understood and clearly communicated. The social connections were again critical in convening the right people from the various levels of government and the scientific and practitioner communities.
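
The article does not describe the tool's internals; the sketch below only illustrates the kind of calculation its calculators expose, in which a future flood elevation is a current (advisory or regulatory) flood elevation plus a chosen sea-level rise scenario, optionally plus extra freeboard such as the task force's added foot. The function name and the example values are hypothetical, not drawn from the tool.

    def future_flood_elevation(base_flood_elevation_ft: float,
                               sea_level_rise_ft: float,
                               freeboard_ft: float = 0.0) -> float:
        """Estimated future flood elevation, in feet above the vertical datum."""
        return base_flood_elevation_ft + sea_level_rise_ft + freeboard_ft

    # Example: a 10-foot advisory flood elevation, a 2.5-foot mid-century
    # sea-level rise scenario, and 1 foot of added freeboard.
    print(future_flood_elevation(10.0, 2.5, 1.0))  # 13.5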

During this period, the team made key changes in how the tool presented information. For example, the Hurricane Sandy Rebuilding Task Force approved the integration of sea-level rise estimates from the New York climate panel into the tool, providing a federal seal of approval that could give state regulators confidence in the science. This decision also helped address the minimal discrepancies between the long-term scenarios of sea-level rise made for the National Climate Assessment and the shorter-term estimates made by the New York climate panel. The President’s Office of Science and Technology Policy also approved expanding access to the tool via a page on the Global Change Research Program’s Web site [http://www.globalchange.gov/what-we-do/assessment/coastal-resilience-resources]. This access point helped distinguish the tool as an interagency product separate from the National Flood Insurance Program, thus making clear that its use was advisory, not mandated by regulation. Supporting materials on the Web site (including frequently asked questions, metadata, planning context, and disclaimers, among others) provided background detail for various user communities and also helped to make clear that the New York climate panel sea-level rise estimates were developed through a legitimate and transparent scientific process.

The process of making the tool useful for decisionmakers involved diverse players in the Sandy recovery story discussing different ideas about how people and organizations were considering risk in their rebuilding decisions. For example, our development team briefed a diverse set of decisionmakers in the New York and New Jersey governments to facilitate deliberations about current and future risk. Our decision to use the New York City climate panel estimates in the tool helped to change the recovery and rebuilding process from past- to future-oriented, not only because the science was of good quality but because integration of the panel’s numbers into the tool brought federal, state, and city experts and decisionmakers together, while alleviating the concerns of state regulators about small discrepancies between different sea-level rise estimates.

In 2013, New York City testified in a rate case (the process by which public utilities set rates for consumers) and called for Con Edison (the city's electric utility) and the Public Service Commission to ensure that near-term investments are made to fortify utility infrastructure assets. Con Edison has planned for $1 billion in resiliency investments that address future risk posed by climate change. As part of this effort, the utility has adopted a design criterion that uses FEMA's flood insurance rate maps, which are based on 100-year flood elevations, plus 3 feet to account for a high-end estimate of sea-level rise by mid-century. This marked the first time in the country that a rate case explicitly incorporated consideration of climate change.

New York City also passed 16 local laws in 2013 to improve building codes in the floodplain, to protect against future risk of flooding, high winds, and prolonged power outages. For example, Local Law 96/2013 adopted FEMA’s updated flood insurance rate maps with additional safety standards for some single-family homes, based on sea-level rise as projected by the NPCC.

Our development team would never have known about New York City's need to develop a rate case with federally vetted information on future risk if we had not worked with officials from the city's planning department. Engaging city and state government officials was useful not just for improving the clarity and purpose of the information in the tool, but also for choosing what information to include so that the tool could support a comprehensive and implementable strategy.

Different scales of government—local, state, and federal—have to be able to lead processes for bringing appropriate knowledge and standards into planning, design, and engineering. At the same time, all scales of government need to validate the standards revealed by these processes, because they all play a role in implementation.

Building resilience capacity

This complex story has a particularly important yet unfamiliar lesson: Planning departments are key partners in helping break the cycle of recovery and response, and in helping people adopt lessons learned from science into practice. Planners at different levels of government convene different communities of practice and disciplinary expertise around shared challenges. Likewise, scientific organizations that cross the boundaries between these different communities—such as the New York City climate panel and the team that developed the sea-level rise tool—can also encourage those interactions. As I’ve tried to illustrate, planning departments convene scientists and decisionmakers alike to work across organizational boundaries that under normal circumstances help to define their identities. These are important ingredients for preparing for future natural disasters and increasing our resilience to them over the long term, and yet this type of science capacity is barely supported by the federal government. How might the lessons from the Sandy Sea Level Rise Recovery Tool and Hurricane Sandy be more broadly adopted to help the nation move away from disaster-by-disaster policy and planning? Here are two ideas to consider in the context of coastal resilience.

First, re-envision the development of resilient flood standards as planning processes, not just numbers or codes.

Planning is a comprehensive and iterative function in government and community development. Planners are connected to or leading the development of everything from capital public works projects to regional plans for ecosystem restoration. City waterfronts, wildlife refuges and restored areas, and transportation networks all draw the attention of planning departments.

In their efforts, planners seek to keep development goals rooted in public values, and they are trained, formally and informally, in the process of civic engagement, in which citizens have a voice in shaping the development of their community. Development choices include how much risk to accept and whether or how the federal government regulates those choices. For this reason, planners maintain practical connections to existing regulations and laws and to the management of existing resources. Their position in the process of community development and resource management requires planners to also be trained in applying the results of research (such as sea-level rise scenarios) to design and engineering. Over the past decade, many city, state, and local governments have either explicitly created sustainability planner positions at high levels (such as mayors’ or governors’ offices) or reframed their planning departments to emphasize sustainability, as in the case of New York City. The planners in these positions are incredibly important for building resilience into urban environments, not because they see the future, but because they provide a nucleus for convening the diverse constituencies from which visions of, and pathways to, the future are imagined and implemented.

If society is to be more resilient, planners must be critical actors in government. We cannot expect policymakers and the public to simply trust or comprehend or even find useful what we learn from science. We have to reconcile what we learn from science with the practical realities we face in an increasingly populated and stressed environment. And yet, despite their critical role in achieving resilience, many local planning departments across the country have been eliminated during the economic downturn.

Second, configure part of our research and service networks to be flexible in response to emergent risk.

The federal government likes to build new programs, sometimes at the expense of working through existing ones, because new initiatives can be political instruments for demonstrating responsiveness to public needs. But recovery from disasters and preparation to better respond to future disasters can be supported through existing networks. Across the country, FEMA has regional offices that work with emergency managers, and NOAA supports over 50 Sea Grant colleges that engage communities in science-based discussions on issues related to coastal management. Digital Coast, a partnership between NOAA and six national, regional, and state planning and management organizations, provides timely information on coastal hazards and communities. These organizations work together to develop knowledge and solutions for planners and managers in coastal zones, in part by funding university-based science-and-assessment teams. The interdisciplinary expertise and localized focus of such teams help scientists situate climate and weather information in the context of ongoing risks such as sea-level rise and coastal flooding. All of these efforts contributed directly and indirectly to the Sea Level Rise Tool before, during, and after Hurricane Sandy.

The foundational efforts of these programs exemplify how science networks can leverage their relationships and expertise to get timely and credible scientific information into the hands of people who can benefit from it. Rather than creating new networks or programs, the nation could support efforts explicitly designed to connect and leverage existing networks for risk response and preparation. The story I’ve told here illustrates how existing relationships within and between vibrant communities of practice are an important part of the process of productively bringing science and decisionmaking together. New programs are much less effective in capitalizing on those relationships.

One way to support capacities that already exist would be to anticipate the need to distribute relief funds to existing networks. This idea could be loosely based on the Rapid Response Research Grants administered by the National Science Foundation, with a couple of important variations from its usual focus on supporting basic research. Agencies could come together to identify a range of planning processes supported by experts who work across communities of practice to ensure a direct connection to preparedness for future natural disasters of the same kind. These priority-setting exercises might build on the interagency discussions that occur as part of the federal Global Change Research Program. Also, since any such effort would require engagement between decisionmakers and scientists, recipients of this funding would be asked to report on the nature of additional, future engagement. What further engagement is required? Who are the critical actors, and are they adequately supported to play a role in resilience efforts? How are those networks increasing resilience over time? Gathering information about questions such as these is critical for the federal government to make science policy decisions that support a sustainable society.

Working toward a collective vision

The shift from reaction and response to preparedness seems like common sense, but as this story illustrates, it is complicated to achieve. One reaction to this story might be to replicate the technology in the sea-level rise tool or to apply the same or similar information sets elsewhere. The federal government has already begun such efforts, and this approach will supply people with better information.

Yet across the country, there are probably hundreds of similar decision tools developed by universities, nongovernmental organizations, and businesses that depict coastal flooding resulting from sea-level rise. The key difference in the development of the Sandy recovery tool was the intensive and protracted social process of discussing what information went into it and how it could be used. By connecting those discussions to existing planning processes, we reached different scales of government with different responsibilities and authority for reaching the overarching goal of developing more sustainable urban and coastal communities.

This story suggests that the role of science in helping society to better manage persistent environmental problems such as sea-level rise is not going to emerge from research programs isolated from the complex social and institutional settings of decisionmaking. Science policies aimed at achieving a more sustainable future must increasingly emphasize the complex and time-consuming social aspects of bringing scientific advance and decisionmaking into closer alignment.

Adam Parris is program manager of Regional Integrated Sciences and Assessments at the National Oceanic and Atmospheric Administration.

Breaking the Climate Deadlock

DAVID GARMAN

KERRY EMANUEL

BRUCE PHILLIPS

Developing a broad and effective portfolio of technology options could provide the common ground on which conservatives and liberals agree.

The public debate over climate policy has become increasingly polarized, with both sides embracing fairly inflexible public positions. At first glance, there appears little hope of common ground, much less bipartisan accord. But policy toward climate change need not be polarizing. Here we offer a policy framework that could appeal to U.S. conservatives and progressives alike. Of particular importance to conservatives, we believe, is the idea embodied in our framework of preserving and expanding, rather than narrowing, societal and economic options in light of an uncertain future.

This article reviews the state of climate science and carbon-free technologies and outlines a practical response to climate deadlock. Although it may be difficult to envision the climate issue becoming depoliticized to the point where political leaders can find common ground, even the harshest positions at the polar extremes of the current debate need not preclude the possibility.

We believe that a close look at what is known about climate science and the economic competitiveness of low-carbon/carbon-free technologies—which include renewable energy, advanced energy efficiency technologies, nuclear energy, and carbon capture and sequestration systems (CCS) for fossil fuels—may provide a framework that could even be embraced by climate skeptics willing to invest in technology innovation as a hedge against bad climate outcomes and on behalf of future economic vitality.

Most atmospheric scientists agree that humans are contributing to climate change. Yet it is important to also recognize that there is significant uncertainty regarding the pace, severity, and consequences of the climate change attributable to human activities; plausible impacts range from the relatively benign to globally catastrophic. There is also tremendous uncertainty regarding short-term and regional impacts, because the available climate models lack the accuracy and resolution to account for the complexities of the climate system.

Although this uncertainty complicates policymaking, many other important policy decisions are made in conditions of uncertainty, such as those involving national defense, preparation for natural disasters, or threats to public health. We may lack a perfect understanding of the plans and capabilities of a future adversary or the severity and location of the next flood or the causes of a new disease epidemic, but we nevertheless invest public resources to develop constructive, prudent policies and manage the risks surrounding each.

Reducing atmospheric concentrations of greenhouse gases (GHGs) would require widespread deployment of carbon-free energy technologies and changes in land-use practices. Under extreme circumstances, addressing climate risks could also require the deployment of climate remediation technologies such as atmospheric carbon removal and solar radiation management. Unfortunately, leading carbon-free electric technologies are currently about 30 to 290% more expensive on an unsubsidized basis than conventional fossil fuel alternatives, and technologies that could remove carbon from the atmosphere or mitigate climate impacts are mostly unproven and some may have dangerous consequences. At the same time, the pace of technological change in the energy sector is slow; any significant decarbonization will unfold over the course of decades. These are fundamental hurdles.

It is also reasonably clear, particularly after taking into account the political concerns about economic costs, that widespread deployment of carbon-free technologies will not take place until diverse technologies are fully demonstrated at commercial scale and the cost premium has been reduced to a point where the public views the short-term political and economic costs as being reasonably in balance with plausible longer-term benefits.

Given these twin assessments, we propose a practical approach to move beyond climate deadlock. The large cost premium and unproven status of many technologies point to a need to focus on innovation, cost reduction, and successfully demonstrating multiple strategically important technologies at full commercial scale. At the same time, the uncertainty of long-term climate projections, together with the 1000+ year lifetime of CO2 in the atmosphere, argues for a measured and flexible response, but one that can be ramped up quickly.

This can be done by broadening and intensifying efforts to develop, fully demonstrate, and reduce the cost of a variety of carbon-free energy and climate remediation technologies, including carbon capture and sequestration and advanced nuclear, renewable, and energy efficiency technologies. In addition, atmospheric carbon removal and solar radiation management technologies should be carefully researched.

Conservatives have typically been strong supporters of fundamental government research, as well as technology development and demonstration in areas that the private sector does not support, such as national security and health. Also, even the most avowed climate skeptic will often concede that there are risks of inaction, and that it is prudent for national and global leaders to hedge against those risks, just as a prudent corporate board of directors will hedge against future risks to corporate profitability and solvency. Moreover, increasing concern about climate change abroad suggests potentially large foreign markets for innovative energy technologies, thus adding an economic competitiveness rationale for investment that does not depend on one’s assessment of climate risk.

Some renewed attention is being devoted to innovation, but funding is limited and the scope of technologies is overly constrained. Our suggested policy approach, in contrast, would involve a three- to fivefold increase in R&D and demonstration spending in both the public and private sectors, including possible new approaches that involve more than simply providing the funding through traditional channels such as the Department of Energy (DOE) and the national labs.

Investing in the development of technology options is a measured, flexible approach that could also shorten the time needed to decarbonize the economy. It would give future policymakers more opportunities to deploy proven, lower-cost technologies, without the commitment to deploy them if they turn out to be unnecessary, ineffective, or uneconomic. And with greater emphasis on innovation, it would allow technologies to be deployed more quickly, broadly, and cost-effectively, which would be particularly important if impacts are expected to be rapid and severe.

In addition to research, development, and demonstration (RD&D), new policy options to support technology deployment should be explored. Current deployment programs, principally those working through the tax code, have not, at least to date, successfully commercialized technologies in a widespread and cost-effective manner or provided strong incentives for continued innovation. New approaches are necessary.

Climate knowledge

Although new research constantly adds to the state of scientific knowledge, the basic science of climate change and the role of human-generated emissions have been reasonably well understood for at least several decades. Today, most climate scientists agree that human-caused warming is underway. Some of the major areas of agreement include the following:

  • GHGs, which include water vapor, carbon dioxide (CO2), and other gases, trap heat in the atmosphere and warm the earth by allowing solar radiation to pass largely unimpeded to the surface of the earth and re-radiating a portion of the thermal radiation received from the earth back toward the surface. This is the “greenhouse effect.”
  • Paleoclimatology, which is the study of past climate conditions based on the geologic record, shows that changing levels of GHGs in the atmosphere have been associated with climatic change as far back as the geological record extends.
  • The concentration of CO2 in the atmosphere has increased from about 280 parts per million (ppm) in preindustrial times to about 400 ppm today, an increase of 43%. Ice core records suggest that the current level is higher than at any time over at least the past 650,000 years, whereas analysis of marine sediments suggests that CO2 levels have not been this high in at least 2.1 million years.
  • Human-made (anthropogenic) CO2 emissions, primarily resulting from the consumption of fossil fuels, are probably responsible for much of the warming observed in recent decades. Climate scientists attempting to replicate climate patterns over the past 30 years have not been able to do so without accounting for anthropogenic GHGs and sulfate aerosols.
  • CO2 emissions are also contributing to increases in surface ocean acidity, which degrades ocean habitats, including important commercial fisheries.
  • Given the current rate of global emissions, atmospheric concentrations of CO2 could reach twice the preindustrial level within the next 50 years, concentration levels our planet has not experienced in literally millions of years.
  • The global climate system has tremendous inertia. Due to the persistence of CO2 in the atmosphere and the oceans, many of the effects of climate change will not diminish naturally for hundreds of years if not longer.

About these basic points there is little debate, even from those who believe that the risks are not likely to be severe. At the same time, it is also true that long-term climate projections are subject to considerable uncertainty and legitimate scientific debate. The fundamental complexity of the climate system, in particular the feedback effects of clouds and water vapor, is the most important contributor to uncertainty. Consequently, long-term projections reflect considerable uncertainty in how rapidly, and to what extent, temperatures will increase over time. It is possible that the climate will be relatively slow to warm and that the effects of warming may be relatively mild for some time. But there is also a worrisome likelihood that the climate will warm too quickly for society to adapt and prosper—with severe or perhaps even catastrophic consequences.

Unfortunately, we should not expect the range of climate projections to narrow in a meaningful way soon; policymakers may hope for the best but must prepare for the worst.

Technology readiness

Under the best of circumstances, the risks associated with climate uncertainties could be managed, at least in part, with a mix of today’s carbon-free energy and climate remediation technologies. Carbon-free energy generation, as used in this paper, includes renewable, nuclear, and carbon capture and sequestration systems for fossil fuels such as coal and natural gas. Climate remediation technologies (often grouped together under the term “geoengineering”) include methods for removing greenhouse gases from the atmosphere (such as air capture), as well as processes that might mitigate some of the worst effects of climate change (such as solar radiation management). We note that energy efficiency or the pursuit of greater energy productivity is prudent even in the absence of climate risk, so it is particularly important in the face of it. Although this discussion focuses on electric generation, any effective decarbonization policy will also need to address emissions from the transportation sector; the residential, commercial, and industrial sectors; and land use. Similar frameworks, focused on expanding sensible options and hedging against a worst-case future, could be developed for each.

To be effective, carbon-free and climate remediation technologies and processes need to be economically viable, fully demonstrated at scale (if they have not yet been), and be capable of global deployment in a reasonably timely manner. Moreover, they would also need to be sufficiently diverse and economical to be deployed in varied regional economies across the world, ranging from the relatively low-growth developed world to the rapidly growing developing nations, particularly those with expanding urban centers such as China and India.

The list of strategically essential climate technologies is not long, yet each of these technologies, in its current state of development, is limited in important ways. Although their status and prospects vary in different regions of the world, they are either not yet fully demonstrated, not capable of rapid widespread global deployment, or unacceptably expensive relative to conventional energy technologies. These limitations are well documented, if not widely recognized or acknowledged. The limitations of current technologies can be illustrated by quickly reviewing the status of a number of major electricity-generating technologies.

On-shore wind and some other renewable technologies such as solar photovoltaic (PV) have experienced dramatic cost reductions over the past three decades. These cost reductions, along with deployment subsidies, have clearly had an impact. Between 2009 and 2013, U.S. wind output more than doubled, and U.S. solar output increased by a factor of 10. However, because ground-level winds are typically intermittent, wind turbines cannot be relied on to generate electricity whenever there is electrical demand, and the amount of generating output cannot be directly controlled in response to moment-by-moment changes in electric demand and the availability of other generating resources. As a consequence, wind turbines do not produce electrical output of comparable economic value to the output of conventional generating resources such as natural gas–fired power plants that are, in energy industry parlance, both “firm” and “dispatchable.” Furthermore, the cost of a typical or average onshore wind project in the United States, without federal and state subsidies, although now less than that of new pulverized coal plants, is still substantially more than that of a new gas-fired combined-cycle plant, which is generally considered the lowest-cost conventional resource in most U.S. power markets. Solar PV also suffers from its intermittency and variability, and significant penetration of solar PV can test grid reliability and complicate distribution system operation, as we are now seeing in Germany. Some of these challenges can be overcome with careful planning and coordinated execution, but the scale-up potential and economics of these resources could be improved substantially by innovations in energy storage, as well as technological improvements to increase renewables’ power yield and capacity factor.

Current light-water nuclear power technology is also more expensive than conventional natural gas generation in the United States, and suffers from safety concerns, waste disposal challenges, and proliferation risks in some overseas markets. Further, given the capital intensity and large scale of today’s commercial nuclear plants (which are commonly planned as two 1,000–megawatt (MW) generating units), the total cost of a new nuclear plant exceeds the market capitalization of many U.S. electric utilities, making sole-ownership investments a “bet-the-company” financial decision for corporate management and shareholders. Yet recent improvements in costs have been demonstrated in overseas markets through standardized manufacturing processes and economies of scale; and many new innovative designs promise further cost reductions, improved safety, a smaller waste footprint, and less proliferation risk.

CCS technology is also limited. Although all major elements of the technology have been demonstrated successfully, and the process is used commercially in some industrial settings and for enhanced oil recovery (EOR), it is only now on track to being fully demonstrated at two commercial-scale electric generation facilities under construction, one in the United States and one in Canada. And deploying CCS on existing electric power plants would reduce generation efficiency and increase production costs to the point where such CCS retrofits would be uneconomic today without large government incentives or a carbon price higher than envisioned in recent policy proposals.

The cost premium of these carbon-free technologies relative to that of conventional natural gas–fired combined cycle technology in the United States is illustrated in the next chart.

As shown, the total levelized cost of new natural gas combined-cycle generation over its expected operating life is roughly $67 per megawatt-hour (MWh). In contrast, typical onshore wind projects (without federal and state subsidies and without considering the cost of backup power and other grid integration requirements) cost about $87/MWh. New gas-fired combined-cycle plants with CCS cost approximately $93/MWh and nuclear projects about $108/MWh. New coal plants with CCS, solar PV, and offshore wind projects are yet more costly. Taken together, these estimates generally point to a cost premium of $20 to $194/MWh, or 29 to 290%, for low-carbon generation.
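
The premiums cited above follow directly from these levelized cost figures. A minimal sketch of the arithmetic, using only the $/MWh values quoted in the text (the upper bound of the range, a $194/MWh premium, corresponds to the most expensive options, which are not broken out individually here):

```python
# Cost premium of carbon-free generation relative to a new natural gas
# combined-cycle plant, using the levelized cost figures quoted above
# (rounded $/MWh).

GAS_COMBINED_CYCLE = 67  # benchmark conventional resource

carbon_free = {
    "onshore wind (unsubsidized)": 87,
    "gas combined cycle with CCS": 93,
    "nuclear": 108,
}

for tech, lcoe in carbon_free.items():
    premium = lcoe - GAS_COMBINED_CYCLE
    print(f"{tech}: +${premium}/MWh ({premium / GAS_COMBINED_CYCLE:.0%})")

# Onshore wind comes out at a roughly 30% premium, gas with CCS at about
# 39%, and nuclear at about 61%; the $194/MWh upper bound is about 290%.
```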

Some may argue that this cost premium is overstated because it does not reflect the cost of the carbon externality. This would be accurate from a conceptual economic perspective, but from a commercial or customer perspective, it is understated because it doesn’t account for the substantial costs of providing backup or stored power to overcome intermittency problems. The practical effect of this cost difference remains: However the cost premium might be reduced over time (whether through carbon pricing, other forms of regulation, higher fossil fuel prices, or technological innovation), the gap today is large enough to constitute a fundamental impediment to developing effective deployment policies.

This is evidenced in the United States by the wind industry’s continued dependence on federal tax incentives, the difficulty of securing federal or state funding for proposed utility-scale CCS projects, the slow pace of developing new nuclear plants, and the recent controversies in several states over proposals to develop new offshore wind and coal gasification projects. The inability to pass federal climate legislation can also be seen as an indication of widespread concern about the cost of emissions reductions using existing technologies, the effectiveness of the legislation in the global long-term context, or both.

FIGURE 1

[Chart comparing the levelized cost of electricity of carbon-free generating technologies with that of new natural gas combined-cycle generation. Source: EIA levelized cost of electricity (LCOE) estimates, Annual Energy Outlook (AEO) 2013]

Cost considerations are even more fundamental in the developing world, where countries’ overriding economic goal is to raise their population’s standard of living. This usually requires inexpensive sources of electricity, and technologies that are only available at a large cost premium are unlikely to be rapidly or widely adopted.

Although there is little doubt that there are opportunities to reduce the cost and improve the performance of today’s technologies, technological transformation in the energy sector has historically been slow, unpredictable, and incremental because the sector widely employs long-lived, capital-intensive production and infrastructure assets tied together through complex global industries—characteristics contributing to tremendous inertia. Engineering breakthroughs are rare, and new technologies typically take many decades to reach maturity at scale, sometimes requiring the development of new business models. As described by Arnulf Grübler and Nebojsa Nakicenovic, scholars at the International Institute for Applied Systems Analysis (IIASA), the world has only made two “grand” energy transitions: one from biomass to coal between 1850 and 1920, and a second from coal to oil and gas between 1920 and today. The first transition lasted roughly 70 years; the second has now lasted approximately 90 years.

A similar theme is seen in the electric generating industry. In the 130 years or so since central generating stations and the electric lightbulb were first established, only a handful of basic electric generating technologies have become commercially widespread. By far the most common of these is the thermal power station, which uses energy from either the combustion of fossil fuels (coal, oil, and gas) or a nuclear reactor to operate a steam turbine, which in turn powers an electric generator.

The conditions that made energy system transitions slow in the past still exist today. Even without political gridlock, it could well take many decades to decarbonize the global energy sector, a period of time that would produce much higher atmospheric concentrations of CO2 and ever-greater risks to society. This points to the importance of beginning the long transition to decarbonize the economy as soon as possible.

Policy implications

Given the uncertainties in climate projection, innovation, and technology deployment, developing a broad range of technology options can be a hedge against climate risk.

Technology “options” (as the term is used here) include carbon-free technologies that are relatively costly or not fully demonstrated but that, with innovation through fundamental and applied RD&D, might become sufficiently reliable, affordable, and scalable to be widely deployed if and when policymakers determine they are needed. (They are not to be confused with other technologies, such as controls for non-CO2 GHGs like methane and niche EOR applications of fossil CCS, which have already been commercialized.)

A technology option is analogous to a financial option. The investment to create the technology is akin to the cost of buying the financial option; it gives the owner the right but not the obligation to engage in a later transaction.
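
The analogy can be made concrete with a stylized two-stage calculation: pay a known RD&D cost now, then decide later whether to “exercise” the option once the severity of climate outcomes is known. A minimal sketch with invented numbers, intended only to illustrate the right-but-not-obligation logic, not to estimate actual costs or benefits:

```python
# Stylized technology-option calculation (all numbers are invented).
# Stage 1: pay an RD&D "premium" to create a deployable technology.
# Stage 2: deploy it only in futures where deployment is worthwhile.

RDD_COST = 20  # upfront cost of developing the option (arbitrary units)

# (probability, damages avoided by deploying, cost of deploying)
futures = [
    (0.5, 0,   100),  # mild outcome: the option simply expires unused
    (0.3, 150, 100),  # moderate outcome: deployment pays off
    (0.2, 600, 100),  # severe outcome: deployment pays off strongly
]

# Holding the option, policymakers deploy only when avoided damages
# exceed deployment cost; otherwise they walk away.
expected_payoff = sum(p * max(avoided - cost, 0) for p, avoided, cost in futures)

print(expected_payoff - RDD_COST)  # expected net value of holding the option
```

In this toy example the option is exercised only in the moderate and severe futures; in the mild future the only money spent is the RD&D premium, which is the sense in which the investment creates a right but not an obligation.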

Examples of carbon-free generation options include small modular nuclear reactors (SMRs) or advanced Generation IV nuclear reactor technologies such as sodium or gas-cooled fast reactors; advanced CCS technologies for both coal and natural gas plants; underground coal gasification with CCS (UCG/CCS); and advanced renewable technologies. Developing options on such technologies (assuming innovation success) would reduce the cost premium of decarbonization, the time required to decarbonize the global economy, and the risks and costs of quickly scaling up technologies that are not yet fully proven.

In contrast to carbon-free generation, climate remediation options could directly remove carbon from the atmosphere or mitigate some of its worst effects. Examples include atmospheric carbon removal technologies (such as air capture and sequestration, regional or continental afforestation, and ocean iron fertilization) and solar radiation management technologies (such as stratospheric aerosol injection and cloud-whitening systems). Because these technologies have the potential to reduce atmospheric concentrations or global average temperatures, they could (if proven) reduce, reverse, or prevent some of the worst impacts of climate change if atmospheric concentrations rise to unacceptably high levels. The challenge with this category of technologies will be to reduce the cost and increase the scale of application while avoiding unintended environmental and ecosystem harms that would offset the benefits they create.

Again, investing now in the development of such technology options would not create an obligation to deploy them, but it would yield reliable performance and cost data for future policymakers to consider in determining how to most effectively and efficiently address the climate issue. That is the essence of an iterative risk management process. Such a portfolio approach would also position the country to benefit economically from the growing overseas markets for carbon-free generation and other low-carbon technologies. It also addresses the political and economic polarization around various energy options, with some ideologies and interests focused on renewables, others on nuclear energy, and still others on CCS. A portfolio approach not only hedges against future climate uncertainties but also offers expanded opportunities for political inclusiveness and economic benefit. Over a period of time, investments in new and expanded RD&D programs would lead to new intellectual property that could help grow investments, design, manufacturing, employment, sales, and exports to serve overseas and perhaps domestic markets.

This portfolio approach would be a significant departure from current innovation and deployment policies. Although new attention is being devoted to energy innovation, including DOE’s Advanced Research Projects Agency–Energy (ARPA-E), the scope of technologies is far too constrained. For instance, despite its importance, a fully funded program to demonstrate multiple commercial-scale post-combustion CCS systems for both coal and natural gas generating technologies has yet to be established. Similarly, efforts to develop advanced nuclear reactor designs are limited, and there is almost no government support for climate remediation technologies. Renewable energy can make a large contribution, but numerous studies have demonstrated that it will probably be much more difficult and costly to decarbonize our electricity system within the next half century without CCS and nuclear power.

Our approach, in contrast, would involve a broader mix of technologies and innovation programs including the fossil, advanced nuclear, advanced renewable, and climate remediation technologies to maximize our chances of creating proven, scalable, and economic technologies for deployment.

The specific deployment policies needed would depend in part on the choice of technologies and the status of their development, but they would probably encompass an expanded suite of programs across the RD&D-to-commercialization continuum, including fundamental and applied R&D programs, incentives, and other means to support pilot and demonstration programs, government procurement programs, and joint international technology development and transfer efforts.

The innovation processes used by the federal government also warrant assessment and possible reform. A number of important recent studies and reports have critiqued past and current policies and put forward recommendations to accelerate innovation. Of particular note are recommendations to provide greater support for demonstration projects, expand ARPA-E, create new institutions (such as a Clean Energy Deployment Administration, a Green Bank, an Energy Technology Corporation, Public-Private Partnerships, or Regional Innovation Investment Boards), and promote competition between government agencies such as DOE and the Department of Defense. All of these deserve further attention.

Of course there will never be enough money to do everything. That’s why a strategic approach is essential. The portfolio should focus on strategically important technologies with the potential to make a material difference, based on analytical criteria such as:

  • The likelihood of becoming “proven.” Many if not most of the technologies that are likely to be considered options have not yet been proven to be reliable technologies at reasonable cost. Consequently, assessing this prospect, along with a time frame for full development and deployment, would obviously be an important decision criterion. This would not preclude “long-shot” technologies; rather it would ensure that their prospects for success be weighed with other criteria.
  • Ability to reach multi-terawatt scale. Some projections of energy demand suggest that complete decarbonization of the energy system could require 30 terawatts of carbon-free power by mid-century, given current growth patterns.
  • Relevance to Asia and the developing world. Because most of the growth in the developing world will be concentrated in large dense cities, distributed energy sources or those requiring large amounts of land area may have less relevance.
  • Ability to generate firm and dispatchable power. Electrical demands vary widely over time, often fluctuating by a factor of 2 over the course of a single day. Because electricity needs to be generated in a reliable fashion in response to demand, intermittent resources could have less relevance under conditions of deep decarbonization, unless their electrical output can be converted into a firm resource through grid-scale energy storage systems.
  • Potential to reduce costs within a reasonable range of conventional technologies. The less expensive a zero-carbon energy source is and the closer it can be managed down to cost parity with conventional resources such as gas and coal, the more likely it is that it will be rapidly adopted at scale.
  • Private-sector investment. If the private sector is adequately investing in the development or demonstration of a given technology, there would be no need for duplicative government support.
  • Potential to advance U.S. competitiveness. Investments should be sensitive to areas of energy innovation where the United States is well positioned to be a global leader.

To illustrate this further, programs might include the following.

  1. A program to demonstrate multiple CCS technologies, including post-combustion coal, pre-combustion coal, and natural gas combined-cycle technologies at full commercial scale.
  2. A program to develop advanced nuclear reactor designs, including a federal RD&D program capable of addressing each of the fundamental concerns about nuclear power. Particular attention should be given to the potential for small modular reactors (SMRs) and advanced, non–light-water reactors. A key complement to such a program would be the review and, if necessary, reform of Nuclear Regulatory Commission expertise and capabilities to review and license advanced reactor designs.
  3. Augmentation of the Department of Defense’s capabilities to sponsor development, demonstration, and scale-up of advanced energy technology projects that contribute to the military’s national security mission, such as energy security for permanent bases and energy independence for forward bases in war zones.
  4. Continued expansion of international technology innovation programs and transfer to the United States of insights from overseas manufacturing processes that have resulted in large capital cost reductions. In recent years, a number of government-to-government and business–to–nongovernmental organization partnerships have been established to facilitate such technology innovation and transfer efforts.
  5. Consideration of the use of a competitive procurement model, in which government provides funding opportunities for private-sector partners to demonstrate and deploy selective technologies that lack a current market rationale to be commercialized.

Note that this is not intended to be an exhaustive list of the efforts that could be considered, but it does underscore the need to consider new models of public-private cooperation in technology development.

The technology options approach outlined in this paper, with its emphasis on research, development, demonstration, and innovation, serves a different albeit overlapping purpose from deployment programs such as technology portfolio standards, carbon-pricing policies, and feed-in tariffs. The options approach focuses primarily on developing improved and new technologies, whereas deployment programs focus primarily on commercializing proven technologies.

RD&D and deployment policies are generally recognized as being complementary; both would be needed to fully decarbonize the economy unless carbon mitigation were in some way highly valued in the marketplace. In practice, at least to date, technology deployment programs have not successfully commercialized carbon-free technologies in a widespread, cost-effective manner, or offered incentives to continue to innovate and improve the technology. New approaches, including the use of market-based pricing mechanisms such as reverse auctions and other competitive procurement methods, are likely to be more flexible, economically efficient, and programmatically effective.
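
A reverse auction, one of the market-based mechanisms mentioned above, inverts the usual auction: the buyer (here, a deployment program) solicits price bids for delivering carbon-free capacity or energy and awards contracts from the lowest bid upward until a procurement target is met. A minimal sketch of that selection rule, with hypothetical bidders, prices, and capacities:

```python
# Minimal sketch of a reverse-auction selection rule: award contracts in
# ascending order of bid price until the capacity target is met.
# Bidders, prices, and capacities are hypothetical.

def run_reverse_auction(bids, target_mw):
    """bids: list of (bidder, price_per_mwh, capacity_mw); returns awards."""
    awards, procured = [], 0
    for bidder, price, capacity in sorted(bids, key=lambda b: b[1]):
        if procured >= target_mw:
            break
        awarded = min(capacity, target_mw - procured)
        awards.append((bidder, price, awarded))
        procured += awarded
    return awards

bids = [("Project A", 72, 300), ("Project B", 95, 500), ("Project C", 81, 400)]
print(run_reverse_auction(bids, target_mw=600))
# -> [('Project A', 72, 300), ('Project C', 81, 300)]
```

Because bidders must compete on price to win contracts, mechanisms of this kind reward continued cost reduction in a way that fixed tax credits generally do not.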

Yet deploying new carbon-free technologies on a widespread basis over an extended period of time will be a policy challenge until the cost premium has been reduced to a level at which the tradeoffs between short-term certain costs and long-term uncertain benefits are acceptable to the public. Until then, new deployment programs will be difficult to establish, and if they are established, they are likely to have little material impact (because efforts to constrain program costs would lead these programs to have very limited scopes) or be quickly terminated (due to high program costs), as we have seen with, for example, the U.S. Synthetic Fuels Corporation. Therefore, substantially reducing the cost premium for carbon-free energy must be a priority for both innovation and deployment programs. It is likely to be the fastest and most practical path to create a realistic opportunity to rapidly decarbonize the economy.

Although we are not proposing a specific or complete set of programs in this paper, it is fair to say that our policy approach would involve a substantial increase in energy RD&D spending—an effort that could cost between $15 billion and $25 billion per year, a three- to fivefold increase over recent energy RD&D spending levels.

This is a significant increase over historic levels but modest compared to current funding for medical research (approximately $30 billion per year) and military research (approximately $80 billion per year), in line with previous R&D initiatives over the years (such as the War on Terror, the NIH buildup in the early 2000s, and the Apollo space program), and similar to other recent energy innovation proposals.

The increase in funding would need to be paid for, requiring the redirection of existing subsidies, the funding of a clean energy trust from federal revenues accruing from expanded oil and gas production, a modest “wires charge” on electricity ratepayers, or reallocations as part of a larger tax reform effort. We are not suggesting that this would necessarily be easy, only that such investments are necessary and are not out of line with other innovation investment strategies that the nation has adopted, usually with bipartisan support. In this light, we emphasize again the political virtues of a portfolio approach that keeps technological options open and offers additional possible benefits from the potential for enhanced economic competitiveness.

In light of the uncertain but clear risk of severe climate impacts, prudence calls for undertaking some form of risk management. The minimum 50-year time period that will be required to decarbonize the global economy and the effectively irreversible nature of any climate impacts argue for undertaking that effort as soon as reasonably possible. Yet pragmatism requires us to recognize that most of the technologies needed to manage this risk are either substantially more expensive than conventional alternatives or are as yet unproven.

These uncertainties and challenges need not be confounding obstacles to action. Instead, they can be addressed in a sensible way by adopting the broad “portfolio of technology options” approach outlined in this paper; that is, by developing a diverse array of technologies to the point where they are proven (including carbon capture, advanced nuclear, advanced renewable, atmospheric carbon removal, and solar radiation management) and deploying the most successful ones if and when policymakers determine they are needed.

This approach would provide policymakers with greater flexibility to establish policies deploying proven, scalable, and economical technologies. And by placing greater emphasis on reducing the cost of scalable carbon-free technologies, it would allow these technologies to be deployed more quickly, broadly, and cost-effectively than would otherwise be possible. At the same time, it would not be a commitment to deploy them if they turn out to be unnecessary, ineffective, or uneconomical.

We believe that this pragmatic portfolio approach should appeal to thoughtful people across the political spectrum, but most notably to conservatives who have been skeptical of an “all-in” approach to climate that fails to acknowledge the uncertainties of both policymaking and climate change. It is at least worth testing whether such an approach might be able to break our current counterproductive deadlock.

David Garman, a principal and managing partner at Decker Garman Sullivan LLC, served as undersecretary in the Department of Energy in the George W. Bush administration. Kerry Emanuel is the Cecil and Ida Green Professor of atmospheric science at the Massachusetts Institute of Technology and codirector of MIT’s Lorenz Center, a climate think tank devoted to basic curiosity-driven climate research. Bruce Phillips is a director of The NorthBridge Group, an economic and strategic consulting firm.

Books

What’s My (Cell) Line?

Cloning Wildlife: Zoos, Captivity, and the Future of Endangered Animals

by Carrie Friese. New York: New York University Press, 2013, 258 pp.

Stewart Brand

What a strange and useful book this is!

It looks like much ado about not much—just three experiments conducted at zoos on cross-species cloning (in banteng, gaur, and African wild-cat). Yet the much-ado is warranted, given the rapid arrival of biotech tools and techniques that may revolutionize conservation with the prospect of precisely targeted genetic rescue for endangered and even extinct species. Carrie Friese’s research was completed before “de-extinction” was declared plausible in 2013, but her analysis applies directly.

First, a note: readers of this review should be aware of two perspectives at work. Friese writes as a sociologist, so expect occasional sentences such as, “Cloned animals are not objects here…. They are ‘figures’ in [Donna] Haraway’s sense of the word, in that they embody ‘material-semiotic nodes or knots in which diverse bodies and meanings coshape one another.’” I write as a proponent of high-tech genetic rescue, being a co-founder of Revive & Restore, a small nonprofit pushing ahead with de-extinction for woolly mammoths and passenger pigeons and with genetic assistance for potentially inbred black-footed ferrets. I’m also the author of a book on ecopragmatism, called Whole Earth Discipline, that Friese quotes approvingly.

Friese is a sharp-eyed researcher. She begins by noting with interest that “in direct contradiction to public enthusiasm surrounding endangered animal cloning, many people in zoos have been rather ambivalent about such technological developments.” Dissecting ambivalence is her joy, I think, because she detects in it revealing indicators of deep debate and the hidden processes by which professions change their mind fundamentally, driven by technological innovation.

The innovation in this case concerns the ability, new in this century, of going beyond same-species cloning (such as with Dolly the sheep) to cross-species cloning. An egg from one species, such as a domestic cow, has its nucleus removed and replaced with the nucleus and nuclear DNA of an endangered species, such as the Javan banteng, a type of wild cow found in Southeast Asia. The egg is grown in vitro to an early-stage embryo and then implanted in the uterus of a cow. When all goes well (it sometimes doesn’t), the pregnancy goes to term, and a new Javan banteng is born. In the case of the banteng, its DNA was drawn from tissue cryopreserved 25 years earlier by San Diego’s Frozen Zoo, in the hope that it could help restore genetic variability to the remaining population of bantengs assumed to be suffering from progressive inbreeding. (At Revive & Restore we are doing something similar with black-footed ferret DNA from the Frozen Zoo.)

Now comes the ambivalence. The cloned “banteng” may have the nuclear DNA of a banteng, but its mitochondrial DNA (a lesser but still critical genetic component found outside of the nucleus and passed on only maternally) comes from the egg of a cow. Does that matter? It sure does to zoos, which see their task as maintaining genetically pure species. Zoos treat cloned males, which can pass along only nuclear DNA to future generations, as valuable “bridges” of pure banteng DNA to the banteng gene pool. But cloned female bantengs, with their baggage of cow mitochondrial DNA ready to be passed to their offspring, are deemed valueless hybrids.

Friese describes this view as “genetic essentialism.” It is a byproduct of the “conservation turn” that zoos took in the 1970s. In this shift, zoos replaced their old cages with immersion displays of a variety of animals looking somewhat as if they were in the wild, and they also took on a newly assumed role as repositories of wildlife gene pools to supplant or enrich, if necessary, populations that are threatened in the wild. (The conservation turn not only saved zoos; it pushed them to new levels of popularity. In the United States, 100 million people a year now visit zoos, wildlife parks, and aquariums.)

But in the 1980s some conservation biologists began moving away from focusing just on species to an expanded concern about whole ecosystems and thus about ecological function. They became somewhat relaxed about species purity. When peregrine falcons died out along the East Coast of the United States, conservationists replaced them with hybrid falcons from elsewhere, and the birds thrived. Inbred Florida panthers were saved with an infusion of DNA from Texas cougars. Coyotes, on their travels from west to east, have been picking up wolf genes, and the wolves have been hybridizing with dogs.

As the costs of DNA sequencing keep coming down, field biologists have been discovering that hybridization is rampant in nature and indeed may be one of the principal mechanisms of evolution, which is said to be speeding up in these turbulent decades. Friese notes that “as an institution, the zoo is particularly concerned with patrolling the boundaries between nature and culture.” Defending against cloned hybridization, they think, is defending nature from culture. But if hybridization is common in nature, then what?

Soon enough, zoos will be confronting the temptation of de-extincted woolly mammoths (and passenger pigeons, great auks, and Carolina parakeets, among others). Those thrilling animals could be huge draws, deeply educational, exemplars of new possibilities for conservation. They will also be, to a varying extent, genomic hybrids—mammoths that are partly Asian elephant, passenger pigeons that are partly band-tailed pigeon, great auks that are partly razorbill, Carolina parakeets that are partly sun parakeet. Should we applaud or turn away in dismay? I think that conservation biologists will look for one primary measure of success: Can the revived animals take up their old ecological role and manage on their own in the wild? If not, they are freaks. If they succeed, welcome back.

Friese has written a valuable chronicle of the interaction of wildlife conservation, zoos, and biotech in the first decade of this century. It is a story whose developments are likely to keep surprising us for at least the rest of this century, and she loves that. Her book ends: “Humans should learn to respond well to the surprises that cloned animals create.”

Stewart Brand (sb@longnow.org) is the president of the Long Now Foundation in Sausalito, California.

Climate perceptions

Reason in a Dark Time: Why the Struggle against Climate Change Failed—and What It Means for Our Future

by Dale Jamieson. New York: Oxford University Press, 2014, 260 pp.

Elizabeth L. Malone

Did climate change cause Hurricanes Katrina and Sandy? Does a cold, snowy winter disprove climate change? As Dale Jamieson says in Reason in a Dark Time, “These are bad questions and no answer can be given that is not misleading. It is like asking whether when a baseball player gets a base hit, it is caused by his .350 batting average. One cannot say ‘yes,’ but saying ‘no’ falsely suggests that there is no relationship between his batting average and the base hit.” Analogies such as this are a major strength of this book, which both distills and extends the thoughtful analysis that Jamieson has been providing for well over two decades.

I’ve been following Jamieson’s work since the early 1990s, when a group at Pacific Northwest National Laboratory began to assess the social science literature relevant to climate change. Few scholars outside the physical sciences had addressed climate change explicitly; Jamieson, a philosopher, had. His publications on ethics, moral issues, uncertainty, and public policy laid down important arguments captured in Human Choice and Climate Change, which I co-edited with Steve Rayner in 1998. And the arguments are still current and vitally important as society contemplates the failure of all first-best solutions regarding climate change: an effective global agreement to reduce greenhouse gas emissions, vigorous national policies, adequate transfers of technology and other resources from industrialized to less-industrialized countries, and economic efficiency, among others.

In Reason in a Dark Time, Jamieson works steadfastly through the issues. He lays out the larger picture with energy and clarity. He takes us back to the beginning, with the history of scientific discoveries about the greenhouse effect and its emergence as a policy concern through the 1992 Earth Summit’s spirit of high hopefulness and the gradual unraveling of those high hopes by the time of the 2009 Copenhagen Climate Change Conference. He discusses obstacles to action, from scientific ignorance to organized denial to the limitations of our perceptions and abilities in responding to “the hardest problem.” He details two prominent but inadequate approaches to both characterizing the problem of climate change and prescribing solutions: economics and ethics. And finally, he discusses doable and appropriate responses in this “dark world” that has so far failed to agree on and implement effective actions that adequately reflect the scope of the problem.

Well, you may say, we’ve seen this book before. There are lots of books (and articles, both scholarly and mainstream) that give the history, discuss obstacles, criticize the ways the world has been trying to deal with climate change, and give recommendations. And indeed, Jamieson himself draws on his own lengthy publication record.

But you should read this book for its insights. If you are already knowledgeable about the history of climate science and international negotiations, you might skim this discussion. (It’s a good history, though.) All readers will gain from examining the useful and clear distinctions that Jamieson draws regarding climate skepticism, contrarianism, and denialism. Put simply, he sees that “healthy skepticism” questions evidence and views while not denying them; contrarianism may assert outlandish views but is skeptical of all views, including its own outlandish assertions; and denialism quite simply rejects a widely believed and well-supported claim and tries to explain away the evidence for the claim on the basis of conspiracy, deceit, or some rhetorical appeal to “junk science.” And take a look at the table and related text that depict a useful typology of eight frames of science-related issues that relate to climate change: social progress, economic development and competitiveness, morality and ethics, scientific and technical uncertainty, Pandora’s box/Frankenstein’s monster/runaway science, public accountability and governance, middle way/alternative path, and conflict and strategy.

Jamieson’s discussions of the “limits of economics” and the “frontiers of ethics” are also useful. Though they tread much-traveled ground, they take a slightly different slant, starting not with the forecast but with the reality of climate change. For instance, the discount rate (how economics values costs in the future) has been the subject of endless critiques, but typically with the goal of coming up with the “right” rate. But Jamieson points out that this is a fruitless endeavor, as social values underlie arguments for almost any discount rate. Thus, the discount rate (and other economic tools) is simply inadequate and, moreover, a mere stand-in for the real discussion about how society should plan for the future.

Similarly, his discussion of ethics points out that “commonsense morality” cannot “provide ethical guidance with some important aspects of climate-changing behavior”—so it’s not surprising that society has failed to act on climate change. The basis for action is not a matter of choosing appropriate values from some eternal ethical and moral menu, but of evolving values that will be relevant to a climate-changed world in which we make choices about how to adapt to climate change and whether to prevent further climate change—oh, and about whether or not to dabble in planet-altering geoengineering. Ethical and moral revolutions have occurred (e.g., capitalism’s elevation of selfishness), and climate ethicists are breaking new ground in connecting and moralizing about emissions-producing activities and climate change.

Although Jamieson’s explorations do not provide an antidote to the gloom of our dark time, readers will find much to think about here.

He clearly rebuts the argument, for example, that individual actions do not matter, asserting that “What we do matters because of its effects on the world, but what we do also matters because of its effects on ourselves.” Expanding on this thought, he says: “In my view we find meaning in our lives in the context of our relationships to humans, other animals, the rest of nature, and the world generally. This involves balancing such goods as self-expression, responsibility to others, joyfulness, commitment, attunement to reality and openness to new (often revelatory) experiences. What this comes to in the conduct of daily life is the priority of process over product, the journey over the destination, and the doing over what is done.” To my mind, this sounds like the good life that includes respect for nature, temperance, mindfulness, and cooperativeness.

Ultimately, Jamieson turns to politics and policy. As the terms prevention, mitigation, adaptation, and geoengineering have become fuzzy at best, he proposes a new classification of responses to climate change: adaptation (to reduce the negative effects of climate change), abatement (to reduce greenhouse gas emissions), mitigation (to reduce concentrations of greenhouse gases in the atmosphere), and solar radiation management (to alter the Earth’s energy balance). I agree with Jamieson that we need all of the first three and also that we need to be very cautious about “the category formerly known as geoengineering.”

Most of all, we need to live in the world as is, with all its diversity of motives and potential actions, not the dream world imagined at the Earth Summit held in 1992 in Rio de Janeiro. Jamieson gives us seven practical priorities for action (yes, they’ve been said before, but not often in the real-world context that he sketches). And he offers three guiding principles (my favorite is “stop arguing about what is optimal and instead focus on doing what is good,” with “good” encompassing both practical and ethical elements).

I do have some quarrels with the book, starting with the title. In its fullest form, it is unnecessarily wordy and gloomy. And as Jamieson does not talk much of “reason” in the book (nor is there even a definition of the contested term that I could find), why is it displayed so prominently?

More substantively, the gloom that Jamieson portrays is sometimes reinforced by statements that seem almost apocalyptic, such as, “While once particular human societies had the power to upset the natural processes that made their lives and cultures possible, now people have the power to alter the fundamental global conditions that permitted human life to evolve and that continue to sustain it. There is little reason to suppose that our systems of governance are up to the tasks of managing such threats.” But people have historically faced threats (war, disease, overpopulation, the Little Ice Age, among others) that likely seemed to them just as serious, so statements such as Jamieson’s invite the backlash that asserts, well, here we still are and better off, too.

Then there is the question of the intended audience, which Jamieson specifies as “my fellow citizens and…those with whom I have discussed these topics over the years.” But the literature reviews and the heavy use of citations seem to target a narrower academic audience. I would hope that people involved in policymaking and other decisionmaking would not be put off by the academic trappings, but I have my doubts.

If the book finds a wide audience, our global conversation about climate change could become more fruitful. Those who do read it will be rewarded with much to think about in the insights, analogies, and accessible discussions of productive pathways into the climate-changed future.

Elizabeth L. Malone is a staff scientist at the Joint Global Change Research Institute, a project sponsored by Pacific Northwest National Laboratory and the University of Maryland.

Final Frontier vs. Fruitful Frontier: The Case for Increasing Ocean Exploration

AMITAI ETZIONI

Possible solutions to the world’s energy, food, environmental, and other problems are far more likely to be found in nearby oceans than in distant space.

Every year, the federal budget process begins with a White House-issued budget request, which lays out spending priorities for federal programs. From this moment forward, President Obama and his successors should use this opportunity to correct a longstanding misalignment of federal research priorities: excessive spending on space exploration and neglect of ocean studies. The nation should begin transforming the National Oceanic and Atmospheric Administration (NOAA) into a greatly reconstructed, independent, and effective federal agency. In the present fiscal climate of zero-sum budgeting, the additional funding necessary for this agency should be taken from the National Aeronautics and Space Administration (NASA).

The basic reason is that deep space—NASA’s favorite turf—is a distant, hostile, and barren place, the study of which yields few major discoveries and an abundance of overhyped claims. By contrast, the oceans are nearby, and their study is a potential source of discoveries that could prove helpful for addressing a wide range of national concerns, from climate change to disease; for reducing energy, mineral, and potable water shortages; for strengthening industry, security, and defenses against natural disasters such as hurricanes and tsunamis; for increasing our knowledge about geological history; and much more. Nevertheless, the funding allocated for NASA in the Consolidated and Further Continuing Appropriations Act for FY 2013 was 3.5 times higher than that allocated for NOAA. Whatever can be said on behalf of a trip to Mars or recent aspirations to revisit the Moon, the same holds many times over for exploring the oceans; some illustrative examples follow. (I stand by my record: In The Moondoggle, published in 1964, I predicted that there was less to be gained in deep space than in near space—the sphere in which communication, navigation, weather, and reconnaissance satellites orbit—and argued for unmanned exploration vehicles and for investment on our planet instead of the Moon.)

Climate

There is wide consensus in the international scientific community that the Earth is warming; that the net effects of this warming are highly negative; and that the main cause of this warming is human actions, among which carbon dioxide emissions play a key role. Hence, curbing these CO2 emissions or mitigating their effects is a major way to avert climate change.

Space exploration advocates are quick to claim that space might solve such problems on Earth. In some ways, they are correct; NASA does make helpful contributions to climate science by way of its monitoring programs, which measure the atmospheric concentrations and emissions of greenhouse gases and a variety of other key variables on the Earth and in the atmosphere. However, there seem to be no viable solutions to climate change that involve space.

By contrast, it is already clear that the oceans offer a plethora of viable solutions to the Earth’s most pressing troubles. For example, scientists have already demonstrated that the oceans serve as a “carbon sink.” The oceans have absorbed almost one-third of anthropogenic CO2 emitted since the advent of the industrial revolution and have the potential to continue absorbing a large share of the CO2 released into the atmosphere. Researchers are exploring a variety of chemical, biological, and physical geoengineering projects to increase the ocean’s capacity to absorb carbon. Additional federal funds should be allotted to determine the feasibility and safety of these projects and then to develop and implement any that are found acceptable.

Iron fertilization or “seeding” of the oceans is perhaps the best known of these projects. Just as CO2 is used by plants during photosynthesis, CO2 dissolved in the oceans is absorbed and similarly used by autotrophic algae and other phytoplankton. The process “traps” the carbon in the phytoplankton; when the organism dies, it sinks to the sea floor, sequestering the carbon in the biogenic “ooze” that covers large swaths of the seafloor. However, many areas of the ocean high in the nutrients and sunlight necessary for phytoplankton to thrive lack a mineral vital to the phytoplankton’s survival: iron. Adding iron to the ocean has been shown to trigger phytoplankton blooms, and thus iron fertilization might increase the CO2 that phytoplankton will absorb. Studies note that the location and species of phytoplankton are poorly understood variables that affect the efficiency with which iron fertilization leads to the sequestration of CO2. In other words, the efficiency of iron fertilization could be improved with additional research. Proponents of exploring this option estimate that it could enable us to sequester CO2 at a cost of between $2 and $30/ton, far less than the cost of scrubbing CO2 directly from the air (roughly $1,000/ton) or from power plant smokestacks ($50-100/ton), according to one Stanford study.

Justine Serebrin

Growing up on the Southern California coast, Justine Serebrin spent countless hours snorkeling. From an early age she sensed that the ocean was in trouble as she noticed debris, trash, and decaying marine life consuming the shore. She credits her childhood experiences with influencing her artistic imagination and giving her a feeling of connectedness and lifelong love of the ocean.

Serebrin’s close observations of underwater landscapes inform her paintings, which are based upon what she describes as the “deep power” of the ocean. She has traveled to the beaches of Spain, Mexico, Hawaii, the Caribbean, and the western and eastern coasts of the United States. The variety of creatures, the cleanliness, the temperature, and the emotions evoked by each location greatly influence her artwork. She creates the paintings above water, but is exploring the possibility of painting underwater in the future. Her goal with this project is to promote ocean awareness and stewardship.

Serebrin is currently working on The Illuminated Water Project, which will enable her to increase the scope and impact of her work. Her paintings have been exhibited at the Masur Museum of Art, Monroe, Louisiana; the New Orleans Museum of Art, Louisiana; and the McNay Museum of Art, San Antonio, Texas. She is a member of the Surfrider Foundation and the Ocean Artists Society. She is the co-founder of The Upper Six Hundreds Artist Collective, composed of artists, designers, musicians, writers, and many others who are working together to redefine the conventions of the traditional art gallery through an integration of creative practice and community engagement. She holds a BFA from Otis College of Art and Design, Los Angeles. Visit her website at http://www.justineserebrin.com/

Alana Quinn

JUSTINE SEREBRIN, Soul of The Sea, Oil on translucent paper, 25 × 40 inches, 2013.

Despite these promising findings, there are a number of challenges that prevent us from using the oceans as a major means of combating climate change. First, ocean “sinks” have already absorbed an enormous amount of CO2. It is not known how much more the oceans can actually absorb, because ocean warming seems to be altering the absorptive capacity of the oceans in unpredictable ways. It is further largely unknown how the oceans interact with the nitrogen cycle and other relevant processes.

Second, the impact of CO2 sequestration on marine ecosystems remains underexplored. The Joint Ocean Commission Initiative, which noted in a 2013 report that absorption of CO2 is “acidifying” the oceans, recommended that “the administration and Congress should take actions to measure and assess the emerging threat of ocean acidification, better understand the complex dynamics causing and exacerbating it, work to determine its impact, and develop mechanisms to address the problem.” The Department of Energy specifically calls for greater “understanding of ocean biogeochemistry” and of the likely impact of carbon injection on ocean acidification. Since the mid-18th century, the acidity of the surface of the ocean, measured by the water’s concentration of hydrogen ions, has increased by 30% on average, with negative consequences for mollusks, other calcifying organisms, and the ecosystems they support, according to the Blue Ribbon Panel on Ocean Acidification. Different ecosystems have also been found to exhibit different levels of pH variance, with certain areas such as the California coastline experiencing higher levels of pH variability than elsewhere. The cost worldwide of mollusk-production losses alone could reach $100 billion if acidification is not countered, says Monica Contestabile, an environmental economist and editor of Nature Climate Change. Much remains to be learned about whether and how carbon sequestration methods like iron fertilization could contribute to ocean acidification; it is, however, clearly a crucial subject of study given the dangers of climate change.
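To put the reported 30% increase in more familiar terms, it can be expressed on the pH scale that the acidification literature typically uses. The short sketch below is purely illustrative and is not drawn from the cited reports; it simply converts a 30% rise in hydrogen-ion concentration into a change in pH.

    import math

    # Illustrative conversion (not taken from the cited reports): express a 30%
    # rise in hydrogen-ion concentration [H+] on the pH scale, pH = -log10([H+]).
    increase = 0.30
    delta_pH = -math.log10(1 + increase)
    print(round(delta_pH, 2))  # about -0.11, i.e. a drop of roughly 0.1 pH units

A decline of roughly 0.1 pH units since preindustrial times is the figure commonly cited for surface-ocean acidification, so the two ways of stating the change are consistent.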

Food

Ocean products, particularly fish, are a major source of food for much of the world. People now eat four times as much fish, on average, as they did in 1950. The world’s catch of wild fish reached an all-time high of 86.4 million tons in 1996; although it has since declined, the world’s wild marine catch was still 78.9 million tons in 2011. Fish and mollusks provide an “important source of protein for a billion of the poorest people on Earth, and about three billion people get 15 percent or more of their annual protein from the sea,” says Matthew Huelsenbeck, a marine scientist affiliated with the ocean conservation organization Oceana. Fish can be of enormous value to malnourished people because of their high levels of micronutrients such as vitamin A, iron, zinc, and calcium, as well as healthy fats.

However, many scientists have raised concerns about the ability of wild fish stocks to survive such exploitation. The Food and Agriculture Organization of the United Nations estimated that 28% of fish stocks were overexploited worldwide and a further 3% were depleted in 2008. Other sources estimate that 30% of global fisheries are overexploited or worse. There have been at least four severe documented fishery collapses—in which an entire region’s population of a fish species is overfished to the point of being incapable of replenishing itself, leading to the species’ virtual disappearance from the area—worldwide since 1960, a report from the International Risk Governance Council found. Moreover, many present methods of fishing cause severe environmental damage; for example, the Economist reported that bottom trawling causes up to 15,400 square miles of “dead zone” daily through hypoxia caused by stirring up phosphorus and other sediments.

There are several potential approaches to dealing with overfishing. One is aquaculture. Marine fish cultivated through aquaculture are reported to cost less than other animal proteins and do not consume limited freshwater sources. Furthermore, aquaculture was a stable source of food from 1970 to 2006; that is, it consistently expanded and was very rarely subject to unexpected shocks. From 1992 to 2006 alone, aquaculture expanded from 21.2 to 66.8 million tons of product.

JUSTINE SEREBRIN, Sanctuary, Oil and watercolor on translucent paper, 25 × 40 inches, 2013.

Although aquaculture is rapidly expanding—more than 60% from 2000 to 2008—and represented more than 40% of global fisheries production in 2006, a number of challenges require attention if aquaculture is to significantly improve worldwide supplies of food. First, scientists have yet to understand the impact of climate change on aquaculture and fishing. Ocean acidification is likely to damage entire ecosystems, and rising temperatures cause marine organisms to migrate away from their original territory or die off entirely. It is important to study the ways that these processes will likely play out and how their effects might be mitigated. Second, there are concerns that aquaculture may harm wild stocks of fish or the ecosystems in which they are raised through overcrowding, excess waste, or disease. This is particularly true where aquaculture is devoted to growing species alien to the region in which they are produced. Third, there are few industry standard operating practices (SOPs) for aquaculture; additional research is needed for developing these SOPs, including types and sources of feed for species cultivated through aquaculture. Finally, in order to produce a stable source of food, researchers must better understand how biodiversity plays a role in preventing the sudden collapse of fisheries and develop best practices for fishing, aquaculture, and reducing bycatch.

On the issue of food, NASA is atypically mum. It does not claim it will feed the world with whatever it finds or plans to grow on Mars, Jupiter, or any other place light years away. The oceans are likely to be of great help.

Energy

NASA and its supporters have long held that its work can help address the Earth’s energy crises. One NASA project calls for developing low-energy nuclear reactors (LENRs) that use the weak nuclear force to create energy, but even NASA admits that “we’re still many years away” from large-scale commercial production. Another project envisioned orbiting space-based solar power (SBSP) satellites that would transfer energy wirelessly to Earth. The idea was proposed in the 1960s by Peter Glaser and has since been revisited by NASA; from 1995 to 2000, NASA actively investigated the viability of SBSP. Today, the project is no longer actively funded by NASA, and SBSP remains commercially unviable due to the high cost of launching and maintaining satellites and the challenges of wirelessly transmitting energy to Earth.

JUSTINE SEREBRIN, Metamorphosis, Oil on translucent paper, 23.5 × 18 inches, 2013.

Marine sources of renewable energy, by contrast, rely on technologies that are already relatively advanced; they deserve additional research to make them fully commercially viable. One possible ocean renewable energy source is wave energy conversion, which uses the up-and-down motion of waves to generate electrical energy. Potentially usable global wave power is estimated at two terawatts, the equivalent of about 200 large power stations or about 10% of the entire world’s predicted energy demand for 2020, according to the World Ocean Review. In the United States alone, wave energy is estimated to be capable of supplying fully one-third of the country’s energy needs.

A modern wave energy conversion device was made in the 1970s and was known as the Salter’s Duck; it produced electricity at a whopping cost of almost $1/kWh. Since then, wave energy conversion has become vastly more commercially viable. A report from the Department of Energy in 2009 listed nine different designs in pre-commercial development or already installed as pilot projects around the world. As of 2013, as many as 180 companies are reported to be developing wave or tidal energy technologies; one device, the Anaconda, produces electricity at a cost of $0.24/kWh. The United States Department of Energy and the National Renewable Energy Laboratory jointly maintain a website that tracks the average cost/kWh of various energy sources; on average, ocean energy overall must cost about $0.23/kWh to be profitable. Some projects have been more successful; the prototype LIMPET wave energy conversion technology currently operating on the coast of Scotland produces wave energy at the price of $0.07/kWh. For comparison, the average consumer in the United States paid $0.12/kWh in 2011. Additional research could further reduce the costs.
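To see how these figures line up, the brief sketch below compares the quoted per-kWh costs against the $0.23/kWh profitability benchmark and the 2011 retail price; the numbers are taken as given above, and both the benchmark and the retail price of course vary by market and year.

    # Comparison of the per-kWh figures quoted above, taken as given here.
    costs = {
        "Salter's Duck (1970s)": 1.00,
        "Anaconda": 0.24,
        "LIMPET (Scotland)": 0.07,
    }
    BREAK_EVEN = 0.23   # average cost at which ocean energy is said to be profitable
    RETAIL_2011 = 0.12  # average U.S. consumer price per kWh in 2011

    for name, cost in costs.items():
        status = "below" if cost <= BREAK_EVEN else "above"
        print(f"{name}: ${cost:.2f}/kWh, {status} the break-even threshold, "
              f"{cost / RETAIL_2011:.1f}x the 2011 retail price")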

Other options in earlier stages of development include using turbines to capture the energy of ocean currents. The technology is similar to that used by wind energy; water moving through a stationary turbine turns the blades, generating electricity. However, because water is so much denser than air, “for the same surface area, water moving 12 miles per hour exerts the same amount of force as a constant 110 mph wind,” says the Bureau of Ocean Energy Management (BOEM), a division of the Department of the Interior. (Another estimate from a separate BOEM report holds that a 3.5 mph current “has the kinetic energy of winds in excess of [100 mph].”) BOEM further estimates that total worldwide power potential from currents is five terawatts—about a quarter of predicted global energy demand for 2020—and that “capturing just 1/1,000th of the available energy from the Gulf Stream…would supply Florida with 35% of its electrical needs.”
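The density argument can be checked with a rough back-of-the-envelope calculation. The sketch below uses textbook densities for seawater and air (values supplied here, not taken from the BOEM reports) and compares flows by the kinetic energy they carry through a given area, which scales with density times the cube of velocity.

    # Rough check of the water-versus-wind comparison, using assumed textbook
    # densities rather than figures from the BOEM reports.
    RHO_SEAWATER = 1025.0  # kg/m^3
    RHO_AIR = 1.225        # kg/m^3, at sea level

    def equivalent_wind_speed(water_speed_mph):
        """Wind speed carrying the same kinetic energy flux (0.5 * rho * v**3 per
        unit of swept area) as a water current of the given speed."""
        return water_speed_mph * (RHO_SEAWATER / RHO_AIR) ** (1 / 3)

    print(round(equivalent_wind_speed(12.0)))  # roughly 113 mph

On this energy-flux reading, a 12 mph current is roughly equivalent to a 110 mph wind, in line with the BOEM figure; a literal force comparison (which scales with velocity squared rather than cubed) would give an even larger equivalent wind speed, so either way the underlying point holds: slow-moving water carries far more energy than wind of the same speed.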

Although these technologies are promising, additional research is needed not only for further development but also to adapt them to regional differences. For instance, wave energy conversion technology is suitable only in locations where the waves are of the sort for which existing devices were designed and where they generate enough energy to make the endeavor profitable. One study shows that thermohaline circulation—ocean circulation driven by variations in temperature and salinity—varies from area to area, and climate change is likely to alter it in ways that could affect energy generators that rely on ocean currents. Additional research would help scientists understand how to adapt energy technologies for use in specific environments and how to avoid the potential environmental consequences of their use.

Renewable resources are the ocean’s most attractive energy product; they contribute far less than coal or natural gas to anthropogenic greenhouse gas emissions. However, it is worth noting that the oceans do hold vast reserves of untapped hydrocarbon fuels. Deep-sea drilling technologies remain immature; although it is possible to use oil rigs in waters of 8,000 to 9,000 feet, greater depths require the use of specially designed drilling ships that still face significant challenges. Deep-water drilling, which takes place at depths of more than 500 feet, is the next big frontier for oil and natural-gas production, projected to expand offshore oil production by 18% by 2020. One should expect the development of new technologies that would enable drilling for petroleum and natural gas at even greater depths than presently possible and under layers of salt and other barriers.

In addition to developing these technologies, entire other lines of research are needed either to mitigate the side effects of large-scale use of these technologies or to establish that those effects are small. Although it has recently become possible to drill beneath Arctic ice, the technologies are largely untested. Environmentalists fear that ocean turbines could harm fish and marine mammals, and that wave conversion technologies could disturb seafloor sediments, impede the migration of ocean animals, or prevent waves from clearing debris. Demand has pushed countries to develop technologies to drill for oil beneath ice or in the deep sea without much regard for the safety or environmental concerns associated with oil spills. At present, there is no developed method for cleaning up oil spills in the Arctic, a serious problem that requires additional research if Arctic drilling is to commence on a larger scale.

More ocean potential

When large quantities of public funds are invested in a particular research and development project, particularly when the payoff is far from assured, it is common for those responsible for the project to draw attention to the additional benefits—“spinoffs”—generated by the project as a means of adding to its allure. This is particularly true if the project can be shown to improve human health. Thus, NASA has claimed that its space exploration “benefit[ted] pharmaceutical drug development” and assisted in developing a new type of sensor “that provides real-time image recognition capabilities,” that it developed an optics technology in the 1970s that now is used to screen children for vision problems, and that a type of software developed for vibration analysis on the Space Shuttle is now used to “diagnose medical issues.” Similarly, opportunities to identify the “components of the organisms that facilitate increased virulence in space” could in theory—NASA claims—be used on Earth to “pinpoint targets for anti-microbial therapeutics.”

Ocean research, as modest as it is, has already yielded several medical “spinoffs.” The discovery of one species of Japanese black sponge, which produces a substance that successfully blocks division of tumorous cells, led researchers to develop a late-stage breast cancer drug. An expedition near the Bahamas led to the discovery of a bacterium that produces substances that are in the process of being synthesized as antibiotics and anticancer compounds. In addition to the aforementioned cancer-fighting compounds, chemicals that combat neuropathic pain, treat asthma and inflammation, and reduce skin irritation have been isolated from marine organisms. One Arctic Sea organism alone produced three antibiotics. Although none of the three ultimately proved pharmaceutically significant, current concerns that strains of bacteria are developing resistance to the “antibiotics of last resort” are a strong reason to increase funding for bioprospecting. Additionally, the blood cells of horseshoe crabs contain a chemical—which is found nowhere else in nature and so far has yet to be synthesized—that can detect bacterial contamination in pharmaceuticals and on the surfaces of surgical implants. Some research indicates that between 10 and 30 percent of horseshoe crabs that have been bled die, and that those that survive are less likely to mate. Research into how these creatures can be better protected would be well worth supporting. Up to two-thirds of all marine life remains unidentified, with 226,000 eukaryotic species already identified and more than 2,000 species discovered every year, according to Ward Appeltans, a marine biologist at the Intergovernmental Oceanographic Commission of UNESCO.

Contrast these discoveries of new species in the oceans with the frequent claims that space exploration will lead to the discovery of extraterrestrial life. For example, in 2010 NASA announced that it had made discoveries on Mars “that [would] impact the search for evidence of extraterrestrial life” but ultimately admitted that it had “no definitive detection of Martian organics.” The discovery that prompted the initial press release—that NASA had discovered a possible arsenic pathway in metabolism and that thus life was theoretically possible under conditions different from those on Earth—was then thoroughly rebutted by a panel of NASA-selected experts. The comparison with ocean science is especially stark when one considers that oceanographers have already discovered real organisms that rely on chemosynthesis—the process of making glucose from water and carbon dioxide by using the energy stored in chemical bonds of inorganic compounds—living near deep sea vents at the bottom of the oceans.

The same is true of the search for mineral resources. NASA talks about the potential for asteroid mining, but it will be far easier to find and recover minerals suspended in ocean waters or beneath the ocean floor. Indeed, resources beneath the ocean floor are already being commercially exploited, whereas there is not a near-term likelihood of commercial asteroid mining.

JUSTINE SEREBRIN, Jellyfish Love, Oil on translucent paper, 11 × 14 inches, 2013.

JUSTINE SEREBRIN, Scarab, Digital painting, 40 × 25 inches, 2013.

Another major justification cited by advocates for the pricey missions to Mars and beyond is that “we don’t know” enough about the other planets and the universe in which we live. However, the same can be said of the deep oceans. Actually, we know much more about the Moon and even about Mars than we know about the oceans. Maps of the Moon are already strikingly accurate, and even amateur hobbyists have crafted highly detailed pictures of the Moon—minus the “dark side”—as one set of documents from University College London’s archives seems to demonstrate. By 1967, maps and globes depicting the complete lunar surface were produced. By contrast, about 90% of the world’s oceans had not yet been mapped as of 2005. Furthermore, for years scientists have been fascinated by noises originating at the bottom of the ocean, known creatively as “the Bloop” and “Julia,” among others. And the world’s largest known “waterfall” can be found entirely underwater between Greenland and Iceland, where cold, dense Arctic water from the Greenland Sea drops more than 11,500 feet before reaching the seafloor of the Denmark Strait. Much remains poorly understood about these phenomena, their relevance to the surrounding ecosystem, and the ways in which climate change will affect their continued existence.

In short, there is much that humans have yet to understand about the depths of the oceans, further research into which could yield important insights about Earth’s geological history and the evolution of humans and society. Addressing these questions is more important than another Mars rover or a space observatory designed to answer highly specific questions of interest mainly to a few dedicated astrophysicists, planetary scientists, and select colleagues.

Leave the people at home

NASA has long favored human exploration, despite the fact that robots have become much more technologically advanced and that their (one-way) travel poses much lower costs and next to no risks compared to human missions. Still, the promotion of human missions continues; in December 2013, NASA announced that it would grow basil, turnips, and Arabidopsis on the Moon to “show that crop plants that ultimately will feed astronauts and moon colonists and all, are also able to grow on the moon.” However, Martin Rees, a professor of cosmology and astrophysics at Cambridge University and a former president of the Royal Society, calls human spaceflight a “waste of money,” pointing out that “the practical case [for human spaceflight] gets weaker and weaker with every advance in robotics and miniaturisation.” Another observer notes that “it is in fact a universal principle of space science—a ‘prime directive,’ as it were—that anything a human being does up there could be done by unmanned machinery for one-thousandth the cost.” The cost of sending humans to Mars is estimated at more than $150 billion. The preference for human missions persists nonetheless, primarily because NASA believes that human spaceflight is more impressive and will garner more public support and taxpayer dollars, despite the fact that most of NASA’s scientific yield to date, Rees shows, has come from the Hubble Space Telescope, the Chandra X-Ray Observatory, the Kepler space observatory, space rovers, and other missions. NASA relentlessly hypes the bravery of the astronauts and the pioneering aspirations of all humanity despite a lack of evidence that these missions engender any more than a brief high for some.

Ocean exploration faces similar temptations. There have been some calls for “aquanauts,” who would explore the ocean much as astronauts explore space, and for the prioritization of human exploration missions. However, relying largely on robots and remote-controlled submersibles seems much more economical, nearly as effective at investigating the oceans’ biodiversity, chemistry, and seafloor topography, and far safer than using human agents. In short, it is no more reasonable to send aquanauts to explore the seafloor than it is to send astronauts to explore the surface of Mars.

Several space enthusiasts are seriously talking about creating human colonies on the Moon or, eventually, on Mars. In the 1970s, for example, NASA’s Ames Research Center spent tax dollars to design several models of space colonies meant to hold 10,000 people each. Other advocates have suggested that it might be possible to “terra-form” the surface of Mars or other planets to resemble that of Earth by altering the atmospheric conditions, warming the planet, and activating a water cycle. Other space advocates envision using space elevators to ferry large numbers of people and supplies into space in the event of a catastrophic asteroid hitting the Earth. Ocean enthusiasts dream of underwater cities to deal with overpopulation and “natural or man-made disasters that render land-based human life impossible.” The Seasteading Institute, Crescent Hydropolis Resorts, and the League of New Worlds have developed pilot projects to explore the prospect of housing people and scientists under the surface of the ocean. However, these projects are prohibitively expensive and “you can never sever [the surface-water connection] completely,” says Dennis Chamberland, director of one of the groups. NOAA also invested funding in a habitat called Aquarius built in 1986 by the Navy, although it has since abandoned this project.

If anyone wants to use their private funds for such outlier projects, they surely should be free to proceed. However, for public funds, priorities must be set. Much greater emphasis must be placed on preventing global calamities rather than on developing improbable means of housing and saving a few hundred or thousand people by sending them far into space or deep beneath the waves.

Reimagining NOAA

These select illustrative examples should suffice to demonstrate the great promise of intensified ocean research, a heretofore unrealized promise. However, simply injecting additional funding into ocean science (taken from NASA if the total federal R&D budget cannot be increased) is far from enough. There must also be an agency with a mandate to envision and lead federal efforts to bolster ocean research and exploration the way that President Kennedy and NASA once led space research and “captured” the Moon.

For those who are interested in elaborate reports on the deficiencies of existing federal agencies’ attempts to coordinate this research, the Joint Ocean Commission Initiative (JOCI)—the foremost ocean policy group in the United States and the product of the Pew Oceans Commission and the United States Commission on Ocean Policy—provides excellent overviews. These studies and others reflect the tug-of-war that exists among various interest groups and social values. Environmentalists and those concerned about global climate change, the destruction of ocean ecosystems, declines in biodiversity, overfishing, and oil spills clash with commercial groups and states more interested in extracting natural resources from the oceans, in harvesting fish, and in using the oceans for tourism. (One observer noted that only 1% of the 139.5 million square miles of the ocean is conserved through formal protections, whereas billions use the oceans “as a ‘supermarket and a sewer.’”) And although these reports illuminate some of the challenges that must be surmounted if the government is to institute a broad, well-funded set of ocean research goals, none of these groups have added significant funds to ocean research, nor have they taken steps to provide a NASA-like agency to take the lead in federally supported ocean science.

NOAA is the obvious candidate, but it has been hampered by a lack of central authority and by the existence of many disparate programs, each of which has its own small group of congressional supporters with parochial interests. The result is that NOAA has many supporters of its distinct little segments but too few supporters of its broad mission. Furthermore, Congress micromanages NOAA’s budget, leaving too little flexibility for the agency to coordinate activities and act on its own priorities.

Pulling these pieces together—let alone consolidating the bewildering number of projects—would be difficult under the best of circumstances. Several administrators of NOAA have made significant strides in this regard and should be recognized for their work. However, Congress has saddled the agency with more than 100 ocean-related laws that require the agency to promote what are often narrow and competing interests. Moreover, NOAA is buried in the Department of Commerce, which itself is considered to be one of the weaker cabinet agencies. For this reason, some have suggested that it would be prudent to move NOAA into the Department of the Interior—which already includes the United States Geological Survey, the Bureau of Ocean Energy Management, the National Park Service, the U.S. Fish and Wildlife Service, and the Bureau of Safety and Environmental Enforcement—to give NOAA more of a backbone.

Moreover, NOAA is not the only federal agency that deals with the oceans. There are presently ocean-relevant programs in more than 20 federal agencies—including NASA. For instance, the ocean exploration program that investigates deep ocean currents by using satellite technology to measure minute differences in elevation on the surface of the ocean is currently controlled by NASA, and much basic ocean science research has historically been supported by the Navy, which has lost much of its interest in the subject since the end of the Cold War. (The Navy does continue to fund some ocean research, but at much lower levels than before.) Many of these programs should be consolidated into a Department of Ocean Research and Exploration that would have the authority to do what NOAA has been prevented from doing: namely, direct a well-planned and coordinated ocean research program. Although the National Ocean Council’s interagency coordinating structure is a step in the right direction, it would be much more effective to consolidate authority for managing ocean science research under a new independent agency or a reimagined and strengthened NOAA.

Setting priorities for research and exploration is always needed, but this is especially true in the present age of tight budgets. It is clear that oceans are a little-studied but very promising area for much enhanced exploration. By contrast, NASA’s projects, especially those dedicated to further exploring deep space and to manned missions and stellar colonies, can readily be cut. More than moving a few billion dollars from the faraway planets to the nearby oceans is called for, however. The United States needs an agency that can spearhead a major drive to explore the oceans—an agency that has yet to be envisioned and created.

Amitai Etzioni (etzioni@gwu.edu) is University Professor and professor of International Affairs and director of the Institute for Communitarian Policy Studies at George Washington University.

Collective Forgetting: Inside the Smithsonian’s Curatorial Crisis

ALLISON MARSH

WITH LIZZIE WADE

Federal budget cutting is undermining the value of the museums’ invaluable collections by reducing funds for maintenance, cataloging, acquisition, and access.

As the hands on my watch hit 11 o’clock, I was still fighting with the stubborn dust clinging to my chocolate-colored pants. The dust was winning. I knew it was unlikely the senator would actually show up for the tour, but I wanted to look presentable, just in case: for months, I had been writing my speech in my head. Now, maybe I would get to say some of it out loud to someone who might be in a position to help.

Standing in the doorway of the reference room of the Division of Work and Industry, beyond the reach of the general public, on the top floor of the National Museum of American History in Washington, D.C., I mentally rehearsed my pitch. As my watch clicked past 11:05, my welcoming smile started to fade. By 11:10, it was gone completely. Noticing my mounting frustration at the missed opportunity, Ailyn, a Smithsonian behind-the-scenes volunteer, looked up from her paperwork, stretched, and ran a hand through her cinnamon hair. She flashed me a sympathetic look. “I hate VIPs,” she said with her crisp Cuban accent. “They never show up on time.”

Five minutes later, I finally heard high heels clicking down the linoleum hallway. I plastered my smile back into place as the museum director’s assistant ushered our guests through the door. I tried to hide my disappointment when I realized that Martin Heinrich, the junior senator from New Mexico and the VIP I had been waiting for, would not be joining us after all. In a whirlwind of handshakes, I tried to catch names and identifying reference points. I already knew that the gentleman was an architect who had the ear of the museum’s director, but who were the two older women? I didn’t know and would never find out, but they seemed curious and eager nonetheless, so I started my tour.

In preparation for the VIPs’ visit, I’d removed some of my favorite objects from their crowded storage cases in the locked area adjacent to the more welcoming and slightly less cluttered reference room. I’d carefully arranged the artifacts on two government-issued 1960s green metal tables for inspection, but before I could don my purple nitrile gloves and make my introductions, the architect made a beeline for a set of blueprints in a soft leather binding. He smiled as he recognized the Empire State Building. Floor by floor, the names of the business tenants—some of them still familiar, but many more long forgotten—revealed themselves as he flipped through the pages. I had playfully left the book open to the forty-ninth floor, where the Allison Manufacturing Company had occupied the northeast corner of the building. Maybe it would help him remember my name.

“Do you know who did these drawings?” he asked me.

“Morris Jacks, a consulting engineer,” I offered. “He made them in 1968 as part of a tax assessment. He calculated the building’s steel to have a lifetime of sixty years.”

All three visitors quickly turned to me with worried looks. The Empire State Building was built in the 1930s. Was it going to collapse?

“Don’t worry. The steel itself is fine!” I assured them. “The Empire State Building isn’t going to fall down. Jacks was actually estimating the building’s social lifespan. He thought that after sixty years, New York City would need to replace it with something more useful.” As the visitors laughed in disbelief, I sensed an opening to start my pitch, to make the argument I’d been waiting six months to make. “Engineering drives change in America, and history can show the social implications of those technological choices. Museums have the power to—”

“Wait. What’s this?” One of the women was pointing to a patent model occupying the center of the table. She bent down to get a closer look at the three brass fans mounted in a row on a block of rich brown mahogany. They were miniature windmills, only six inches tall. Their blades were so chunky that it was hard to see the spaces where the wind would have threaded through them—a far cry from the sleek, razor-thin arms whipping around by the hundreds on today’s wind farms. But follow the wires jutting out of their backs, and you could see what made this patent model special: a battery pack. Inventor and entrepreneur Moses G. Farmer was proposing a method to store power generated by the wind. If the model were full-sized, that battery could power your house.

“It’s a way to keep the lights on even when the wind isn’t blowing,” I explained, hoping she might be in a position to report back to Senator Heinrich, who I knew served on the Senate Committee on Energy and Natural Resources. As the only engineer in the Senate, he would understand the technical difficulties of an intermittent power supply—a major hurdle in the clean energy industry. When I told the woman that Farmer was the first person to submit a patent application trying to solve the problem—in 1880—her jaw dropped.

One of the guests spotted our intern, Addie, quietly working at another desk tucked in a corner of the reference room, having been temporarily displaced by the impromptu tour. The group gathered around her cramped desk, marveling at her minute penmanship as she diligently numbered the delicate edges of mid-twentieth-century teacups arranged in neat rows on white foam board. No one had difficulty imagining well-dressed ladies sipping from them aboard a Cunard Line cruise across the Atlantic.

In our remaining time, my tour group barely let me get a word in edgewise—a few details about any object started them chattering excitedly to each other. Before I could lead the party into the back storerooms, where the real treasures were kept, the museum director himself arrived, commandeered the group, and swept them out the door with a practiced authority and a mandate to keep them on schedule. My half-hour tour of NMAH’s mechanical and civil engineering collection had been slashed to fifteen minutes. Their voices drifted as they walked down the hall. “What an amazingly intelligent staff you have,” one of the guests remarked.

The compliment was bittersweet, at best. None of us was actually on staff at NMAH. Ailyn was a volunteer who gave her time in retirement to help the Division of Work and Industry keep its records in order. Addie had only two more weeks left in her unpaid internship; she had recently graduated with an undergraduate history degree and was looking for a permanent job. I was a research fellow on a temporary leave from my faculty position in the history department at the University of South Carolina, a job to which I’d be returning in another month. That was what I had been dying to tell them: the NMAH engineering collection doesn’t have a curator, and it won’t be able to get one without help from powerful supporters, like a senator who could speak up for engineers. My guests had been so engaged with what they were seeing that I didn’t even have a chance to tell them that they were catching a rare glimpse into a collection at risk.

Since its founding in 1846, the Smithsonian Institution has served as the United States’ premier showplace of wonder. Each year, more than 30 million visitors pass through its twenty-nine outposts, delighting at Dorothy’s ruby slippers, marveling at the lunar command module, and paying their respects to the original star-spangled banner. Another 140 million people visited the Smithsonian on the Web last year alone (and not just for the National Zoo’s panda cam). Collectively, the Smithsonian preserves more than 137 million objects, specimens, and pieces of art. It’s the largest museum and research complex in the world.

But behind the brilliant display cases, the infrastructure is starting to crack. The kind of harm being done to the Smithsonian’s collections is not the quick devastation of a natural disaster, nor the malicious injury of intentional abandonment. Rather, it’s a gradual decay resulting from decades of incremental decisions by directors, curators, and collections managers, each almost imperceptibly compounding the last. Over time, shifting priorities, stretched budgets, and debates about the purpose of the museum have resulted in fewer curators and neglected collections.

In 1992, NMAH employed forty-six curators. Twenty years later, it has only twenty-one. Frustrated, overworked, and tasked with the management of objects that fall far beyond their areas of considerable expertise, the remaining curators can keep their heads above water only by ignoring the collections they don’t know much about. These collections become orphans, pushed deeper into back rooms and storage spaces. Cut off from public view and neglected by the otherwise occupied staff, these orphaned collections go into a state of suspended animation, frozen in time the day they were forgotten. With no one to advocate for them, their institutional voices fade away.

Engineering is just one of these collections. Formally established in 1931, the collection predates the 1964 founding of the National Museum of History and Technology (the precursor to today’s NMAH). But today, its objects are routinely overlooked when the remaining curators plan exhibits. If you wanted to tour the collection, you’d have to be a senator-level VIP or have friends on the inside. Even established scholars have trouble making research-related appointments. There’s simply no one available to escort them into the collections. What’s more, no one is actively adding to the collection, leaving vast swaths of the late twentieth and early twenty-first centuries—inarguably, an essential time for engineering and technology—unrepresented in the United States’ foremost museum of history.

It is difficult to trace the curatorial crisis back to a single origin. The Smithsonian’s budget is labyrinthine: federal money funds much of the institution’s infrastructure and permanent staff positions, while private donations finance new exhibitions and special projects. A central fund is spread across the museums for pan-institutional initiatives, but each individual museum also has its own fundraising goals. As with any cultural institution, federal support fluctuates according to the political whim of Congress, and charitable donations are dependent on the overall health of the nation’s economy, the interests of wealthy donors, and the public relations goals of (and tax incentives for) corporations. Museum directors have to juggle different and sometimes conflicting priorities—including long-term care of the collections, public outreach, new research, and new exhibits—each fighting for a piece of a shrinking budgetary pie. In fiscal year 2013 alone, sequestration cut $42 million, or 5 percent, of the Smithsonian’s federal funding, forcing the institution to make painful choices. Without money, the Smithsonian can’t hire people. Without people, the Smithsonian can’t do its job.

For my part, I was one of about a dozen temporary fellows brought in to NMAH last year thanks to funding from Goldman Sachs. Spread out across the museum’s many departments, we were supposed to help “broaden and diversify the museum’s perspective and extend its capabilities far beyond those of its current resources.” For six months, I would take on the work of a full-time curator and try, however briefly, to stem the tide of collective forgetting.

When I arrived at NMAH in January of 2013, I felt as if I were reuniting with old friends. I had spent significant time in the engineering collection as a graduate student while researching my dissertation on the history of factory tours. Now, many years later, I spend much of my time in the classroom, lecturing to uninterested undergrads on the history of technology or pushing graduate students to think more broadly about the purpose of museums. I was looking forward to the chance to get my hands dirty, inspecting artifacts and doing real museum work.

I had underestimated the actual level of dirt. Despite the collections manager’s best efforts, construction dust from a building improvement project had seeped through the protective barriers and coated archival boxes filled with reference materials. As I pulled items off the shelves, puffs of it wafted up to my nose. My eyes watered and my nose itched as I wiped away the new dust that had silently settled upon the old. The strata of grime made the exercise nearly archaeological. My allergist would have been horrified.

On red-letter days, I rediscovered national treasures that had spent years buried under the dust: the Empire State Building blueprints, for example, or the original plans for Grand Central Terminal. But I spent most days sifting through an onslaught of the mundane—the national rebar collection, engine indicators, or twenty years’ worth of one engineer’s daily planners. I began to better appreciate why no one had been eager to take on the task of sorting through all these shelves full of obsolete ideas. One typical spring morning, I slid open the glass doors of a storage case and pulled out sixteen patent models of boiler valves.

Trying to understand why, exactly, the museum had gone to the trouble of collecting so many boiler valves, I thumbed through Bulletin 173 of the United States National Museum, a catalog of the mechanical collections compiled back in 1939 to document the work done by the Division of Engineering’s founding curators. It appeared they were trying to document the progress in standardizing safety laws. The nineteenth century was the Age of Steam, giving rise to the locomotives and factories of the Industrial Revolution. Steam boilers generated a tremendous amount of power, but they were also notoriously treacherous. Explosions frequently killed workers, but no legal codes existed to regulate the construction and operation of pressure vessels until 1915, more than one hundred years after they came into widespread use.

Curators are custodians of the past, but they must also collect the present in anticipation of the future. They grab hold of ideas, like the increasing importance of workplace safety at the tail end of the Industrial Revolution, and attempt to illustrate them with physical objects, like the boiler valves. Curators can’t always predict which new technologies or interests will guide future research, but they can nevertheless preserve the potential of latent information and make it available to its future interpreters. Long after such artifacts have been rendered obsolete in the world outside the museum, the curators at NMAH hold them in trust for the American public.

I lined up all of the boiler valves in a row on a table in the reference room. At one point, they had told a story—a story those founding curators had hoped to share with audiences reaching far into the future—and my job now was to make sure that story could be rescued and retold by the curators and visitors whom I hoped would come after me. With minimal written information about the valves’ importance and provenance, I sat down to do what I was trained to do: read the objects.

Some looked like water faucets; others like Rube Goldberg contraptions. I picked up a bronze model that, with its cylindrical air chamber and protruding nozzles, vaguely resembled a toy water gun. The 1939 catalog indicated that this valve was patented in 1858 by a man named Frick. He had designed it to be an all-in-one machine, combining a valve with nozzles to relieve dangerous levels of pressure inside the boiler, an alarm to warn of any failure to do so, and water jets to extinguish the resulting fires. I updated the database with this information, wondering what type of inventor Frick was. I wondered if he felt the weight of his failure every time he read about another deadly boiler explosion in the newspaper. Maybe he smiled and patted himself on the back each day he didn’t. Finally, I clicked save and pushed the entry into the queue for legal review so that it could be put in the Smithsonian’s online collection database. Then I moved on to the next mute artifact waiting for me to give it a voice.

Hours later, I stepped back from my notes. I now had object-level descriptions for each one of the models. Thanks to an inventory team working under temporary contracts, they all had been recently photographed as well. With a photo and a description, the valves could be put online, their digital records available to anyone with an Internet connection. Still, online patrons won’t be able to feel the weight of these artifacts in their hands, nor will they be viscerally overwhelmed by their numbers, as I was. They will have no physical sense of the abundance of artifacts, the piles of stuff that tell the stories of countless engineers working together and on their own to solve the problems of their times.

I sighed, collected my notes, and put the valves back in storage.

How will future historians—or even just curious museum visitors—see our current technological moment represented at NMAH? If the Smithsonian's resource crunch continues down its current path, chances are they won't see it at all. Without a curator, a collection cannot grow and evolve. Future visitors to the engineering collection will learn about coal mining in the nineteenth century, but there will be no objects helping them understand hydraulic fracturing in the twenty-first. Researchers will be able to examine building plans of every station, bridge, and tunnel on every route once covered by the Pennsylvania Railroad, but they won't see an example of a cable-stayed bridge, a design that has dominated bridge construction in recent decades. They will be able to see designs for the 1885 Milwaukee municipal water supply system, but they won't see the much more recent inventions that keep our drinking water safe today.

What’s more, NMAH is fast approaching a generational cliff. Sixty-eight percent of the staff is eligible to retire today, but the younger guard feels like the Baby Boomers will never leave. Some Boomers haven’t been able to save enough to retire, but many more are simply dedicated to the job and are unwilling to see their lives’ work be mothballed. They can’t imagine leaving without passing the torch to a replacement—replacements they know the museum can’t afford to hire. But the alternative isn’t any better; eventually, these aging curators will die without anyone to take their places. Their collections will join the dozens of orphans the Smithsonian already struggles to care for.

The new director and deputy director of NMAH understand the challenges they have inherited and are working toward stemming the tide. Their 2013 strategic plan lists "revitalizing and increasing the staff" as one of the four main priorities for the museum. Doubting that any more public funds will be forthcoming in the near future, NMAH—reflecting a trend at the Smithsonian more generally—is staking its future on private fortunes, courting billionaires to endow curatorial positions tied to blockbuster future exhibits. That leads to a backward hiring process, in which new curators are hired only after the exhibits they are tasked with planning are underway. And because collections that are already curator-less, like engineering, do not have the institutional voice required to push for and plan the kind of blockbuster exhibits that attract these endowed positions, they are left out of both exhibit halls and development plans. They are orphaned twice over.

Almost daily during my six months at NMAH, I was pulled into informal conversations about the future of the museum. Over lunch or in the hallway, I talked with curators and collections managers about where we might get a grant to extend the contracts of the inventory team. Sometimes we dreamed bigger, wishing for a new floor plan for the storage areas. Imagine if we could knock down the walls and open up the rooms, put in moveable shelving, and reunite collections that had been dispersed across the museum!

On days of extraordinary frustration—when everyone just felt overworked, underpaid, and overwhelmed by the challenges the museum faces—we all secretly harbored a desire to leak insider stories to The Washington Post or the Government Accountability Office. But we almost always held back. Everyone who works for the Smithsonian loves the ideals the institution aspires to. They fundamentally believe in the founding mission: "For the increase and diffusion of knowledge." We want to be able to have frank discussions about the needs of the museum, but we worry that if we speak too loudly, we might unjustly diminish its institutional authority. Even with its faults, the Smithsonian remains a seriously amazing place. But without public advocacy—which depends on public awareness—the Smithsonian's curatorial crisis has no hope of being solved.

In March of 2013, about halfway into my Smithsonian residency, I gave a deliberately provocative presentation at the weekly colloquium where curators and visiting researchers give talks based on their current work. Although the colloquia are open to the public and occasionally attract graduate students and professors from local universities, they are mostly insider affairs. I titled my talk “Because Engineers Rule the World” and had NMAH’s social media coordinator live-tweet it to the museum’s thousands of Twitter followers. I showcased some of my favorite objects, gave a brief history of the engineering collection, and then outlined some suggestions for what its curatorial future might look like. It’s not hard to imagine two diverging possibilities. Down one dark path, the engineering collection becomes so neglected that it cannot be resurrected. No one knows how the collected technologies once worked or why future generations should study them. The objects become pieces of Steampunk art, beautiful, perhaps, but irrelevant to working engineers. A brighter future, on the other hand, might feature additional curatorial staff, perhaps funded by engineering professional societies or major companies. I had already tried to tip the scales in favor of engineering by offering NMAH’s development officer a list of Global 100 companies, highlighting objects of theirs that we had in the collection. We were preserving their history. The least they could do, it seemed, was help us pay for it.

Robert Vogel, one of the last curators of engineering, now retired, sat staunchly in the audience listening to my talk. During the Q&A, he reminisced about the glorious engineering exhibits of the past, and I realized almost all of them had been dismantled and shipped to off-site storage over the past two decades. The last remaining public homage to engineering, the Hall of Power Machinery, sits forlornly in a dimly lit, uninviting back corner of the first floor's east wing. For the intrepid visitors who wander into the space, the objects give a glimpse into the technological history of the Industrial Revolution. But the exhibit cases are so old that they no longer meet current safety standards; they cannot even be opened to change the long-burnt-out light bulbs.

The week after my colloquium talk, I participated in the Smithsonian's (somewhat) annual April Fool's Day Conference on Stuff. That year's theme was "grease"; it seemed only fitting that the engineering collection, which is filled with big greasy things, should be represented. I gave a tongue-in-cheek presentation on the history of gun control—grease guns, that is. In a seven-minute romp through the collections, I talked about jars of grease, bottles of grease, cans of grease, cups of grease, tubes of grease, and even two pots of grease that have been missing since the Congressionally mandated inventory of 1979 (they were in the vintage database, but I never could track them down). From the grease gun that came with the 1903 Winton convertible to the WWII-era M3 submachine gun, which earned the affectionate nickname "the greaser," I drew attention to objects that had not been out of storage in decades.

Taken together, these two public talks created momentum that knocked me out of my host department and propelled me into conversations with curators in art, medicine, education, programming, and other departments. My colleagues laughed at the April Fool's Day lecture but were also astonished by the objects I'd uncovered and the latent connections that were just waiting to be made across the museum. I'd made my point: they could see what was being missed without a curator, what stories weren't being told. More important, perhaps, my public acknowledgment that the engineering collections were orphaned unleashed a flood of responses from museum staff. A curator in the Division of Medicine and Science stopped me in the hall to tell me, "Ten years without a curator? That's nothing. We have collections that haven't been curated in decades."

With only a few weeks left in my fellowship, I decided I had to make a move. I drafted a two-line email, agonizing over every single word. Finally, I took a deep breath and hit send. Instant second-guessing. The Smithsonian is a deeply hierarchical institution, and I had just pole-vaulted over four levels of supervisors, directors, and undersecretaries to email Dr. Wayne Clough.

Clough, a civil engineer by training, was president of Georgia Tech before accepting his appointment as the twelfth-ever Secretary of the Smithsonian. If I was going to make a case on behalf of the engineering collection, I figured now was the time. I invited him for a tour; somewhat to my surprise, he happily accepted. We emailed back and forth about possible dates before Grace, his scheduler, stepped in with a definitive time. May 30, 3 p.m.

My gamble had paid off, so I decided to roll the dice again and emailed Senator Heinrich. Jackpot! Catherine, the Senator’s scheduler, said he would love to stop by, barring any committee calls for votes.

I had about two minutes to daydream about the perfect meeting—three engineers getting together to talk shop. I imagined a conversation about how our engineering backgrounds prepared us for our current jobs, which were not technical at all. I thought we could talk about who today’s engineering rock stars are—people, companies, and ideas that NMAH should be collecting. We could discuss STEM education initiatives and how a national engineering collection could be featured within a history museum.

Then a flood of emails rushing into my inbox knocked me back to political reality. First came a message from the Smithsonian’s Office of Government Relations, then an email from the administrative assistant for the Director of Curatorial Affairs, then another from the scheduler for NMAH’s director—all grabbing for control over my tour. My frank conversation with fellow engineers was turning into a three-ring circus of administrators. But when I complained to a curatorial friend from one of the off-the-Mall Smithsonian museums, she shot back, “Are you kidding? These government relations people are exactly the ones you need to be talking to!” After all, she explained, they don’t get to spend just fifteen minutes with a single senator—they get to highlight institution-wide issues in reports that are read by all of Congress. “Now that you’ve got their attention,” my friend said, “you need to let them know engineering is being forgotten. Then maybe they can get the gears working to hire a curator.”

A week before the scheduled tour, I opened my inbox to heartbreak. “Dr. Clough will now be out of town next week,” Grace wrote. “Could we reschedule for June 6?”

For me, June 6 was too late. By then, my fellowship would be over. I’d have resumed my regular life as a professor, teaching a six-week summer course on comparative public history in England. In fact, the whole storeroom would be devoid of people. The contract for the collections inventory team had not been renewed, so no one would be able to finish imaging the objects and creating database entries. The engineering collections would return to hibernation.

But I wasn’t willing to let it go quite yet. I managed to arrange a meeting for late July, when I would be returning to D.C. for just a few days. I was determined to make one last plea for the collections.

When my plane landed at Dulles late on the Saturday night before the rescheduled tour, I turned on my phone and checked voicemail for the first time in six weeks. I found numerous messages from my Smithsonian supervisor, left in increasing levels of panic. “You’d better come in a day early to set things up,” he sighed when I finally got a hold of him the next day.

So that Wednesday, still a bit jet-lagged, I boarded the Metro and made the familiar trip to NMAH. I had been gone for less than two months, but my I.D. card had already expired, so my supervisor had to meet me in the lobby and escort me up the elevator to the staff-only floors. “Better watch out,” he warned me. “You’ve upset quite a number of people.”

The NMAH’s top curators had already decided my tour would not take place in engineering’s shabby storeroom. Rather, our VIPs would be the first to see a newly renovated, state-of-the-art showroom down the hall. Intended to house the agriculture and mining technologies collections (and completed before the funds could be poached to keep the museum open during sequestration), the new room was icy—not because they had managed appropriate climate control (a constant challenge for all museums), but because the collections manager was downright hostile. My shenanigans had meant that she had to vacuum away every speck of dust and generally eliminate any sign that this was an active workplace.

Outside in the corridor, I overheard two staff members talking about me: “Who does she think she is? Emailing the Secretary! Inviting a senator!” The privileges I had enjoyed as a pesky but eager Smithsonian insider were long gone. Now, my former colleagues saw me as merely a graceless interloper, ignoring rules I didn’t like, calling attention to workers who preferred to keep their heads down (and off the chopping block), and creating extra work for them in the process.

Before the tour, the Director of Curatorial Affairs gave me strict orders: under no circumstances was I allowed to speak for the future of NMAH's collections. I wasn't allowed to make any suggestions about what the museum should be collecting or what its direction should be. I wasn't allowed to say anything that might prompt a question for which the secretary didn't have an answer. If the senator asked me what he could do to help, I was not allowed to say. If asked directly, I should defer to someone with authority to speak to those issues.

Humbled but still nervously excited, I reported to the staff entrance the next day to meet my guests. John Gray, the director of NMAH, was already there, along with the personal assistant to the museum’s director of curatorial affairs and the Smithsonian Institution’s legislative affairs liaison. I was pleased to see that Gray was beaming. “So you’re the one who made this happen?” he asked, shaking my hand. I started to apologize about overstepping, but he cut me off. “Nonsense. This is the kind of thing we need to do more often. And why aren’t we showing them our old storerooms? They need to see our challenges.”

Secretary Clough arrived right on time with his suit jacket folded over his arm, a concession to D.C.'s brutal summer heat. All of the men quickly slipped out of their own jackets in a show of deference to his authority. After quick introductions, we settled into a light-hearted banter. Twenty minutes later, Senator Heinrich arrived with his assistant, apologetic for his tardiness but still enthusiastic. "Have you ever been to the National Museum of American History?" I asked. "No? Well, you're in for a special treat."

Sure enough, when we walked into the gleaming showroom, the senator was like a kid in a candy store. Objects from the Panama Canal, a piece of the levee wall that collapsed during Hurricane Katrina, microcomputers—it was an engineering treasure trove. Heinrich immediately started examining Farmer’s windmill-battery patent model, trying to see how it worked.

Exactly six minutes into the tour, the senator’s assistant interrupted us. “A vote has just been called. We need to leave. Now.”

As Senator Heinrich started rushing down the hall, he called out, “When can I come back with my kids?”

“Anytime!” I yelled after him, not mentioning that I wouldn’t be there to show them around.

I returned to the showroom, where the Secretary had been talking to the Director and other assembled staff members. He turned to me and asked, regarding a slightly ragged pile of papers, “Why those?”

I confessed that even though it wasn’t very flashy, it was one of my favorite objects. “In 1903, an engineer took a drafting course by correspondence school. Those are his notes. I think it offers insightful parallels to today’s debates over online education.”

Secretary Clough flipped through the exercises, reminiscing about his own drafting coursework. Like the student from the past, he had always been reprimanded for his imprecise handwriting and lines of varying thickness. When he leaned forward to take a closer look at the school's name, a smile slowly spread across his face. "International Correspondence School. ICS. That's how my dad got his degree. He worked on banana boats out of New Orleans. Back and forth to South America, he decided to earn a certificate in refrigeration and marine engine maintenance."

He stepped back and looked directly at me. “What can I do to help engineering?”

I smiled, biting my tongue. “You need to talk to the director of curatorial affairs about that, sir.”

Barely a week after the tour, Gray forwarded an email from Clough to NMAH's entire staff. In response to upheaval in the federal budget negotiations, the secretary instituted a hiring freeze across all of the Smithsonian's museums. It was scheduled to last for ninety days, but everyone knew it would likely drag on much longer. In September, Clough announced he would retire in the fall of 2014, meaning the curatorial crisis would have to be handled by the next Secretary.

By chance, I had the opportunity to talk to Secretary Clough again. While attending a workshop sponsored by the Museum Trustee Association last fall, I slipped out of a less-than-stimulating session on institutional endowments. As I wandered through the halls of Dumbarton House, an eighteenth-century house in the heart of Georgetown, I spotted Clough, who was scheduled to give the lunchtime keynote address. He had recently announced his resignation from the Smithsonian, so I asked him what the recruiting firm was looking for in his replacement. He smiled and said nonchalantly, “You know, the usual. Someone who walks on water.”

It was October 4, 2013, and Clough began his address to the group by noting that the Smithsonian is usually open 364 days a year. The government shutdown had already closed the museum for four days, which ultimately stretched to over two weeks. Clearly, there is no returning to a mythic Golden Age when Congress could supply adequate funding for the Smithsonian’s needs, and dreaming of such a thing would be counterproductive. But, with a wry smile, Clough noted that coming up with an innovative solution would now be a job for his replacement.

The truth is, no matter how much Clough or his successor loves engineering (or any other collection in the Smithsonian’s nineteen museums, nine research centers, and one zoo), there’s very little the secretary can do to save them. The slow creep of the curatorial crisis is not something one person, no matter how powerful within the institution or the government, can reverse quickly. Faced with shrinking budgets and a dwindling staff, the Smithsonian’s curators will almost certainly have to come up with new ways of doing their jobs, but their fundamental tasks, no matter the budget constraints or the day-to-day challenges, will remain the same. The Smithsonian is not where we store the remnants of what we have forgotten. It’s where we go when we want to remember. Its curators help us access and interpret our country’s collective memory, whether that means organizing exhibits of physical objects, creating digital databases, or just dusting off overlooked artifacts and writing down what they see. Curators are worth fighting for. They help us remember, and they don’t let us forget.

Allison Marsh (marsha@mailbox.sc.edu) is an assistant professor of history at the University of South Carolina, where she oversees the museums and material culture track of the public history program. Lizzie Wade (lizziewade@outlook.com) is the Latin America correspondent for Science magazine. Her writing has also appeared in Aeon, Wired, and Slate, among other publications. She lives in Mexico City.

Aspidonia: In his work Kunstformen der Natur (1899–1904), Ernst Haeckel grouped together these specimens, including trilobites (which are extinct) and horseshoe crabs, so the viewer could clearly see similarities that point to the evolutionary process.

Little Cell, Big Science: The Rise (and Fall?) of Yeast Research

NIKI VERMEULEN

MOLLY BAIN

Trying to add another chapter to the long history of yeast studies, scientists at the cutting edge of knowledge confront the painful realities of science funding.

Manchester, the post-industrial heart and hub of northern England, is known for football fanaticism, the pop gloom and swoon of The Smiths, a constant drizzle of dreary weather, and—to a select few—the collaborative work churning in an off-white university building in the city's center. Given the town's love of the pub, perhaps it's a kind of karma that the Manchester Centre for Integrative Systems Biology (MCISB) has set up shop here: its researchers are devoted to yeast.

Wayne Aubrey, a post-doc in his early thirties, with a windswept foppish bob and round, kind brown eyes, is one of those special researchers. Early on a spring morning in 2012, Wayne moved swiftly from his bike to a small glass security gate outside the Centre. Outdoorsy and originally from Wales, he’d just returned from a snowboarding trip, trading in the deep white of powder-fresh mountains for the new, off-white five-story university outpost looming in front of him.

The building, called the Manchester Interdisciplinary Biocentre (MIB), features an open-plan design, intended to foster collaboration among biochemists, computer scientists, engineers, mathematicians, and physicists. The building also hosts the Manchester Centre—the MCISB—where Wayne and his colleagues work. Established in 2006 and funded by a competitive grant, the MCISB was intended to run for at least ten years, studying life by creating computer models that represent living organisms, such as yeast. At its height in 2008, the multidisciplinary Centre housed about twelve full-time post-docs from a wide array of scientific fields, all working together—an unusual, innovative approach to science. But in the age of budget cuts and changing priorities, the funding was already running out after only six years, leaving the Centre’s work on yeast, the jobs of Wayne’s colleagues, and Wayne’s own career path hanging in the balance.

Nevertheless, for the moment at least, there was work to be done. Wayne climbed three flights of stairs and grabbed his lab coat. Wayne's background, like that of most of his colleagues at the Centre, is multidisciplinary: before coming to Manchester, he was involved in building "Adam," a robot scientist that automates some experiments so human scientists don't have to run each one by hand. This special background in both biology and computer science landed Wayne the job at the Manchester Centre.

But on this day, his lab work was that of a classic “wet biologist.”

As he unstacked a pile of petri dishes into a neat line, Wayne reported with a quick smile, "We grow a lot of happy cells." The lab is like a sterile outgrowth of, or perhaps the inspiration for, The Container Store: glass and plastic jars, dishes, bottles, tubes, and flasks, all with properly sized red and blue sidekick lids, live on shelves above rows of lab counters. Tape and labeling stickers protrude somewhat clumsily from various drawers and boxes; they get a lot of use. Small plastic bins, like fish tackle boxes with near-translucent tops, sit at every angle along the counters. Almost as a necessary afterthought, even in this multidisciplinary center, computer monitors are pushed in alongside it all, cables endlessly festooned between shelves and tables.

Wayne picked up a flask of solution, beginning the routine of pouring measured amounts into the dishes’ flat disks. “The trick,” he noted, “is to prevent bubbles, so that the only thing visible in the dishes later is yeast.”

Every lab has its own romances and rhythms, its own ideals and intrigues—and its own ritualistic routines of daily prep.

As one of the organisms most often used in biology, yeast has been at the lab bench for centuries. You could say it's the little black dress of biology labs. Biologists keep it handy because, as a simple, single-cell organism, yeast has proven very functional, versatile, and useful for all sorts of parties—if, that is, by "parties," you mean methodical and meticulous experiments, each an elaborate community effort.

How yeast became the microbiologist’s best friend has everything to do with another party favorite (and big business): alcohol. Industrialists and governments alike paid early biologists and chemists to tinker with the fermentation process to figure out why beer and wine spoiled so easily. The resulting research tells an important story, not only about the development of modern biology, but about the process of scientific advance itself.

In modern labs, yeast is most often put to work for the secrets it can unlock regarding human health. Through new experiments and computer models, systems biologists are mapping how one cell of yeast functions as a living system. Though we know a lot about the different components that make up a yeast cell—genes, proteins, enzymes, etc.—we do not yet know how these components interact to constitute living yeast. The researchers working at the Manchester Centre are studying this living system, trying to understand these interactions and make them visible. Their ultimate goal: to create a computer model of yeast that can show how the different elements interact. Their hope is that this model of yeast will shed light on how more complex systems—a heart, say, or a liver, or even a whole human being—function, fail, survive, and thrive.

But a model of yeast would demand enormous quantities of data, and a way to make sense of them. Next to the petri dishes, Wayne set up pipettes, flasks, and a plastic tray, his movements easy and measured. But this experiment, complicated and time-sensitive, couldn't be performed solo. Wayne was waiting for Mara Nardelli, his lab partner and fellow yeast researcher. When she arrived, they would begin a quick-paced dance from dish to dish, loading each with sugar to see exactly how quickly yeast eats it. They hoped the experiment would give them clear numerical data about the rate of yeast's absorption of sugar. If the results matched other findings, they might prove useful to Wayne and Mara's other lab mates—fellow systems biologists who were attempting to formulate yeast and its inner workings into mathematical terms and then translate those terms into visual models.

Devising a computerized model of yeast has been the main goal of the Manchester Centre—but after six years, it seems, it’s still a ways off. Farther off, to be honest, than anyone had really expected. Yeast seemed fairly straightforward: it is a small, simple organism contained completely in one cell. And yet, it has proved surprisingly difficult to model. Turns out, life—even for as “simple” an organism as yeast—is very complex.

It seems odd, Wayne conceded, that after hundreds of years of research, we still don’t know how yeast works. “But,” he countered, “yeast remains the most well understood cell in the world. I know more about yeast than about anything else, and there are probably as many yeast biologists in the world as there are genes in yeast—which is six thousand.” The sheer number of genes helped Wayne underscore yeast’s complexity: “To fully map the interactions, you have to look at the ways in which these 6,000 genes interact, so that means (6000×6001)/2, which is over 18 million potential interactions. You cannot imagine!” And the complexity doesn’t stop there, Wayne continued: “Now you have the interactions between the genes, but you would still need to add the interactions of all the other components of the yeast cell in order to create a full model.” That is, you’d have to take into account the various metabolites and other elements that make up a yeast cell.
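
Wayne's arithmetic is easy to check: counting every unordered pair among the roughly 6,000 genes, plus each gene paired with itself, gives

\[
\frac{6000 \times 6001}{2} \;=\; \binom{6000}{2} + 6000 \;=\; 17{,}997{,}000 + 6000 \;=\; 18{,}003{,}000,
\]

just over eighteen million potential pairwise interactions, before any of the cell's other components are even counted.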

Despite the travails of the yeast modelers, the field of systems biology still hopes to model even more complex forms of life to revolutionize medicine: British scientist and systems biology visionary Denis Noble has had a team working on a virtual model of the heart for more than twenty years, while a large national team of German researchers, funded by the German federal government, is developing a virtual liver model. Yet, if we cannot even understand how a single-cell organism functions as a living system, one might reasonably ask whether we can safely scale up to bigger, more complex organisms. Could we really—as some scientists hope—eventually model a complete human being?

Moreover, will we be able to use these models to improve human health and care? Will systems biology bring us systems medicine? The American systems biologist Leroy Hood, who was recently awarded the prestigious National Medal of Science by President Obama, has called this idea “P4 medicine”—the p’s standing for Predictive, Preventive, Personalized, and Participatory. He imagines that heart and liver models could be “personalized,” meaning that everybody would have his or her own model, based on individual genetic and other biological information. Just as you can now order your own genetic profile (for a price), you would be able to have a model of yourself, which doctors could use to diagnose—or even “predict”—your medical problems. Based on the model, “preventive” measures could also be taken. And it’s not only medical professionals who could help to cure and prevent disease, but the patient, too, could “participate” in this process.

For instance, in the case of heart diseases, a model could more clearly show existing problems or future risks. This could help doctors run specific tests, perform surgery, or prescribe medicine; and patients themselves could also use special electronic devices or mobile phone apps to measure cholesterol, monitor their heartbeat, and adjust eating patterns and physical activity in order to reduce risks. Or a patient could use a model of his or her liver—the organ that digests drugs—to determine what drugs are most effective, what sort of dose to take, and at what time of the day. Healthcare would increasingly become more individually tailored and precise, and people could, in effect, become their own technology-assisted doctors, managing their own health and living longer, healthier lives because of it.

It sounds pretty amazing—and if the systems biologists are right about our ability to build complex models, it could be reality someday. But are they right, or are they overly optimistic? Researchers’ experience with yeast suggests that a personalized model of your liver on ten years of antidepressants and another of your heart recovering from invasive valve repair could be further off than we’d like. They might even be impossible.

It was turning out to be hard enough to model the simple single-cell organism of yeast. But doing so might be the crucial first step, and the yeast researchers at the Centre weren’t ready to give up yet. Far from it. As Wayne finished setting up the last of the petri dishes, Mara walked in. Golden-skinned, good-natured, and outgoing, she was the social center of the MCISB. For a moment, she and Wayne sat at the lab bench, reviewing the preparations for the day’s experiment in a sort of conversational checklist.

Then Mara stood, tucking away her thick Italian curls and looking over the neatly arranged high-tech Tupperware party. She sighed and turned back to Wayne. “Ready?”

People had been using yeast—spooning off its loamy, foamy scum from one bread bowl or wine vat and inserting it in another—for thousands of years before they understood what this seething substance was or what, exactly, it was doing. Hieroglyphs from ancient Egypt already suggested yeast as an essential sidekick for the baker and brewer, but they didn't delineate its magic—that people had identified and isolated yeast to make bread rise and grape juice spirited was magic enough. As the great anatomist and evolutionary theory advocate Thomas Henry Huxley declared in an 1871 lecture, "It is highly creditable to the ingenuity of our ancestors that the peculiar property of fermented liquids, in virtue of which they 'make glad the heart of man,' seems to have been known in the remotest periods of which we have any record."

All the different linguistic iterations of yeast—gäscht, gischt, gest, gist, yst, barm, beorm, bären, hefe—refer to the same descriptive action and event: to raise, to rise, to bear up with, as Huxley put it, “‘yeasty’ waves and ‘gusty’ breezes.” This predictable, if chaotic and muddy, pulpy process—fermentation—was also known to purify the original grain down to its liquid essence—its “spirit”—which, as Huxley described it, “possesses a very wonderful influence on the nervous system; so that in small doses it exhilarates, while in larger it stupefies.”

Though beer and wine were staples of everyday living for thousands and thousands of years, wine- and beer-making were tough trades—precisely because no one knew what, exactly, the gift of yeast was. Until about 150 years ago, mass spoilage of both commercial and homemade alcoholic consumables was incredibly common. Imagine your livelihood or daily gratification dependent on your own handcrafted concoctions. Now, imagine stumbling down to your cellar on a damp night to fetch a nip or a barrel for yourself, your neighbors, or the local tavern. Instead you're assaulted by a putrid smell wafting from half of your wooden drums. You ladle into one of your casks and discover an intensely sour or sulfurous brew. In the meantime, some drink has sloshed onto your floor, and the broth's so rancid, it's slick with its own nasty turn. What caused this quick slippage into spoilage? This question enticed many an early scientist to the lab bench—in part because funding was at the ready.

In a 2003 article on yeast research in the journal Microbiology, James A. Barnett explains that because fermentation was so important to daily life and whole economies, scientific investigations of yeast began in the seventeenth century and were formalized in the eighteenth century, by chemists—not “natural historians” (as early biologists were called)—who were originally interested in the fermentation process as a series of chemical reactions.

In late eighteenth-century Florence, Giovanni Valentino Fabbroni was part of the first wave of yeast research. Fabbroni—a true Renaissance man who dabbled in politics and electro-chemistry, wrote tomes on farming practices, and helped Italy adapt the metric system—determined that in order for fermentation to begin, yeast must be present. But he also concluded his work by doing something remarkable: Fabbroni categorized yeast as a “vegeto-animal”—something akin to a living organism—responsible for the fermentation process.

Two years later, in 1789, the Frenchman Antoine Lavoisier focused on fermentation in winemaking, again regarding it as a chemical process. As Barnett explains, "he seem[ed] to be the first person to describe a chemical reaction by means of an equation, writing 'grape must = carbonic acid + alcohol.'" Lavoisier, who was born into the aristocracy, became a lawyer while pursuing everything from botany to meteorology on the side. At twenty-six, he was elected to the Academy of Sciences, bought a share in a private firm that collected taxes for the state, and, while working on his own theory of combustion, eventually came to be considered France's "father of modern chemistry." France, then the world's top supplier of wine (today, it ranks second, after Italy), needed Lavoisier's discoveries—and badly, too: the government had to stem the literal and figurative spoiling of its top-grossing industry. But as the revolution took hold, Lavoisier's fame and wealth marked him as a servant of the old regime. Arrested for his role as a tax collector, Lavoisier was tried and convicted as a traitor and decapitated in 1794. The Italian-born mathematician and astronomer Joseph-Louis Lagrange publicly mourned: "It took them only an instant to cut off his head, and one hundred years might not suffice to reproduce its like."

Indeed, Lagrange was onto something: the new government’s leaders were very quickly in want of scientific help for the wine and spirits industries. In 1803, the Institut de France offered up a medal of pure gold for any scientist who could specify the key agent in the fermenting process. Another thirty years passed before the scientific community had much of a clue—and its discovery tore the community apart.

By the 1830s, with the help of new microscope magnification, Friedrich Kützing and Theodor Schwann, both Germans, and Charles Cagniard-Latour, a Frenchman, independently concluded that yeast was responsible for fermenting grains. And much more than that: these yeasts, the scientists nervously hemmed, um, they seemed to be alive.

Cagniard-Latour focused on the shapes of both beer and wine yeasts, describing their bulbous cellular contours as resembling not chemical substances but organisms of the vegetable kingdom. Schwann pushed the categorization even further: after persistent microscopic investigation, he declared that yeast looks like, acts like, and clearly is a member of the fungi family—"without doubt a plant." He also argued that a yeast's cell was essentially its body—meaning that each yeast cell was a complete organism, somewhat independent of the other yeast organisms. Kützing, a pharmacist's assistant with limited formal training, published extensive illustrations of yeast and speculated that different types of yeast fermented differently; his speculation was confirmed three decades later. From their individual lab perches, each of the three scientists concluded the same thing: yeast is not only alive, but it also eats the sugars of grains or grapes, and this digestion, which creates acid and alcohol, is, in effect, fermentation.

This abrupt reframing of fermentation as a feat of biology caused a stir. Some giants of the field, like the chemist Justus von Liebig, found it flat-out ridiculous. A preeminent chemistry teacher and theorist, von Liebig proclaimed that if yeast was alive, the growth and integrity of all science were at grave risk: "When we examine strictly the arguments by which this vitalist theory of fermentation is supported and defended, we feel ourselves carried back to the infancy of science." Von Liebig went so far as to co-publish anonymously (with another famous and similarly offended chemist, Friedrich Wöhler) a satirical journal paper in which yeasts were depicted as little animals feasting on sugar and pissing and shitting carbonic acid and alcohol.

Though he himself did little experimental research on yeast and fermentation, von Liebig insisted that the yeasts were just the result of a chemical process. Chemical reactions could perhaps produce yeast, he allowed, but the yeasts themselves could never be alive, nor active, nor the agents of change.

Von Liebig stuck to this story even after Louis Pasteur, another famous chemist, took up yeast study and eventually became the world’s first famous microbiologist because of it.

These long-term investigations into and disciplinary disputes about the nature of yeast reordered the scientific landscape: the borders between chemistry and biology shifted, giving way to a new field, microbiology—the study of the smallest forms of life.

Back in modern Manchester, Mara and Wayne danced a familiar dance. Behind the lab bench, their arms swirled with clocklike precision as they fed the yeast cells a sugar solution in patterned and punctuated time frames, and then quickly pipetted the yeast into small conical PCR tubes.

Soon, Mara held a blue plastic tray of upside-down conical tubes, which she slowly guided into the analysis machine sitting on top of the lab counter. The machine looked like the part of a desktop computer that houses the motherboard, the processor, the hard drive, its fan, and all sorts of drives and ports. It was the width of at least two of those bulky, boxy system units, and half of it gleamed behind tinted windows. Thick cables and a hose streamed from its backside like tentacles.

Biologists have a favorite joke about yeast: like men, it's only interested in two things—sex and sugar. Wayne explained, "This is because yeast has one membrane protein, or cell surface receptor, that binds sugar and one that binds the pheromone of the other mating type." Sugar uptake is what Mara and Wayne have been investigating: the big machine scans, measures, and analyzes how and how quickly the yeast has its way with sugar. The results appear on the front screen as numerical data and graphs: the readings for each set of tubes are cast into a graph showing the pattern of the yeast cells' sugar uptake over time. Usually, the graphs show the same types of patterns—lines slowly going up or down—with little variance from graph to graph. If great discrepancies in the patterns emerge, then the scientists usually know something went wrong in the experiment. Mara, Wayne, and the Centre's "dry biologists" (those who build mathematical models of yeast with computers) hoped that understanding how yeast regulates its sugar uptake would help them better understand how the cells grow. Yeast cells grow quickly and can be seen as a proxy for human cell generation because processes in both kinds of cells are similar. So much about yeast, Wayne explained, is directly applicable to human cells. If we know more about how yeast cells work, we'll have a better sense of how human cells function or malfunction. Insights into the development and growth of yeast cells can be translated to growth in healthy human cells, as well as in unhealthy ones—like malignant cancer cells.
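
For readers curious what such an uptake curve looks like in its simplest mathematical form, the following is a minimal, purely illustrative sketch, not the Centre's model: it assumes the sugar transporter follows standard Michaelis-Menten saturation kinetics, and every parameter name and value in it is hypothetical.

```python
# Illustrative toy only: external sugar declining over time as yeast cells
# take it up, assuming Michaelis-Menten kinetics for the transporter.
# Parameter names and values are invented; this is not the Centre's model or data.

def uptake_rate(sugar_mM: float, v_max: float = 1.0, k_m: float = 2.0) -> float:
    """Uptake rate (mM per minute) at a given external sugar concentration."""
    return v_max * sugar_mM / (k_m + sugar_mM)

def simulate(sugar_mM: float = 20.0, minutes: int = 60):
    """Step the sugar concentration forward one minute at a time (simple Euler)."""
    trace = []
    for minute in range(minutes + 1):
        trace.append((minute, sugar_mM))
        sugar_mM = max(0.0, sugar_mM - uptake_rate(sugar_mM))  # one-minute step
    return trace

if __name__ == "__main__":
    # Prints the kind of slowly falling curve the analysis machine's graphs
    # trace out: sugar dropping off as the cells consume it.
    for minute, sugar in simulate()[::10]:
        print(f"t = {minute:3d} min   sugar = {sugar:6.2f} mM")
```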

Mara had been at the lab working with yeast for a long time. In many ways, she had been with the Centre before it even became the Centre. Fourteen years earlier, in 1998, Mara had been wooed away from her beloved Italy for a post-doc in Manchester. Even though she moved here reluctantly from Puglia, a particularly sweet spot on the heel of Italy's boot, she became a Mancunian—an inhabitant of Manchester. Her childhood sweetheart followed her to Manchester, and together they had a son, who himself wound up at the University of Manchester. Mara even wrote a blog about Manchester living for fellow Britain-bound Italians.

Mara’s research background was in biochemistry. She obtained her PhD from the University of Naples Federico II and University of Bari, and then continued in Bari as a Fellow of the Italian National Research Council (CNR), working on gene expression in human and rat tumors. The professor with whom she worked as a post-doc in Manchester, John McCarthy, was one of the key minds behind the MIB, so she’s been with the Manchester Interdisciplinary Biocentre since its start in 2006. Within the MIB, she became part of the Manchester Centre for Integrative Systems Biology (MCISB), which was set up by Professor Douglas Kell, who—also due to Manchester’s tradition in yeast research and its dedication to the new biosciences—won a national funding competition to create a center to make a model of yeast in the computer. The MCISB quickly attracted new professors and researchers. The core group of scientists to work with yeast, comprised largely of twenty- and thirty-somethings, consisted of both “wet” scientists, who experiment with life in the lab, and “dry” scientists, who work behind computers, building and revising models based off the results from the wet scientists’ experiments. Mara supported its teaching program, mentored PhD students, and assisted post-docs like Wayne, helping to conduct experiments in the right way.

The core group of MCISB’s researchers shared, essentially, one large office not far from the lab space. This was intentional; the building’s architects had created a space that would foster innovation and discovery in biology. As Wayne described it, “It’s a very co-ed sort of approach to the building. You encounter and bump into people more frequently because of the layout of the building. You are more eager to go and speak to people, and ask them, you know, how do I do this, how do I use that, or who belongs to this piece of equipment.”

Over the years, instead of toiling away in separate labs under separate professors, the yeast researchers—working in one big room together on one unified project—felt the uniqueness of both their endeavor and community. Thursday afternoons, the team would often go out to Canal Street, in Manchester’s “gay village,” for a couple of beers. Friendships developed. Two of them even fell in love and got married. “We became a big family,” said one of them, and the others all agreed.

Beginning the long wait for their results, Mara and Wayne cleaned up the bench in an easy quiet, gathering up used petri dishes and pipettes, and nodding to each other as they went.

Louis Pasteur liked his lab in Arbois.

Unlike most nineteenth-century laboratories of world-class scientists, the Arbois lab was small and light and simple. Long, fine microscopes were pedestaled on clean, sturdy wooden tables. While working on his lab logs, Pasteur, whose neat beard across his broad face accentuated the stern downward pull of the corners of his mouth, could sit on a bowed-back chair and look out onto pastoral rolling hills, speckled with vineyards. The lab also had the great advantage of being near his family home.

Pasteur was born in Dole, in the east of France, in 1822, but when he was about five, his father moved the family south to Arbois to rent a tannery—a notoriously messy and smelly trade. The area was known for its yellow and straw wines and its perch on the Cuisance River. Pasteur spent his childhood there.

Arbois is where, as a child, Pasteur declared he wanted to be an artist. It’s where he moved back from a Parisian boarding school at the age of sixteen, declaring he was homesick. And it’s where he’d return to spend almost every summer of his adult life. Eventually, Pasteur would bury his mother, father, and three of his children—two of whom died of typhoid fever before they were ten—in Arbois. And it was inArbois—not in his lab at the prestigious École normale supérieure in Paris, nor in the lab at his university post in Lille—where he bought a vineyard and set up a lab to test his initial ideas about wine and its fermentation.

Before Pasteur developed—which is to say, patented and advocated, just like a twenty-first-century entrepreneurial scientist—“pasteurization” as a way of reducing harmful bacteria in foods and beverages, and before he introduced (and campaigned for) his “germ theory of disease,” which led him to develop the rabies vaccine, Pasteur first worked as a chemist on yeast, specifically researching the fermentation process in wine and spirits.

During the Napoleonic wars at the beginning of the nineteenth century, France's alcohol industry was dangerously imperiled: Lavoisier, the leading fermentation scientist, had been decapitated, and Britain had cut off France's supply of cane sugar from the West Indies. Not only were French beer-producers and winemakers, who had wheat and grapes aplenty, still struggling with spoiling yields, but now the spirits-makers had no sugar to wring into hard alcohol. So, in a serious fix, France began cultivating sugar from beets instead. This helped, but forty to fifty years later, when Lille had become France's capital of beet production, spirits-producers and winemakers alike were still struggling with spoilage; nobody in the alcohol industry knew how to contain or control fermentation.

Lille also happened to be the place where Pasteur worked as a chemistry professor. When one of Pasteur's students introduced his professor to his father, a spirits-man with fermentation woes, Pasteur suddenly had access and funding to get up close and personal with yeast. He began to watch and parse apart its fermentation, quickly concluding in an 1860 paper that the Berliner Theodor Schwann had been correct decades earlier: yeast was a microbe, a single-celled fungus. In short: alive. He also maintained that it was yeast that was essential to fermentation: its "vital activity," Pasteur argued, caused fermentation to both begin and end.

Yeast has operated a bit like an oracle over the past two hundred-plus years for many a scientist. It not only converted sugars to alcohol; it also converted Pasteur from chemist to biologist.

More specifically, Pasteur became a microbiologist. The resolution of the discipline-wide fight over the nature of yeast—particularly whether or not it was “vital,” that is, “living”—helped produce two new fields: microbiology and biochemistry. It awakened the scientific community to new possibilities and questions: what other kind of life happens on a small scale, and what can be said about the chemistry of life?

Though Pasteur was catching all sorts of flak over it, by the early 1860s, his fermentation work also caught the attention of an aide to the emperor. The aide was increasingly concerned about the bad rap France’s chief export was accumulating across Europe. If yeast was the key actor in fermenting all alcohol, was it at all related to what most vintners at the time thought damaged and spoiled their wines—what they called l’amer, or “wine disease”? Could it be that yeast was both creator and culprit of this disease?

With a presidential commission at his back (Napoleon III was both the last emperor and first president of France), Pasteur set out on a tour of wineries across France. Though it may have been during this sojourn that Pasteur spun the line “A bottle of wine contains more philosophy than all the books in the world,” a drunken holiday this was not. Pasteur solemnly reported back to the crown: “There may not be a single winery in France, whether rich or poor, where some portions of the wine have not suffered greater or lesser alteration.”

And with this, his initial fieldwork completed, Pasteur set up shop in his favorite winery region, Jura, home to, of course, Arbois.

With his light brown eyes, framed by an alert brow and well-earned bags, Pasteur alternately gazed out the window of his rustic laboratory and then down through one of his long white microscopes. Again and again, he watched yeast cavort with grape juice in its fermentation dance. But he knew there was at least one other big player whose influence he had yet to understand fully: air.

Excluding air from the party and allowing it in methodically, he found that exposing yeast and wine to too much air inevitably invites in airborne bacteria, which break down the alcohol into acid, resulting in vinegar. (With one eye on knowledge and one on practical application, Pasteur quickly passed this information on to the vinegar industry.) Air allowed in too much riffraff. In order to keep wine fine, the event had to remain exclusive—or you needed some kind of keen agent to kill the interlopers systematically. It didn’t take Pasteur too long to identify this discriminating friend: heat. Heating the wine, slowly, to about 120°F would kill the bacteria without destroying the taste of the wine.

Vintners at first found this idea near sacrilege. Many resisted, but after competitors who adopted the method saw bigger, better yields, the holdouts quickly came around. In fact, this procedure not only revolutionized winemaking and beer-brewing, saving France's top export and industry, it was also the beginning of the pasteurization craze and Pasteur's further work on microorganisms as the germs that transmit infectious disease. Science, profit-making, and improved public health turned out to be mutually reinforcing, each propelling the others forward.

Mara walked over to the analysis machine to check the progress of the experiment. Points and curves had begun to appear on the screen—each yeast cell’s inner workings translated as numbers and lines.

But Mara was not happy. Comparing these results with the results of two previous experiments, she saw that the difference was big—too big. “Something must have gone wrong,” she said.

She beckoned Wayne over, and he quickly agreed. “No, this does not look how it should.”

As Pasteur could attest, anyone who has worked in a lab, white coat and all, knows that experiments often do not work out. A failed experiment is certainly not the sign of a bad scientist, but it does make lab work tedious. In a speech he gave in his birthplace, Dole, Pasteur spoke of his father's influence on his own lab-bound life: "You, my dear father, whose life was as hard as your trade, you showed me what patience can accomplish in prolonged efforts. To you I owe my tenacity in daily work."

Getting things right—isolating a potential discovery, testing it, and retesting it—often requires endless attempts, dogged persistence, and the ability to endure a lot of cloudy progress. Wayne contextualized these results thus: “This happens all the time. There is a lot of uncertainty. [It’s as though] failed experiments do not exist: if experiments don’t work, they’re never published, so you don’t know. So you have to reinvent the wheel yourself and develop your own knowledge about what works and what does not. Molecular biology is not like the combustion engine—where billions of pounds have been spent to understand the influence of every parameter and variable. There are still many unknowns in biology, and established methods do not work in every instance.”

Wayne and Mara were in good company, though, which provided some comfort. Working at the Centre, they were not only a part of a community of like-minded scientists, but also a part of a global network of scientists, all working on the modeling of yeast. In 2007, the MCISB hosted and organized a “yeast jamboree,” a three-day all-nighter—a Woodstock for yeast researchers from around the world. The jamboree resulted in a consensus on a partial model of yeast (focused on its metabolism) and a paper summarizing the jamboree’s findings, which have been cited by fellow systems biology researchers more than three hundred times to date. The “yeast jamboree” was so productive that it inspired another jamboree conference—this one focused on the human metabolic system.

But despite the jamboree’s high-profile success, yeast’s unexpected level of complexity has been a source of frustration for researchers trying to model the whole of it precisely. Three years ago, the Centre’s yeast team started to address this challenge by revising their approach: instead of looking at all of yeast’s genes and determining which proteins each makes and what activity these proteins perform, the team began, first, to identify each activity and, then, research the mechanism behind it. But even with this revised tactic and the doubling down of efforts, a complete model of yeast was not yet done, and time and funding were running out. The promise of extending the grant for another five years was broken when the University, after the global financial crunch, reoriented its priorities. As a result, the funding was almost gone.

“Well,” Mara continued. “These results are clearly not what we are looking for. We have to do it all over again. When do we have time?”

A year later, on a warm evening in June 2013, Wayne was again working behind the same lab bench.

As much as he enjoyed his work, he could imagine doing other things than sitting in a lab until 10 p.m.: “Read a book, sit in the sun, go to the pub,” he shrugged, then gave a wee grin. “You know, have a family.”

Wayne was not working on yeast at the moment, however; he was running a series of enzymology experiments on E. coli for a large European project, trying to finish the results in time for a meeting in Amsterdam. The European grant was covering his salary. That was the only reason he still had a job at MCISB. Other post-docs of the Centre were not so lucky. “There was so much expertise,” Wayne reflected. “It was a good group, and now it has become much smaller. Before, it was much more cohesive; it had much more of a team feel about it. Now the group is fragmented…. Everybody is working on different things and in different projects.”

Soon, Wayne would leave Manchester, too, for a lectureship at Aberystwyth University, back home in his seaside Wales. It is a significant accomplishment, and Wayne is grateful and thrilled for the opportunity, even as he regrets the loss of the community at MCISB.

The trouble with a burgeoning research lab that ends its work prematurely is that the institutional knowledge and expertise built up in the collaboration are hard to codify, box up, and ship elsewhere. Ideas and emerging discoveries are, in part, relationally based, dependent on the complex interactions and conversations continually rehearsed and refined in a living community—an aspect of the “scientific method” that is rarely remarked upon. These communities can be seen as “knowledge ecologies”: a community cultivates a particular set of expertise and insight that, Wayne explained, includes knowing not only what works, but what doesn’t. As in an ecosystem, a disruption of a particular component in a sprawling chain of connections can affect the health of the whole. Like a living organism, a collaboration is a complex system, and in the absence of nourishment—that is, funding—it breaks down. As a result, the human capital specific to this community and project, with its knowledge of yeast—and especially the collaborative understanding built up around that endeavor—has been lost.

And yet, given all the difficulties the lab had encountered in trying to build a model of yeast, and given that funding for science is never unlimited, and that its outcomes are never predictable, how is it possible to know whether an approach should be abandoned as a dead end or whether it just needs more money and more time to bear fruit?

Wayne fiddled a final pipette into a plastic tray and traipsed toward the analysis machine.

He waited for the graphs to appear. He would repeat this trial three more times before the night was through.

The lab bench next to his sat empty. It had been Mara’s, but above it now hung a paper sign with another name scrawled on it.

Much of Mara’s yeast work had been aimed at understanding how yeast cells grow, which the researchers had hoped might offer insight into how cancer cells grow. Ironically, while Mara was working on those experiments, her own body was growing a flurry of cancer cells. In 2012, Mara discovered she had bowel cancer, which she first conquered, only to become aware of its return in April 2013. It then quickly spread beyond control.

Mara died at the end of May in 2013. Those remaining at the Centre were devastated by her loss—Mara had been such a young, vibrant, and central presence in the community, and she was gone a year after getting sick. The researchers left at the Centre and other colleagues from the university rented a bus so they could all go to her funeral together.

Wayne was dumbfounded by Mara’s absence. When asked what he learned from her and what he had been feeling with her gone, he replied that though she was “very knowledgeable” and he had “worked with her loads,” right now, he “could hardly summarize.”

Without continued and concentrated funds for the Centre, its future is uncertain. Not only is a complete model of yeast still out of reach, so, too, are the insights and contributions such a model might hold for cancer research, larger organ models, the improvement of healthcare, and the entire systems biology community.

Determining that yeast was a living organism took about two hundred years—but it also took more than that. While Pasteur may seem like a one-man revolution, he was also part of a collaboration, albeit one across countries and time. His work built on the works of Kützing, Schwann, and Cagniard-Latour, who worked on yeast twenty years before Pasteur and who built their own works on that of Lavoisier, whose work predated theirs by another fifty years and who likely built his work on the research of his contemporary, Fabbroni. Moreover, it took industry investment, government support, the advent of advanced microscopes, and eminent learned men rassling over its essence before yeast was eventually understood as it is now: a fundamental unit of life.

Wayne and Mara, too, are descendants of this yeast work and scientific struggle. But while Pasteur and his contemporaries’ research was directly inspired and validated by the use of yeast in brewing and baking, Wayne and Mara’s lab is not located in a vineyard. The MCISB yeast researchers work instead in a large white office building, in the middle of Manchester, where, using the tools of modern molecular biology, they probe, pull apart, and map yeast. The small-scale science of Pasteur’s time has grown big—the distance between research and application widening as science has professionalized and institutionalized over time. The vineyard has been replaced by complex configurations of university-based laboratories, specialized health research institutes, pharmaceutical companies, policymaking bodies, regulatory agencies, funding councils, etc.—within which researchers of all types and stripes try to organize, mobilize, set up shop, and get to the bench or the computer.

Although the modeling of yeast is certainly related to application, insights about life derived from yeast modeling will likely take some time to result in anything that concretely and directly helps to cure cancer—because the translation of research from lab bench to patient’s bedside is far from straightforward. Sure, these fundamental investigations into the nature of life bring new knowledge, but what, exactly, yeast will teach us and how that will translate into applications is still unclear. In other words, whether the promises of the research will become reality is unknown. This uncertainty is difficult to handle—not only for the scientists performing the experiments, building the computer models, and composing the grants, but also for the pharmaceutical industry representatives and government policymakers making funding decisions.

The fundamental character and exact function of yeast were not understood for a long time; now, we’re struggling to understand yeast’s systematic operations. How long will this struggle take? And do we really understand what’s at stake? Within the dilemma of finite funding resources, how do we figure out what research will eventually translate into practice? Pasteur understood the importance of these issues about the funding of research. In 1878, he wrote, “I beseech you to take interest in these sacred domains so expressively called laboratories. Ask that there be more…for these are the temples of the future, wealth, and well-being. It is here that humanity will grow, strengthen, and improve. Here, humanity will learn to read progress and individual harmony in the works of nature.”

In many ways, this is what our modern yeast devotees are also hoping to discover: not only whether yeast may once again be that ideal lab partner, the organism key to the next frontier of science, but also whether our communities, our funding bodies, and our scientific institutions will continue to invest the time, infrastructure, and patience needed to work with yeast and to await the next level of discovery this little organism has to offer us.

Niki Vermeulen (nikivermeulen@gmail.com) is a Wellcome Research Fellow in the Center for the History of Science, Technology and Medicine of the University of Manchester (UK). Molly Bain (mollybain@gmail.com), a writer, teacher, and performer, is working on an MFA in nonfiction at the University of Pittsburgh.

Forum

Evidence-driven policy

In “Advancing Evidence-Based Policymaking to Solve Social Problems” (Issues, Fall 2013), Jeffrey B. Liebman has written an informative and thoughtful article on the potential contribution of empirical analysis to the formation of social policy. I particularly commend his recognition that society faces uncertainty when making policy choices and his acknowledgment that learning what works requires a willingness to try out policies that may not succeed.

He writes, “If the government or a philanthropy funds 10 promising early childhood interventions and only one succeeds, and that one can be scaled nationwide, then the social benefits of the overall initiative will be immense.” He returns to reinforce this theme at the end of the article, writing, “What is needed is a decade in which we make enough serious attempts at developing scalable solutions that, even if the majority of them fail, we still emerge with a set of proven solutions that work.”

Unfortunately, much policy analysis does not exhibit the caution that Liebman displays. My recent book Public Policy in an Uncertain World observes that analysts often suffer from incredible certitude. Exact predictions of policy outcomes are common, and expressions of uncertainty are rare. Yet predictions are often fragile, with conclusions resting on critical unsupported assumptions or on leaps of logic. Thus, the certitude that is frequently expressed in policy analysis often is not credible.

A disturbing feature of recent policy analysis is that many researchers overstate the informativeness of randomized experiments. It has become common to use two of the terms in the Liebman article—”evidence-based policymaking” and “rigorous evaluation methods”—as code words for such experiments. Randomized experiments sometimes enable one to draw credible policy-relevant conclusions. However, there has been a lamentable tendency of researchers to stress the strong internal validity of experiments and downplay the fact that they often have weak external validity. (An analysis is said to have internal validity if its findings about the study population are credible. It has external validity if one can credibly extrapolate the findings to the real policy problem of interest.)

Another manifestation of incredible certitude is that governments produce precise official forecasts of unknown accuracy. A leading case is Congressional Budget Office scoring of the federal debt implications of pending legislation. Scores are not accompanied by measures of uncertainty, even though legislation often proposes complex changes to federal law, whose budgetary implications must be difficult to foresee.

Why do policy analysts express certitude about policy impacts that, in fact, are rather difficult to assess? A proximate answer is that analysts respond to incentives. The scientific community rewards strong, novel findings. The public takes a similar stance, expecting unequivocal policy recommendations. These incentives make it tempting for researchers to maintain assumptions far stronger than they can persuasively defend, in order to draw strong conclusions.

We would be better off if we were to face up to the uncertainties that attend policy formation. Some contentious policy debates stem from our failure to admit what we do not know. Credible analysis would make explicit the range of outcomes that a policy might realistically produce. We would do better to acknowledge that we have much to learn than to act as if we already know the truth.

CHARLES F. MANSKI

Board of Trustees Professor in Economics and Fellow of the Institute for Policy Research

Northwestern University

Evanston, Illinois

cfmanski@northwestern.edu

Manski is author of Public Policy in an Uncertain World: Analysis and Decisions (Harvard University Press, 2013).

Model behavior

With “When All Models Are Wrong” (Issues, Winter 2014), Andrea Saltelli and Silvio Funtowicz add to a growing literature of guidance on handling scientific evidence for scientists and policymakers; recent examples include Sutherland, Spiegelhalter, and Burgman’s “Policy: Twenty tips for interpreting scientific claims,” and Chris Tyler’s “Top 20 things scientists need to know about policymaking.” Their particular focus on models is timely as complex issues are of necessity being handled through modeling, prone though models and model users are to misuse and misinterpretation.

Saltelli and Funtowicz provide mercifully few (7, more memorable than 20) “rules,” sensibly presented more as guidance and, in their words, as an adjunct to essential critical vigilance. There is one significant omission; a rule 8 should be “Test models against data”! Rule 1 (clarity) is important in enabling others to understand and gain confidence in a model, although it risks leading to oversimplification; models are used because the world is complex. Rule 3 might more kindly be rephrased as “Detect overprecision”; labeling important economic studies such as the Stern review as “pseudoscience” seems harsh. Although studies of this type can be overoptimistic in terms of what can be said about the future, they can also represent an honest best attempt, within the current state of knowledge (hopefully better than guesswork), rather than a truly pseudoscientific attempt to cloak prejudice in scientific language. Perhaps also, the distinction between prediction and forecasting has not been recognized here; more could also have been made of the policy-valuable role of modeling in exploring scenarios. But these comments should not detract from a useful addition to current guidance.

Alice Aycock

Those visiting New York City’s Park Avenue through July 20th will experience a sort of “creative disruption.” Where one would expect to see only the usual mix of cars, tall buildings, and crowded sidewalks, there will also be larger-than-life white paper-like forms that seem to be blowing down the middle of the street, dancing and lurching in the wind. The sight has even slowed the pace of the city’s infamously harried residents, who cannot resist the invitation to stop and enjoy.

Alice Aycock’s series of seven enormous sculptures in painted aluminum and fiberglass is called “Park Avenue Paper Chase” and stretches from 52nd Street to 66th Street. The forms, inspired by spirals, whirlwinds, and spinning tops, are hardly the normal view on a busy city street. According to Aycock, “I tried to visualize the movement of wind energy as it flowed up and down the avenue creating random whirlpools, touching down here and there and sometimes forming dynamic three-dimensional massing of forms. The sculptural assemblages suggest waves, wind turbulence, turbines, and vortexes of energy…. Much of the energy of the city is invisible. It is the energy of thought and ideas colliding and being transmitted outward. The works are the metaphorical visual residue of the energy of New York City.”

Aycock’s work tends to draw from diverse subjects and ideas ranging from art history to scientific concepts (both current and outdated). The pieces in “Park Avenue Paper Chase” visually reference Russian constructivism while being informed by mathematical phenomena as found in wind and wave currents. Far from forming literal theoretical models, Aycock’s sculptures intuitively combine seemingly disjointed ideas into forms that make visual sense. Form and placement on Park Avenue work together to disorient the viewer, at least temporarily, to capture the imagination and to challenge perceptions.

Aycock’s art career began in the early 1970s and has included installations at the Museum of Modern Art, San Francisco Art Institute, and the Museum of Contemporary Art, Chicago, as well as installations in many public spaces such as Dulles International Airport, the San Francisco Public Library, and John F. Kennedy International Airport.

JD Talasek

Images courtesy of the artist and Galerie Thomas Schulte and Fine Art Partners, Berlin, Germany. Photos by Dave Rittinger.

ALICE AYCOCK, Cyclone Twist (Park Avenue Paper Chase), Painted aluminum, 27′ high × 15′ diameter, Edition of 2, 2013. The sculpture is currently installed at 57th Street on Park Avenue.

ALICE AYCOCK, Hoop-La (Park Avenue Paper Chase), Painted aluminum and steel, 19′ high × 17′ wide × 24′ long, Edition of 2, 2014. The sculpture is currently installed at 53rd Street on Park Avenue.

It is interesting to consider why such guidance should be necessary at this time. The need emerges from the inadequacies of undergraduate science education, especially in Britain where school and undergraduate courses are so narrowly focused (unlike the continental baccalaureate which at least includes some philosophy). British undergraduates get little training in the philosophy and epistemology of science. We still produce scientists whose conceptions of “fact” and “truth” remain sturdily Logical Positivist, lacking understanding of the provisional, incomplete nature of scientific evidence. Likewise, teaching about the history and sociology of science is unusual. Few learn the skills of accurate scientific communication to nonscientists. These days, science students may learn about industrial applications of science, but few hear about its role in public policy. Many scientists (not just government advisers) appear to misunderstand the relationship between the conclusions they are entitled to draw about real-world problems and the wider issues involved in formulating and testing ideas about how to respond to them. Even respected scientists often put forward purely technocratic “solutions,” betraying ignorance of the social, economic, and ethical dimensions of problems, and thereby devaluing science advice in the eyes of the public and policymakers.

Saltelli and Funtowicz’s helpful checklist contributes to improving this situation, but we need to make radical improvements to the ways we train our young scientists if we are to bridge the science/policy divide more effectively.

MILES PARKER

Centre for Science and Policy

MIKE BITHELL

Department of Geography

University of Cambridge

Cambridge, UK

Wet drones

In “Sea Power in the Robotic Age” (Issues, Winter 2014), Bruce Berkowitz describes an impressive range of features and potential missions for unmanned maritime systems (UMSs). Although he’s rightly concerned with autonomy in UMSs as an ethical and legal issue, most of the global attention has been on autonomy in unmanned aerial vehicles (UAVs). Here’s why we may be focusing on the wrong robots.

The need for autonomy is much more critical for UMSs. UAVs can communicate easily with satellites and ground stations to receive their orders, but it is notoriously difficult to broadcast most communication signals through liquid water. If unmanned underwater vehicles (UUVs), such as robot submarines, need to surface in order to make a communication link, they will give away their position and lose their stealth advantage. Even unmanned surface vehicles (USVs), or robot boats, that already operate above water face greater challenges than UAVs, such as limited line-of-sight control because of a two-dimensional operating plane, heavy marine weather that can interfere with sensing and communications, more obstacles on the water than in the air, and so on.

All this means that there is a compelling need for autonomy in UMSs, more so than in UAVs. And that’s why truly autonomous capabilities will probably emerge first in UMSs. Oceans and seas also are much less active environments than land or air: There are far fewer noncombatants to avoid underwater. Any unknown submarine, for instance, can reasonably be presumed not to be a recreational vehicle operated by an innocent individual. So UMSs don’t need to worry as much about the very difficult issue of distinguishing lawful targets from unlawful ones, unlike the highly dynamic environments in which UAVs and unmanned ground vehicles (UGVs) operate.

Therefore, there are also lower barriers to deploying autonomous systems in the water than in any other battlespace on Earth. Because the marine environment makes up about 70% of Earth’s surface, it makes sense for militaries to develop UMSs. Conflicts are predicted to increase there, for instance, as Arctic ice melts and opens up strategic shipping lanes that nations will compete for.

Of course, UAVs have been getting the lion’s share of global attention. The aftermath images of UAV strikes are violent and visceral. UAVs tend to have sexy/scary names such as Ion Tiger, Banshee, Panther, and Switchblade, while UMSs have more staid and nondescript names such as Seahorse, Scout, Sapphire, and HAUV-3. UUVs also mostly look like standard torpedoes, in contrast to the more foreboding and futuristic (and therefore interesting) profiles of Predator and Reaper UAVs.

For those and other reasons, UMSs have mostly been under the radar in ethics and law. Yet, as Berkowitz suggests, it would benefit both the defense and global communities to address ethics and law issues in this area in advance of an international incident or public outrage—a key lesson from the current backlash against UAVs. Some organizations, such as the Naval Postgraduate School’s CRUSER consortium, are looking at both applications and risk, and we would all do well to support that research.

PATRICK LIN

Visiting Associate Professor

School of Engineering

Stanford University

Stanford, California

Director and Associate Philosophy Professor

Ethics and Emerging Sciences Group

California Polytechnic State University

San Luis Obispo, California

palin@calpoly.edu

Robots aren’t taking your job

Perhaps a better title for “Anticipating a Luddite Revival” (Issues, Spring 2014) might be “Encouraging a Luddite Revival,” for Stuart Elliot significantly overstates the ability of information technology (IT) innovations to automate work. By arguing that as many as 80% of jobs will be eliminated by technology in as little as two decades, Elliot is inflaming Luddite opposition.

Elliot does attempt to be scholarly in his methodology to predict the scope of technologically based automation. His review of past issues of IT scholarly journals attempts to understand tech trends, while his analysis of occupation skills data (O-NET) attempts to assess what occupations are amenable to automation.

But his analysis is faulty on several levels. First, to say that a software program might be able to mimic some human work functions (e.g., finding words in a text) is completely different from saying that the software can replace a job entirely. Many information-based jobs involve a mix of routine and nonroutine tasks, and although software-enabled tools might be able to help with routine tasks, they have a much harder time with the nonroutine ones.

Second, many jobs are not information-based but involve personal services, and notwithstanding progress in robotics, we are a long, long way away from robots substituting for humans in this area. Robots are not going to drive the fire truck to your house and put out a fire anytime soon.

ALICE AYCOCK, Spin-the-Spin (Park Avenue Paper Chase), Painted aluminum, 18′ high × 15′ wide × 20′ long, Edition of 2, 2014. The sculpture is currently installed at 55th Street on Park Avenue.

Moreover, although it’s easy to say that the middle-level O-NET tasks “appear to be roughly comparable to the types of tasks now being described in the research literature,” it’s quite another thing to give actual examples, other than some frequently cited ones such as software-enabled insurance underwriting. In fact, the problem with virtually all of the “robots are taking our jobs” claims is that they suffer from the fallacy of composition. Proponents look at the jobs that are relatively easy to automate (e.g., travel agents) and assume that: (1) these jobs will all be automated quickly, and (2) all or most jobs fit into this category. Neither is true. We still have over half a million bank tellers (with the Bureau of Labor Statistics predicting an increase in the next 10 years), long after the introduction of ATMs. Moreover, most jobs are actually quite hard to automate: those of maintenance and repair workers, massage therapists, cooks, executives, social workers, nursing home aides, and sales reps, to list just a few.

I am somewhat optimistic that this vision of massive automation may in fact come true, perhaps by the end of the century, for it would bring increases in living standards (with no change in unemployment rates). But there is little evidence for Elliot’s claim of “a massive transformation in the labor market over the next few decades.” In fact, the odds are much higher that U.S. labor productivity growth will clock in well below 3% per year (the highest rate of productivity growth the United States has ever achieved).

ROBERT ATKINSON

President

Information Technology and Innovation Foundation

Washington, DC

ratkinson@itif.org

Climate change on the right

In Washington, every cause becomes a conduit for special-interest solicitation. Causes that demand greater transfers of wealth and power attract more special interests. When these believers of convenience successfully append themselves to the original cause, it compounds and extends the political support. When it comes to loading up a bill this way, existential causes are the best of all and rightfully should be viewed with greatest skepticism. As Steven E. Hayward notes in “Conservatism and Climate Science” (Issues, Spring 2014), the Waxman-Markey bill was a classic example of special-interest politics run amok.

So conservatives are less skeptical about science than they are about scientific justifications for wealth transfers and losses of liberty. Indeed, Yale professor Dan Kahan found, to his surprise, that self-identified Tea Party members scored better than the population average on a standard test of scientific literacy. Climate policy rightfully elicits skepticism from conservatives, although the skepticism is often presented as anti-science.

Climate activists have successfully and thoroughly confused the climate policy debate. They present the argument this way: (1) Carbon dioxide is a greenhouse gas emitted by human activity; (2) human emissions of carbon dioxide will, without question, lead to environmental disasters of unbearable magnitude; and (3) our carbon policy will effectively mitigate these disasters. The implication swallowed by nearly the entire popular press is that point one (which is true) proves points two and three.

In reality, the connections between points one and two and between points two and three are chains made up of very weak links. The science is so unsettled that even the Intergovernmental Panel on Climate Change (IPCC) cannot choose from among the scores of models it uses to project warming. It hardly matters; the accelerating warming trends that all of them predict are not present in the data (in fact the trend has gone flat for 15 years), nor do the data show any increase in extreme weather from the modest warming of the past century. This provokes the IPCC to argue that the models have not been proven wrong (because their projections are so foggy as to include possible decades of cooling) and that with certain assumptions, some of them predict really bad outcomes.

Not wanting to incur trillions of dollars of economic damage based on these models is not anti-science, which brings us to point three.

Virtually everyone agrees that none of the carbon policies offered to date will have more than a trivial impact on world temperature, even if the worst-case scenarios prove true. So the argument for the policies degenerates to a world of tipping points and climate roulette wheels—there is a chance that this small change will occur at a critical tipping point. That is, the trillions we spend might remove the straw that would break the back of the camel carrying the most valuable cargo. With any other straw or any other camel there would be no impact.

So however unscientific it may seem in the contrived all-or-none climate debate, conservatives are on solid ground to be skeptical.

DAVID W. KREUTZER

Research Fellow in Energy Economics and Climate Change

The Heritage Foundation

Washington, DC

David.kreutzer@heritage.org

Steven E. Hayward claims that the best framework for addressing large-scale disruptions, including climate change, is building adaptive resiliency. If so, why does he not present some examples of what he has in mind, after dismissing building seawalls, moving elsewhere, or installing more air conditioners as defeatist? What is truly defeatist is prioritizing adaptation over prevention, i.e., the reduction of greenhouse gas emissions.

Others concerned with climate change have a different view. As economist William Nordhaus has pointed out (The Climate Casino, Yale University Press, 2013), in areas heavily managed by humans, such as health care and agriculture, adaptation can be effective and is necessary, but some of the most serious dangers, such as ocean acidification and losses of biodiversity, are unmanageable and require mitigation of emissions if humanity is to avoid catastrophe. This two-pronged response combines cutting back emissions with reactively adapting to those we fail to cut back.

Hayward does admit that our capacity to respond to likely “tipping points” is doubtful. Why then does he not see that mitigation is vital and must be pursued far more vigorously than in the past? Nordhaus has estimated that the cost of not exceeding a temperature increase of 2°C might be 1 to 2% of world income if worldwide cooperation could be assured. Surely that is not too high a price for ensuring the continuance of human society as we know it!

ALICE AYCOCK, Twin Vortexes (Park Avenue Paper Chase), Painted aluminum, 12′ high × 12′ wide × 18′ long, Edition of 2, 2014. The sculpture is currently installed at 54th Street on Park Avenue.

ALICE AYCOCK, Maelstrom (Park Avenue Paper Chase), Painted aluminum 12′ high × 16′ wide × 67′ long, Edition of 2, 2014. The sculpture is currently installed between 52nd and 53rd Streets on Park Avenue. Detail opposite.

Hayward states that “Conservative skepticism is less about science per se than its claims to usefulness in the policy realm.” But climate change is a policy issue that science has placed before the nations of the world, and science clearly has a useful role in the policy response, both through the technologies of emissions control and through adaptive agriculture and public health measures. To rely chiefly on “adaptive resiliency” and not give a major role to emissions control is to tie one hand behind one’s back.

EVILLE GORHAM

Regents’ Professor of Ecology Emeritus

University of Minnesota

Minneapolis, Minnesota

Steven E. Hayward should be commended for his thoughtful article, in which he explains why political conservatives do not want to confront the challenge of climate change. Nevertheless, the article did not increase my sympathy for the conservative position, and I would like to explain why.

Hayward begins by explaining why appeals to scientific authority alienate conservatives. Science is not an endeavor that anyone must accept on the word of authority. People should feel free to examine and question scientific work and results. But it doesn’t make sense to criticize science without making an effort to thoroughly understand the science first: the hypotheses together with the experiments that attempt to prove them. What too many conservatives do is deny the science out of hand without understanding it well, dismissing it because of a few superficial objections. I read of one skeptic who dismissed global warming because water vapor is a more powerful greenhouse gas than carbon dioxide. That’s true, but someone who thinks through the argument will understand why that doesn’t make carbon dioxide emissions less of a problem. Climate change is a challenge that we may not agree on how to confront, but that doesn’t excuse any of us from thinking it through carefully.

Hayward points out that “the climate enterprise is the largest crossroads of physical and social science ever contemplated.” That may be true, but conservatives don’t separate the two, and they should. If the science is wrong, they need to explain how the data are flawed, the theory has not taken all the relevant variables into account, the statistical analysis is incorrect, or the data admit of more than one interpretation. If the policy prescriptions are wrong, then they need to explain why these prescriptions will not obtain the results we seek or how they will cost more than the benefits they will provide. Then they need to come up with better alternatives. But too many conservatives don’t separate the science from the policy; they conflate the two. They accuse the scientists of being liberals, and then they won’t consider either the science or the policy. That’s just wrong.

Hayward further explains that conservatives “doubt you can ever understand all the relevant linkages correctly or fully, and especially in the policy responses put forth that emphasize the combination of centralized knowledge with centralized power.” I agree with that, but it shouldn’t stop us from trying to prevent serious problems. Hayward’s statement is a powerful argument for caution, but policy often has unintended consequences, and when we’re faced with a threat, we act. We didn’t understand all the consequences of entering World War II, building the atomic bomb, passing the Civil Rights Act, inventing Social Security, or going to war in Afghanistan, but we did them because we thought we had to. Then we dealt with the consequences as best we could. Climate change should be no different.

The weakest part of Hayward’s article is his charge that “the American scientific community—or at least certain vocal parts of it—is susceptible to the charge that it has become an ideological faction.” Now I’m not sure that scientists are the monolithic bloc Hayward makes them out to be (can he point to a poll?). But even if it is true, it is entirely irrelevant. Scientific work always deserves to be evaluated on its own merits, regardless of whatever personal leanings the investigators might have. Good scientific work is objective and verifiable, and if the investigators are allowing their work to be influenced by their personal biases, that should come out in review, especially if many scientific studies of the same phenomenon are being evaluated. The political leanings of the investigators are a very bad reason for ignoring their work.

Just a couple of other points. Hayward states that “Future historians are likely to regard as a great myopic mistake the collective decision to treat climate change as more or less a large version of traditional air pollution to be attacked with the typical emissions control policies,” but it is hard to see how the problem of greenhouse gas concentrations in the atmosphere can be resolved any other way. We can’t get a handle on global warming unless we find a way to limit emissions of greenhouse gases (or counterbalance the emissions with sequestration, which will take just as much effort). Emissions control is not just a tactic, it is a central goal, just like fighting terrorism and curing cancer are central goals. We might fail to achieve them, but that shouldn’t happen because of lack of trying. We need to be patient and persevere. If we environmentalists are correct, the evidence will mount, and public opinion will eventually side with us. By beginning to work on emissions control now, we will all be in a better position to move quickly when the political winds shift in our favor.

Hayward’s alternative to an aggressive climate policy is what he calls “building adaptive resiliency,” but he is very vague about what that means. Does he mean that individuals and companies should adapt to climate change on their own, or that governments need to promote resiliency? If the latter, how? The point of environmentalists is that even if we are able to adapt to climate change without large loss of life and property, it will be far more expensive then than if we take direct measures to confront the source of the problem—carbon emissions—now. And we really don’t have much time. If the climate scientists are correct, we have only 50 to 100 years before some of the worst effects of climate change start hitting us. Considering the size and complexity of the problem and the degree of cooperation that any serious effort to address climate change will require from all levels of government, companies, and private individuals, that’s not a lot of time. We had better get moving.

ALICE AYCOCK, Waltzing Matilda (Park Avenue Paper Chase), Reinforced fiberglass, 15′ high × 15′ wide × 18′ long, Edition of 2, 2014. The sculpture is currently installed at 56th Street on Park Avenue.

Hayward earns our gratitude for helping us better understand how conservatives feel on this important issue. Nevertheless, the conservative movement is full of bright and intelligent people who could be contributing many valuable ideas to the climate debate, and they’re not. That’s a real shame.

MICHAEL H. KLEIN

Brooklyn, New York

mhkblogs@gmail.com

Does U.S. science still rule?

In “Is U.S. Science in Decline?” (Issues, Spring 2014), Yu Xie offers a glimpse into the plight of early-career scientists. The gravity of the situation cannot be overstated. Many young researchers have become increasingly disillusioned and frustrated about their career trajectory because of declining federal support for basic scientific research.

Apprehension among early-career scientists is rooted in the current fiscal environment. In fiscal year 2013, the National Institutes of Health (NIH) funded roughly 700 fewer grants because of sequestration: the across-the-board spending cuts that remain an albatross for the entire research community. To put this into context, the success rate for grant applications is now one out of six and may worsen if sequestration is not eliminated. This has left many young researchers rethinking their career prospects. In 1980, close to 18% of all principal investigators (PIs) were age 36 and under; in recent years, the percentage has fallen to about 3%. NIH Director Francis Collins has said that the federal funding climate for research “keeps me awake at night,” and he echoed this sentiment at a recent congressional hearing: “I am worried that the current financial squeeze is putting them [early-career scientists] in particular jeopardy in trying to get their labs started, in trying to take on things that are innovative and risky.” Samantha White, a public policy fellow for Research!America, a nonprofit advocacy alliance, sums up her former career path as a researcher in two words: “anxiety-provoking.” She left bench work temporarily to support research in the policy arena, describing to lawmakers the importance of a strong investment in basic research.

The funding squeeze has left scientists with limited resources and many of them, like White, pursuing other avenues. More than half of academic faculty members and PIs say they have turned away promising new researchers since 2010 because of the minimal growth of the federal science agencies’ budgets, and nearly 80% of scientists say they spend more time writing grant applications, according to a survey by the American Society for Biochemistry and Molecular Biology. Collins lamented this fact in a USA Today article: “We are throwing away probably half of the innovative, talented research proposals that the nation’s finest biomedical community has produced,” he said. “Particularly for young scientists, they are now beginning to wonder if they are in the wrong field. We have a serious risk of losing the most important resource that we have, which is this brain trust, the talent and the creative energies of this generation of scientists.”

U.S. Nobel laureates relied on government funding early in their careers to advance research that helped us gain a better understanding of how to treat and prevent deadly diseases. We could be squandering opportunities for the next generation of U.S. laureates if policymakers fail to make a stronger investment in medical research and innovation.

MIKE COBURN

Chief Operating Officer

Research!America

Alexandria, Virginia

www.researchamerica.org

@ResearchAmerica

Chinese aspirations

Junbo Yu’s article “The Politics Behind China’s Quest for Nobel Prizes” (Issues, Spring 2014) tells an interesting story about how China is applying its strategy for winning Olympic gold to science policy. The story might fit well with the Western stereotype of Communist bureaucrats, but the real politics are more complex and nuanced.

First of all, let’s get the story straight. The article refers to a recent “10,000 Talent” program run by the organizational department of the Chinese Communist Party. It is a major talent development program aimed at selecting and supporting domestic talent in various areas, including scientists, young scholars, entrepreneurs, top teachers, and skilled engineers. The six scientists referred to in Yu’s article were among the 277 people identified as the program’s first cohort. Although there were indeed media reports describing these scientists as candidates to contend for Nobel Prizes, relevant officials quickly dispelled those reports as media hype and misunderstanding. For example, three of the first six scientists work in research areas that have no relevance to Nobel Prizes at all.

The real political issue is how to balance between talent trained overseas and talent trained domestically. In 2008, China initiated a “1,000 Talent” program aimed at attracting highly skilled Chinese living overseas to return to China. It was estimated that between 1978 and 2011, more than 2 million Chinese students went abroad to study and that only 36.5% of them returned. Although the 1,000 Talent program has been successful in attracting outstanding scholars back to China, it has also generated some unintended consequences.

As part of the recruitment package, the program gives each returnee a one-million-RMB (about $160,000) settlement payment. Many returnees can also get special grants for research and a salary comparable to what they were paid overseas. This preferential treatment has generated some concern and resentment among those who were trained domestically, who have to compete hard for research grants and whose salaries are ridiculously low. In an Internet survey conducted in China by Yale University, many people expressed support for the government’s efforts to attract people back from overseas but felt it was unfair to give benefits based on where people were trained rather than on how they perform.

In response to these criticisms and concerns, the 10,000 Talent program was developed as a way to focus on domestically trained talent. Instead of going through a lengthy selection process, the program tried to integrate existing talent programs run by various government agencies.

Although these programs might be useful in the short run, the best way to attract and keep talented people is to create an open, fair, and nurturing environment for people who love research, and to pay them adequately so that they can have a decent life. It is simple and doable in China now, and in the long run it will be much more effective than the 1,000 Talent and 10,000 Talent programs.

LAN XUE

Professor and Dean

School of Public Policy and Management

Tsinghua University

Beijing, China

xue.lan@tsinghua.edu.cn

The idea of China winning a Nobel Prize in science may seem like a stretch to many who understand the critical success factors that drive world-class research at the scientific frontier. Although new reforms in the science and technology (S&T) sector have been introduced since September 2012, the Chinese R&D system continues to be beset by many deep-seated organizational and management issues that need to be overcome if real progress is to be possible. Nonetheless, Junbo Yu’s article reminds us that sometimes there is more to the scientific endeavor than just the work of a select number of scientists toiling away in some well-equipped laboratory.

If we take into account the full array of drivers underlying China’s desire to have a native son win one of these prestigious prizes, we must place national will and determination at the top of the key factors that will determine Chinese success. Yu’s analysis helps remind us just how important national prestige and pride are as factors motivating the behavior of the People’s Republic of China’s leaders in terms of investment and the commitment of financial resources. At times, I wonder whether we here in the United States should pay a bit more deference to these normative imperatives. In a world where competition has become more intense and the asymmetries of the past are giving way to greater parity in many S&T fields, becoming excited about the idea of “winning” or forging a sense of “national purpose” may not be as distorted as perhaps suggested in the article. Too many Americans take for granted the nation’s continued dominance in scientific and technological affairs, when all of the signals are pointing in the opposite direction. In sports, we applaud the team that is able to muster the team spirit and determination to carve out a key victory. Why not in S&T?

That said, where the Chinese leadership may have gone astray in its somewhat overheated enthusiasm for securing a Chinese Nobel Prize is its failure to recognize that globalization of the innovation process has made the so-called “scientific Lone Ranger” an obsolete idea. Most innovation efforts today are both transnational and collaborative in nature. China’s future success in terms of S&T advancement will be just as dependent on China’s ability to become a more collaborative nation as it will be on its own home-grown efforts.

Certainly, strengthening indigenous innovation in China is an appropriate national objective, but as the landscape of global innovation continues to shift away from individual nation-states and more in the direction of cross-border, cross-functional networks of R&D cooperation, the path to the Nobel Prize for China may be in a different direction than China seems to have chosen. Remaining highly globally engaged and firmly embedded in the norms and values that drive successful collaborative outcomes will prove to be a faster path to the Nobel Prize for Chinese scientists than will working largely from a narrow national perspective. And it also may be the best path for raising the stature and enhancing the credibility of the current regime on the international stage.

DENIS SIMON

Senior Adviser for China & Global Affairs

Foundation Professor of Contemporary

Chinese Affairs

Arizona State University

Tempe, Arizona

denis.simon@asu.edu

Junbo Yu raises a number of interesting, but complex, questions about the current state of science, and science policy, in China. As a reflection of a broad cultural nationalism, many Chinese see the quest for Nobel Prizes in science and medicine as a worthy major national project. For a regime seeking to enhance legitimacy through appeals to nationalism, the use of policy tools by the Party/state to promote this quest is understandable. Although understandable, it may also be misguided. China has many bright, productive scientists who, in spite of the problems of China’s research culture noted by Yu, are capable of Nobel-quality work. They will be recognized with prizes sooner or later, but this will result from the qualities of mind and habit of individual researchers, not national strategy.

The focus on Nobel Prizes detracts from broader questions about scientific development in 21st-century China involving tensions between principles of scientific universalism and the social and cultural “shaping” of science and technology in the Chinese setting. The rapid enhancement of China’s scientific and technological capabilities in recent years has occurred in a context where many of the internationally accepted norms of scientific practice have not always been observed. Nevertheless, through international benchmarking, serious science planning, centralized resource mobilization, the abundance of scientific labor available for research services, and other factors, much progress has been made by following a distinctive “Chinese way” of scientific and technological development. The sustainability of this Chinese way, however, is now at issue, as is its normative power for others.

Over the past three decades, China has faced a challenge of ensuring that policy and institutional design are kept in phase with a rapidly changing innovation system. Overall, policy adjustments and institutional innovations have been quite successful in allowing China to pass through a series of catch-up stages. However, the challenge of moving beyond catch-up now looms large, especially with regard to the development of policies and institutions to support world class basic research, as Yu suggests. Misapprehension in the minds of political leaders and bureaucrats about the nature of research and innovation in the 21st century may also add to the challenge. The common conflation of “science” and “technology” in policy discourse, as seen in the Chinese term keji (best translated as “scitech”), is indicative. So too is the belief that scientific and technological development remains in essence a national project, mainly serving national political needs, including ultimately national pride and Party legitimacy, as Yu points out.

ALICE AYCOCK, Twister 12 feet (Park Avenue Paper Chase), Aluminum, 12′ high × 12′ diameter, Unique edition, 2014. The sculpture is currently installed at 66th Street on Park Avenue.

In 2006, China launched its 15-year “Medium to Long-Term Plan for Scientific and Technological Development” (MLP). Over the past year, the Ministry of Science and Technology has been conducting an extensive midterm evaluation of the Plan. At the same time, as recognized in the ambitious reform agenda of the new Xi Jinping government, the need for significant reforms in the nation’s innovation system, largely overlooked in 2006, has become more evident. There is thus a certain disconnect between the significant resource commitments entailed in launching the ambitious MLP and the reality that many of the institutions required for its successful implementation may not be suitable to the task. The fact that many of the policy assumptions about the role of government in the innovation system that prevailed in 2006 seemingly are not shared by the current government suggests that the politics of Chinese science involve much more than the Nobel Prize quest.

RICHARD P. (PETE) SUTTMEIER

Professor of Political Science, Emeritus

University of Oregon

Eugene, Oregon

petesutt@uoregon.edu

Although it is intriguing in linking the production of a homegrown Nobel science laureate to the legitimacy of the Chinese Communist Party, Junbo Yu’s piece just recasts what I indicated 10 years ago. In a paper entitled “Chinese Science and the ‘Nobel Prize Complex’,” published in Minerva in 2004, I argued that China’s enthusiasm for a Nobel Prize in science since the turn of the century reflects the motivations of China’s political as well as scientific leadership. But “various measures have failed to bring home those who are of the calibre needed to win the Nobel Prize. Yet, unless this happens, it will be a serious blow to China’s political leadership. …So to win a ‘home-grown’ Nobel Prize becomes a face-saving gesture.” “This Nobel-driven enthusiasm has also become part of China’s resurgent nationalism, as with winning the right to host the Olympics,” an analogy also alluded to by Yu.

In a follow-up, “The Universal Values of Science and China’s Nobel Prize Pursuit,” forthcoming again in Minerva, I point out that in China, “science, including the pursuit of the Nobel Prize, is more a pragmatic means to achieve the ends of the political leadership—the national pride in this case—than an institution laden with values that govern its practices.”

As we know, in rewarding those who confer the “greatest benefit on mankind,” the Nobel Prize in science embodies an appreciation and celebration of not merely breakthroughs, discoveries, and creativity, but a universal set of values that are shared and practiced by scientists regardless of nationality or culture.

These core values of truth-seeking, integrity, intellectual curiosity, the challenging of authority, and above all, freedom of inquiry are shared by scientists all over the world. It is recognition of these values that could lead to the findings that may one day land their finders a Nobel Prize.

China’s embrace of science dates back only to the May Fourth Demonstrations in 1919, when scholars, disillusioned with the direction of the new Chinese republic after the fall of the Qing Dynasty, called for a move away from traditional Chinese culture to Western ideals; or as they termed it, a rejection of Mr. Confucius and the acceptance of Mr. Science and Mr. Democracy.

However, these concepts of science and democracy differed markedly from those advocated in the West and were used primarily as vehicles to attack Confucianism. The science championed during the May Fourth movement was celebrated not for its Enlightenment values but for its pragmatism, its usefulness.

Francis Bacon’s maxim that “knowledge is power” ran right through Mao Zedong’s view of science after the founding of the People’s Republic in 1949. Science and technology were considered integral components of nation-building; leading academics contributed their knowledge for the sole purpose of modernizing industry, agriculture, and national defense.

The notion of saving the nation through science during the Nationalist regime has translated into current Communist government policies of “revitalizing the nation with science, technology, and education” and “strengthening the nation through talent.” A recent report by the innovation-promotion organization Nesta characterized China as “an absorptive state,” adding practical value to existing foreign technologies rather than creating new technologies of its own.

This materialistic emphasis reflects the use of science as a means to a political end to make China powerful and prosperous. Rather than arbitrarily picking possible Nobel Prize winners, the Chinese leadership would do well to apply the core values of science to the nurturing of its next generation of scientists. Only when it abandons cold-blooded pragmatism for a value-driven approach to science can it hope to win a coveted Nobel Prize and ascend to real superpower status.

Also, winning a Nobel Prize is completely different from winning a gold medal at the Olympics. Until China creates an environment conducive to first-rate research and the nurturing of talent, which cannot be achieved through top-down planning, mobilization, and concentration of resources (the hallmarks of China’s state-sponsored sports program), this Nobel pursuit will continue to vex the Chinese for many years to come.

CONG CAO

Associate Professor and Reader

School of Contemporary Chinese Studies

University of Nottingham

Nottingham, UK

cong.cao@nottingham.ac.uk

The New Visible Hand: Understanding Today’s R&D Management

CRAIG BOARDMAN

Recent decades have seen dramatic if not revolutionary changes in the organization and management of knowledge creation and technology development in U.S. universities. Market demands and public values conjointly influence and in many cases supersede the disciplinary interests of academic researchers in guiding scientific and technological inquiry toward social and economic ends. The nation is developing new institutions to convene diverse sets of actors, including scientists and engineers from different disciplines, institutions, and economic sectors, to focus attention and resources on scientific and technological innovation (STI). These new institutions have materialized in a number of organizational forms, including but not limited to national technology initiatives, science parks, technology incubators, cooperative research centers, proof-of-concept centers, innovation networks, and any number of what the innovation ecosystems literature refers to generically (and in most cases secondarily) as “bridging institutions.”

The proliferation of bridging institutions on U.S. campuses has been met with a somewhat bifurcated response. Critics worry that this new purpose will detract from the educational mission of universities; advocates see an opportunity for universities to make an additional contribution to the nation's well-being. The evidence so far indicates that bridging institutions on U.S. campuses have diminished neither the educational nor the knowledge-creation activities of universities. These institutions complement rather than substitute for traditional university missions and over time may prove critical pivot points in the U.S. innovation ecosystem.

The growth of bridging institutions is a manifestation of two larger societal trends. The first is that the source of U.S. global competitive advantage in STI is moving away from a simple superiority in certain types of R&D to a need to effectively and strategically manage the output of R&D and integrate it more rapidly into the economy through bridging institutions. The second is the need to move beyond the perennial research policy question of whether or not the STI process is linear, to tackle the more complex problem of how to manage the interweaving of all aspects of STI.

The visible hand

This article's title harkens back to Alfred Chandler's landmark book The Visible Hand: The Managerial Revolution in American Business. In that book, Chandler makes the case that the proliferation of the modern multiunit business enterprise was an institutional response to the rapid pace of technological innovation that came with industrialization and increased consumer demand. For Chandler, what was revolutionary was the emergence of management as a key factor of production for U.S. businesses.

Similarly, the proliferation of bridging institutions on U.S. campuses has been an institutional response to the increasing complexity of STI and also to public demand for problem-focused R&D with tangible returns on public research investments. As a result, U.S. departments and agencies supporting intramural and extramural R&D are now very much focused on establishing bridging institutions—and in the case of proof-of-concept centers, bridging institutions for bridging institutions—involving experts from numerous scientific and engineering disciplines from academia, business, and government.


To name just a few, the National Science Foundation (NSF) has created multiple cooperative research center programs and recently added the I-Corps program for establishing regional networks for STI. The Department of Energy (DOE) has its Energy Frontier Research Centers and Energy Innovation Hubs. The National Institutes of Health (NIH) have Translational Research Centers and also what they refer to as “team science.” The Obama administration has its Institutes for Manufacturing Innovation. But this is only a tiny sample. The Research Centers Directory counts more than 8,000 organized research units for STI in the United States and Canada, and over 16,000 worldwide. This total includes many traditional departmental labs, where management is not as critical a factor, but a very large number are bridging institutions created to address management concerns.

The analogy between Chandler’s observations about U.S. business practices and the proliferation of bridging institutions on U.S. campuses is not perfect. Whereas Chandler’s emphasis on management in business had more to do with the efficient production and distribution of routine and standard consumer goods and services, the proliferation of bridging institutions on U.S. campuses has had more to do with effective and commercially viable (versus efficient) knowledge creation and technology development, which cannot be routinized by way of management in the same way as can, say, automobile manufacturing.

Nevertheless, management—albeit a less formal kind of management than that Chandler examines—is now undeniably a key factor of production for STI on U.S. campuses. Many nations are catching up with the United States in the percentage of their gross domestic product devoted to R&D, so that R&D alone will not be sufficient to sustain U.S. leadership. The promotion of organizational cultures enabling bridging institutions to strategically manage social network ties among diverse sets of scientists and engineers toward coordinated problem-solving is what will help the United States maintain global competitive advantage in STI.

Historically, U.S. research policy has focused on two things with regard to universities to help ensure the U.S. status as the global STI hegemon. First, it has made sure that U.S. universities have had all the usual “factors of production” for STI, e.g., funding, technology, critical materials, infrastructure, and the best and the brightest in terms of human capital. Second, U.S. research policy has encouraged university R&D in applied fields by, for example, allowing universities to obtain intellectual property rights emerging from publicly funded R&D. In the past, then, an underlying assumption of U.S. research policy was that universities are capable of and willing to conduct problem-focused R&D and to bring the fruits of that research to market if given the funds and capital to do the R&D, as well as ownership of any commercial outputs.

But U.S. research policy regarding universities has been imitated abroad, and for this reason, among others, many countries have closed the STI gap with the United States, at least in particular technology areas. One need only read one or both of the National Academies' Gathering Storm volumes to learn that the United States is now on a more level playing field with China, Japan, South Korea, and the European Union in terms of R&D spending in universities, academic publications and publication quality, academic patents and patent quality, doctorate production, and market share in particular technology areas. Quibbles with the evidentiary bases of the Gathering Storm volumes notwithstanding, there is little arguing that the United States faces increased competition in STI from abroad.

Although the usual factors of production for STI and property rights should remain components of U.S. research policy, these are no longer adequate to sustain U.S. competitive advantage. Current and future U.S. research policy for universities must emphasize factors of production for STI that are less easily imitated, namely organizational cultures in bridging institutions that are conducive to coordinated problem-solving. An underlying assumption of U.S. research policy should be that universities for the most part cannot or will not go it alone commercially even if given the funds, capital, and property rights to do so (there are exceptions, of course), but rather that they are more likely to navigate the "valley of death" in conjunction with businesses, government, and other universities.

Encouraging cross-sector, inter-institutional R&D in the national interest must become a major component of U.S. research policy for universities, and bridging institutions must play a central role. Anecdotal reports suggest that bridging institutions differ widely in their effectiveness, but one of the challenges facing the nation is to better understand the role that management plays in the success of bridging institutions. Calling something a bridging institution does not guarantee that it will make a significant contribution to meeting STI goals.

The edge of the future

The difference between the historic factors of production for STI discussed above and organizational cultures in bridging institutions is that the former are static, simple, and easy to imitate, whereas the latter are dynamic, complex, and difficult to observe, much less copy. This is no original insight. The business literature made this case originally in the 1980s and 1990s. A firm’s intangible assets, its organizational culture and the tacit norms and expectations for organizational behavior that this entails, can be and oftentimes are a source of competitive advantage because they are difficult to measure and thus hard for competing firms to emulate.

University leaders and scholars have recognized that bridging institutions on U.S. campuses can be challenging to organize and manage and that the ingredients for an effective organizational culture are still a mystery. There is probably as much literature on the management challenges of bridging institutions as there is on their performance. Whereas the management of university faculty in traditional academic departments is commonly referred to as “herding cats,” coordinating faculty from different disciplines and universities, over whom bridging institutions have no line authority, to work together and also to cooperate with industry and government is akin to herding feral cats.

But beyond this we know next to nothing about the organizational cultures of bridging institutions. The cooperative research centers and other types of bridging institutions established by the NSF, DOE, NIH, and other agencies are most often evaluated for their knowledge and technology outcomes and, increasingly, for their social and economic impact, but seldom have research and evaluation focused on what’s inside the black box. All we know for certain is that some bridging institutions on U.S. campuses are wildly successful and others are not, with little systematic explanation as to why.

Developing an understanding of organizational cultures in bridging institutions is important not just because these can be relatively tacit and difficult to imitate, but additionally because other, more formal aspects of the management of bridging institutions are less manipulable. Unlike Chandler’s emphasis on formal structures and authorities in U.S. businesses, bridging institutions do not have many layers of hierarchy, nor do they have centralized decisionmaking. As organizations focused on new knowledge creation and technology development, bridging institutions typically are flat and decentralized, and therefore vary much more culturally and informally than structurally.

There are frameworks for deducing the organizational cultures of bridging institutions. One is the competing values framework developed by Kim Cameron and Robert Quinn. Another is organizational economics’ emphasis on informal mechanisms such as resource interdependencies and goal congruence. A third framework is the organizational capital approach from strategic human resources management. These frameworks have been applied in the business literature to explore the differences between Silicon Valley and Route 128 microcomputer companies, and they can be adapted for use in comparing the less formal structures of bridging institutions.

What’s more, U.S. research policy must take into account how organizational cultures in bridging institutions interact with “best practices.” We know that in some instances, specific formalized practices are associated with successful STI in bridging institutions, but in many other cases, these same practices are followed in unsuccessful institutions. For bridging institutions, best practices may be best only in combination with particular types of organizational culture.

Inside the black box

The overarching question that research policy scholars and practitioners should address is what organizational cultures lead to different types of STI in different types of bridging institutions. Most research on bridging institutions emphasizes management challenges and best practices, and the literature on organizational culture is limited. We need to address in a systematic fashion how organizational cultures bring diverse sets of scientists and engineers together in coordinated problem-solving.

Specifically, research policy scholars and practitioners should address variation across the "clan" type of organizational culture in bridging institutions. To the general organizational scholar, all bridging institutions have the same culture: decentralized and nonhierarchical. But to research policy scholars and practitioners, there are important differences in the organization and management of what essentially amounts to collectives of highly educated volunteers. How is it that some bridging institutions elicit tremendous contributions from academic faculty and industry researchers, whereas others do not? What aspects of bridging institutions enable academic researchers to work with private companies, spin off their own companies, and/or patent?

These questions point to related questions about different types of bridging institutions. There are research centers emphasizing university/industry interactions for new and existing industries, university technology incubators and proof-of-concept centers focused on business model development and venture capital, regional network nodes for STI, and university science parks co-locating startups and university faculty. Which of these bridging institutions are most appropriate for which sorts of STI? When should bridging institutions be interdisciplinary, cross-sectoral, or both? Are the different types of bridging institutions complements or substitutes for navigating the “valley of death?”

Research policy scholars and practitioners have their work cut out for them. There are no general data tracking cultural heterogeneity across bridging institutions. What data do exist, such as the Research Centers Directory compiled by Gale Research/Cengage Learning, track only the most basic organizational features. Other approaches such as the science of team science hold more promise, though much of this work emphasizes best practices and does not address organizational culture systematically. Research policy scholars and practitioners must develop new data sets that track the intangible cultural aspects of bridging institutions and connect these data to publicly available outcomes data for new knowledge creation, technology development, and workforce development.

Developing systematic understanding of bridging institutions is fundamental to U.S. competitiveness in STI. It is fundamental because bridging institutions are where the rubber hits the road in the U.S. innovation ecosystem. Bridging institutions provide forums for our nation’s top research universities, firms, and government agencies to exchange ideas, engage in coordinated problem solving, and in turn create new knowledge and develop new technologies addressing social and economic problems.

Developing systematic understanding of bridging institutions will be challenging because they are similar on the surface but different in important ways that are difficult to detect. During the 1980s, scholars identified striking differences in the organizational cultures of Silicon Valley and Route 128 microcomputing companies. Today, most bridging institutions follow a similar decentralized model for decisionmaking, with few formalized structures and authorities, yet they can differ widely in performance.

The most important variation across bridging institutions is to be found in the intangible, difficult-to-imitate qualities that allow for (or preclude) the coordination of diverse sets of scientists and engineers from across disciplines, institutions, and sectors. But this does not mean that scholars and practitioners should ignore the structural aspects of bridging institutions. In some cases, bridging institutions may exercise line authority over academic faculty (such as faculty with joint appointments), and these organizations may (or may not) outperform similar bridging institutions that do not exercise line authority.


Craig Boardman is associate director of the Battelle Center for Science and Technology Policy in the John Glenn School of Public Affairs at Ohio State University.


Eagle

GREGORY BENFORD

The long, fat freighter glided into the harbor at late morning—not the best time for a woman who had to keep out of sight.

The sun slowly slid up the sky as tugboats drew them into Anchorage. The tank ship, a big sectioned VLCC, was like an elephant ballerina on the stage of a slate-blue sea, attended by tiny dancing tugs.

Now off duty, Elinor watched the pilot bring them in past the Nikiski Narrows and slip into a long pier with gantries like skeletal arms snaking down, the big pump pipes attached. They were ready for the hydrogen sulfide to flow. The ground crew looked anxious, scurrying around, hooting and shouting. They were behind schedule.

Inside, she felt steady, ready to destroy all this evil stupidity.

She picked up her duffel bag, banged a hatch shut, and walked down to the shore desk. Pier teams in gasworkers’ masks were hooking up pumps to offload and even the faint rotten egg stink of the hydrogen sulfide made her hold her breath. The Bursar checked her out, reminding her to be back within 28 hours. She nodded respectfully, and her maritime ID worked at the gangplank checkpoint without a second glance. The burly guy there said something about hitting the bars and she wrinkled her nose. “For breakfast?”

“I seen it, ma’am,” he said, and winked.

She ignored the other crew, solid merchant marine types. She had only used her old engineer’s rating to get on this freighter, not to strike up the chords of the Seamen’s Association song.

She hit the pier and boarded the shuttle to town, jostling onto the bus, anonymous among boat crews eager to use every second of shore time. Just as she’d thought, this was proving the best way to get in under the security perimeter. No airline manifest, no Homeland Security ID checks. In the unloading, nobody noticed her, with her watch cap pulled down and baggy jeans. No easy way to even tell she was a woman.

Now to find a suitably dingy hotel. She avoided central Anchorage and kept to the shoreline, where small hotels from the TwenCen still did business. At a likely one on Sixth Avenue, the desk clerk told her there were no rooms left.

“With all the commotion at Elmendorf, ever’ damn billet in town’s packed,” the grizzled guy behind the counter said.

She looked out the dirty window, pointed. “What’s that?”

“Aw, that bus? Well, we’re gettin’ that ready to rent, but—”

“How about half price?”

“You don’t want to be sleeping in that—”

“Let me have it,” she said, slapping down a $50 bill.

“Uh, well.” He peered at her. “The owner said—”

“Show it to me.”

She got him down to $25 when she saw that it really was a “retired bus.” Something about it she liked, and no cops would think of looking in the faded yellow wreck. It had obviously fallen on hard times after it had served the school system.

It held a jumble of furniture, apparently to give it a vaguely homelike air. The driver’s seat and all else were gone, leaving holes in the floor. The rest was an odd mix of haste and taste. A walnut Victorian love seat with a medallion backrest held the center, along with a lumpy bed. Sagging upholstery and frayed cloth, cracked leather, worn wood, chipped veneer, a radio with the knobs askew, a patched-in shower closet, and an enamel basin toilet illuminated with a warped lamp completed the sad tableau. A generator chugged outside as a clunky gas heater wheezed. Authentic, in its way.

Restful, too. She pulled on latex gloves the moment the clerk left, and took a nap, knowing she would not soon sleep again. No tension, no doubts. She was asleep in minutes.

Time for the recon. At the rental place she’d booked, she picked up the wastefully big Ford SUV. A hybrid, though. No problem with the credit card, which looked fine at first use, then erased its traces with a virus that would propagate in the rental system, snipping away all records.

The drive north took her past the air base but she didn’t slow down, just blended in with late afternoon traffic. Signs along the highway now had to warn about polar bears, recent migrants to the land and even more dangerous than the massive local browns. The terrain was just as she had memorized it on Google Earth, the likely shooting spots isolated, thickly wooded. The Internet maps got the seacoast wrong, though. Two Inuit villages had recently sprung up along the shore within Elmendorf, as one of their people, posing as a fisherman, had observed and photographed. Studying the pictures, she’d thought they looked slightly ramshackle, temporary, hastily thrown up in the exodus from the tundra regions. No need to last, as the Inuit planned to return north as soon as the Arctic cooled. The makeshift living arrangements had been part of the deal with the Arctic Council for the experiments to make that possible. But access to post schools, hospitals, and the PX couldn’t make this home to the Inuit, couldn’t replace their “beautiful land,” as the word used by the Labrador peoples named it.

So, too many potential witnesses there. The easy shoot from the coast was out. She drove on. The enterprising Inuit had a brand new diner set up along Glenn Highway, offering breakfast anytime to draw odd-houred Elmendorf workers, and she stopped for coffee. Dark men in jackets and jeans ate solemnly in the booths, not saying much. A young family sat across from her, the father trying to eat while bouncing his small wiggly daughter on one knee, the mother spooning eggs into a gleefully uncooperative toddler while fielding endless questions from her bespectacled school-age son. The little girl said something to make her father laugh, and he dropped a quick kiss on her shining hair. She cuddled in, pleased with herself, clinging tight as a limpet.

They looked harried but happy, close-knit and complete. Elinor flashed her smile, tried striking up conversations with the tired, taciturn workers, but learned nothing useful from any of them.

Going back into town, she studied the crews working on planes lined up at Elmendorf. Security was heavy on roads leading into the base so she stayed on Glenn. She parked the Ford as near the railroad as she could and left it. Nobody seemed to notice.

At seven, the sun still high overhead, she came down the school bus steps, a new creature. She swayed away in a long-skirted yellow dress with orange Mondrian lines, her shoes casual flats, carrying a small orange handbag. Brushed auburn hair, artful makeup, even long artificial eyelashes. Bait.

She walked through the scruffy district off K Street, observing as carefully as on her morning reconnaissance. The second bar was the right one. She looked over her competition, reflecting that for some women, there should be a weight limit for the purchase of spandex. Three guys with gray hair were trading lies in a booth and checking her out. The noisiest of them, Ted, got up to ask her if she wanted a drink. Of course she did, though she was thrown off by his genial warning, “Lady, you don’t look like you’re carryin’.”

Rattled—had her mask of harmless approachability slipped?—she made herself smile, and ask, “Should I be?”

“Last week a brown bear got shot not two blocks from here, goin’ through trash. The polars are bigger, meat-eaters, chase the young males out of their usual areas, so they’re gettin’ hungry, and mean. Came at a cop, so the guy had to shoot it. It sent him to the ICU, even after he put four rounds in it.” Not the usual pickup line, but she had them talking about themselves. Soon, she had most of what she needed to know about SkyShield.

“We were all retired refuel jockeys,” Ted said. “Spent most of 30 years flyin’ up big tankers full of jet fuel, so fighters and B-52s could keep flyin’, not have to touch down.”

Elinor probed, “So now you fly—”

“Same aircraft, most of ’em 40 years old—KC Stratotankers, or Extenders—they extend flight times, y’see.”

His buddy added, “The latest replacements were delivered just last year, so the crates we’ll take up are obsolete. Still plenty good enough to spray this new stuff, though.”

“I heard it was poison,” she said.

“So’s jet fuel,” the quietest one said. “But it’s cheap, and they needed something ready to go now, not that dust-scatter idea that’s still on the drawing board.”

Ted snorted. “I wish they’d gone with dustin’—even the traces you smell when they tank up stink like rottin’ eggs. More than a whiff, though, and you’re already dead. God, I’m sure glad I’m not a tank tech.”

“It all starts tomorrow?” Elinor asked brightly.

“Right, 10 KCs takin’ off per day, returnin’ the next from Russia. Lots of big-ticket work for retired duffers like us.”

“Who’re they?” she asked, gesturing to the next table. She had overheard people discussing nozzles and spray rates. “Expert crew,” Ted said. “They’ll ride along to do the measurements of cloud formation behind us, check local conditions like humidity and such.”

She eyed them. All very earnest, some a tad professorial. They were about to go out on an exciting experiment, ready to save the planet, and the talk was fast, eyes shining, drinks all around.

“Got to freshen up, boys.” She got up and walked by the tables, taking three quick shots in passing of the whole lot of them, under cover of rummaging through her purse. Then she walked around a corner toward the rest rooms, and her dress snagged on a nail in the wooden wall. She tried to tug it loose, but if she turned to reach the snag, it would rip the dress further. As she fished back for it with her right hand, a voice said, “Let me get that for you.”

Not a guy, but one of the women from the tech table. She wore a flattering blouse with comfortable, well-fitted jeans, and knelt to unhook the dress from the nail head.

“Thanks,” Elinor said, and the woman just shrugged, with a lopsided grin.

“Girls should stick together here,” the woman said. “The guys can be a little rough.”

“Seem so.”

“Been here long? You could join our group—always room for another woman, up here! I can give you some tips, introduce you to some sweet, if geeky, guys.”

“No, I… I don’t need your help.” Elinor ducked into the women’s room.

She thought about this unexpected, unwanted friendliness while sitting in the stall, and put it behind her. Then she went back into the game, fishing for information in a way she hoped wasn’t too obvious. Everybody likes to talk about their work, and when she got back to the pilots’ table, the booze worked in her favor. She found out some incidental information, probably not vital, but it was always good to know as much as you could. They already called the redesigned planes “Scatter Ships” and their affection for the lumbering, ungainly aircraft was reflected in banter about unimportant engineering details and tales of long-ago combat support missions.

One of the big guys with a wide grin sliding toward a leer was buying her a second martini when her cell rang.

“Albatross okay. Our party starts in 30 minutes,” said a rough voice. “You bring the beer.”

She didn’t answer, just muttered, “Damned salesbots…,” and disconnected.

She told the guy she had to “tinkle,” which made him laugh. He was a pilot just out of the Air Force, and she would have gone for him in some other world than this one. She found the back exit—bars like this always had one—and was blocks away before he would even begin to wonder.


Anchorage slid past unnoticed as she hurried through the broad deserted streets, planning. Back to the bus, out of costume, into all-weather gear, boots, grab some trail mix and an already-filled backpack. Her thermos of coffee she wore on her hip.

She cut across Elderberry Park, hurrying to the spot where her briefing said the trains paused before running into the depot. The port and rail lines snugged up against Elmendorf Air Force Base, convenient for them, and for her.

The freight train was a long clanking string and she stood in the chill gathering darkness, wondering how she would know where they were. The passing autorack cars had heavy shutters, like big steel Venetian blinds, and she could not see how anybody got into them.

But as the line clanked and squealed and slowed, a quick laser flash caught her, winked three times. She ran toward it, hauling up onto a slim platform at the foot of a steel sheet.

It tilted outward as she scrambled aboard, thudding into her thigh, nearly knocking her off. She ducked in and saw by the distant streetlights the vague outlines of luxury cars. A Lincoln sedan door swung open. Its interior light came on and she saw two men in the front seats. She got in the back and closed the door. Utter dark.

“It clear out there?” the cell phone voice asked from the driver’s seat.

“Yeah. What—”

“Let’s unload. You got the SUV?”

“Waiting on the nearest street.”

“How far?”

“Hundred meters.”

The man jigged his door open, glanced back at her. “We can make it in one trip if you can carry 20 kilos.”

“Sure,” though she had to pause to quickly do the arithmetic, 44 pounds. She had backpacked about that much for weeks in the Sierras. “Yeah, sure.”

The missile gear was in the trunks of three other sedans, at the far end of the autorack. As she climbed out of the car the men had inhabited, she saw the debris of their trip—food containers in the back seats, assorted junk, the waste from days spent coming up from Seattle. With a few gallons of gas in each car, so they could be driven on and off, these two had kept warm running the heater. If that ran dry, they could switch to another.

As she understood it, this degree of mess was acceptable to the railroads and car dealers. If the railroad tried to wrap up the autoracked cars to keep them out, the bums who rode the rails would smash windshields to get in, then shit in the cars, knife the upholstery. So they had struck an equilibrium. That compromise inadvertently produced a good way to ship weapons right by Homeland Security. She wondered what Homeland types would make of a Dart, anyway. Could they even tell what it was?

The rough-voiced man turned and clicked on a helmet lamp. “I’m Bruckner. This is Gene.”

Nods. “I’m Elinor.” Nods, smiles. Cut to the chase. “I know their flight schedule.”

Bruckner smiled thinly. “Let’s get this done.”

Transporting the parts in via autoracked cars was her idea. Bringing them in by small plane was the original plan, but Homeland might nab them at the airport. She was proud of this slick workaround.

“Did railroad inspectors get any of you?” Elinor asked.

Gene said, “Nope. Our two extras dropped off south of here. They’ll fly back out.”

With the auto freights, the railroad police looked for tramps sleeping in the seats. No one searched the trunks. So they had put a man on each autorack, and if some got caught, they could distract from the gear. The men would get a fine, be hauled off for a night in jail, and the shipment would go on.

“Luck is with us,” Elinor said. Bruckner looked at her, looked more closely, opened his mouth, but said nothing.

They both seemed jumpy by the helmet light. “How’d you guys live this way?” she asked, to get them relaxed.

“Pretty poorly,” Gene said. “We had to shit in bags.”

She could faintly smell the stench. “More than I need to know.”

Using Bruckner’s helmet light they hauled the assemblies out, neatly secured in backpacks. Bruckner moved with strong, graceless efficiency. Gene too. She hoisted hers on, grunting.

The freight started up, lurching forward. “Damn!” Gene said.

They hurried. When they opened the steel flap, she hesitated, jumped, stumbled on the gravel, but caught herself. Nobody within view in the velvet cloaking dusk.


They walked quietly, keeping steady through the shadows. It got cold fast, even in late May. At the Ford they put the gear in the back and got in. She drove them to the old school bus. Nobody talked.

She stopped them at the steps to the bus. “Here, put these gloves on.”

They grumbled but they did it. Inside, heater turned to high, Bruckner asked if she had anything to drink. She offered bottles of vitamin water but he waved it away. “Any booze?”

Gene said, “Cut that out.”

The two men eyed each other and Elinor thought about how they’d been days in those cars and decided to let it go. Not that she had any liquor, anyway.

Bruckner was lean, rawboned, and self-contained, with minimal movements and a constant, steady gaze in his expressionless face. “I called the pickup boat. They’ll be waiting offshore near Eagle Bay by eight.”

Elinor nodded. "First flight is 9:00 a.m. It'll head due north, so we'll see it from the hills above Eagle Bay."

Gene said, “So we get into position… when?”

“Tonight, just after dawn.”

Bruckner said, “I do the shoot.”

“And we handle perimeter and setup, yes.”

“How much trouble will we have with the Indians?”

Elinor blinked. “The Inuit settlement is down by the seashore. They shouldn’t know what’s up.”

Bruckner frowned. “You sure?”

“That’s what it looks like. Can’t exactly go there and ask, can we?”

Bruckner sniffed, scowled, looked around the bus. “That’s the trouble with this nickel-and-dime operation. No real security.”

Elinor said, “You want security, buy a bond.”

Bruckner’s head jerked around. “Whassat mean?”

She sat back, took her time. "We can't be sure the DARPA people haven't done some serious public relations work with the Natives. Besides, they're probably all in favor of SkyShield anyway—their entire way of life is melting away with the sea ice. And by the way, they're not 'Indians,' they're 'Inuit.'"

“You seem pretty damn sure of yourself.”

“People say it’s one of my best features.”

Bruckner squinted and said, “You’re—”

“A maritime engineering officer. That’s how I got here and that’s how I’m going out.”

“You’re not going with us?”

“Nope, I go back out on my ship. I have first engineering watch tomorrow, 0100 hours.” She gave him a hard, flat look. “We go up the inlet, past Birchwood Airport. I get dropped off, steal a car, head south to Anchorage, while you get on the fishing boat, they work you out to the headlands. The bigger ship comes in, picks you up. You’re clear and away.”

Bruckner shook his head. “I thought we’d—”

“Look, there’s a budget and—”

“We’ve been holed up in those damn cars for—”

“A week, I know. Plans change.”

“I don’t like changes.”

“Things change,” Elinor said, trying to make it mild.

But Bruckner bristled. “I don’t like you cutting out, leaving us—”

“I’m in charge, remember.” She thought, He travels the fastest who travels alone.

“I thought we were all in this together.”

She nodded. “We are. But Command made me responsible, since this was my idea.”

His mouth twisted. “I’m the shooter, I—”

“Because I got you into the Ecuador training. Me and Gene, we depend on you.” Calm, level voice. No need to provoke guys like this; they did it enough on their own.

Silence. She could see him take out his pride, look at it, and decide to wait a while to even the score.

Bruckner said, “I gotta stretch my legs,” and clumped down the steps and out of the bus.

Elinor didn’t like the team splitting and thought of going after him. But she knew why Bruckner was antsy—too much energy with no outlet. She decided just to let him go.

To Gene she said, “You’ve known him longer. He’s been in charge of operations like this before?”

Gene thought. “There’ve been no operations like this.”

“Smaller jobs than this?”

“Plenty.”

She raised her eyebrows. “Surprising.”

“Why?”

“He walks around using that mouth, while he’s working?”

Gene chuckled. “ ’Fraid so. He gets the job done though.”

“Still surprising.”

"That he's the shooter, or—"

"That he still has all his teeth."

While Gene showered, she considered. Elinor figured Bruckner for an injustice collector, the passive-aggressive loser type. But he had risen quickly in The LifeWorkers, as they called themselves, brought into the inner cadre that had formulated this plan. Probably because he was willing to cross the line, use violence in the cause of justice. Logically, she should sympathize with him, because he was a lot like her.

But sympathy and liking didn’t work that way.

There were people who soon would surely yearn to read her obituary, and Bruckner’s too, no doubt. He and she were the cutting edge of environmental activism, and these were desperate times indeed. Sometimes you had to cross the line, and be sure about it.

Elinor had made a lot of hard choices. She knew she wouldn’t last long on the scalpel’s edge of active environmental justice, and that was fine by her. Her role would soon be to speak for the true cause. Her looks, her brains, her charm—she knew she’d been chosen for this mission, and the public one afterward, for these attributes, as much as for the plan she had devised. People listen, even to ugly messages, when the face of the messenger is pretty. And once they finished here, she would have to be heard.

She and Gene carefully unpacked the gear and started to assemble the Dart. The parts connected with a minimum of wiring and socket clasps, as foolproof as possible. They worked steadily, assembling the tube, the small recoil-less charge, snapping and clicking the connections.

Gene said, “The targeting antenna has a rechargeable battery, they tend to drain. I’ll top it up.”

She nodded, distracted by the intricacies of a process she had trained for a month ago. She set the guidance system. Tracking would first be infrared only, zeroing in on the target’s exhaust, but once in the air and nearing its goal, it would use multiple targeting modes—laser, IR, advanced visual recognition—to get maximal impact on the main body of the aircraft.

They got it assembled and stood back to regard the linear elegance of the Dart. It had a deadly, snakelike beauty, its shiny white skin tapered to a snub point.

“Pretty, yeah,” Gene said. “And way better than any Stinger. Next generation, smarter, near four times the range.”

She knew guys liked anything that could shoot, but to her it was just a tool. She nodded.

Gene caressed the lean body of the Dart, and smiled.

Bruckner came clumping up the bus stairs with a fixed smile on his face that looked like it had been delivered to the wrong address. He waved a lit cigarette. Elinor got up, forced herself to smile. “Glad you’re back, we—”

“Got some ’freshments,” he said, dangling some beers in their six-pack plastic cradle, and she realized he was drunk.

The smile fell from her face like a picture off a wall.

She had to get along with these two, but this was too much. She stepped forward, snatched the beer bottles and tossed them onto the Victorian love seat. “No more.”

Bruckner tensed and Gene sucked in a breath. Bruckner made a move to grab the beers and Elinor snatched his hand, twisted the thumb back, turned hard to ward off a blow from his other hand—and they froze, looking into each other’s eyes from a few centimeters away.

Silence.

Gene said, “She’s right, y’know.”

More silence.

Bruckner sniffed, backed away. “You don’t have to be rough.”

“I wasn’t.”

They looked at each other, let it go.

She figured each of them harbored a dim fantasy of coming to her in the brief hours of darkness. She slept in the lumpy bed and they made do with the furniture. Bruckner got the love seat—ironic victory—and Gene sprawled on a threadbare comforter.

Bruckner talked some but dozed off fast under booze, so she didn’t have to endure his testosterone-fueled patter. But he snored, which was worse.

The men napped and tossed and worried. No one bothered her, just as she wanted it. But she kept a small knife in her hand, in case. For her, sleep came easily.

After eating a cold breakfast, they set out before dawn, 2:30 a.m., Elinor driving. She had decided to wait till then because they could mingle with early morning Air Force workers driving toward the base. This far north, it started brightening by 3:30, and they’d be in full light before 5:00. Best not to stand out as they did their last reconnaissance. It was so cold she had to run the heater for five minutes to clear the windshield of ice. Scraping with her gloved hands did nothing.

The men had grumbled about leaving absolutely nothing behind. “No traces,” she said. She wiped down every surface, even though they’d worn medical gloves the whole time in the bus.

Gene didn’t ask why she stopped and got a gas can filled with gasoline, and she didn’t say. She noticed the wind was fairly strong and from the north, and smiled. “Good weather. Prediction’s holding up.”

Bruckner said sullenly, “Goddamn cold.”

“The KC Extenders will take off into the wind, head north.” Elinor judged the nearly cloud-free sky. “Just where we want them to be.”

They drove up a side street in Mountain View and parked overlooking the fish hatchery and golf course, so she could observe the big tank refuelers lined up at the loading site. She counted five KC-10 Extenders, freshly surplussed by the Air Force. Their big bellies reminded her of pregnant whales.

From their vantage point, they could see down to the temporarily expanded checkpoint, set up just outside the base. As foreseen, security was stringently tight this near the airfield—all drivers and passengers had to get out, be scanned, IDs checked against global records, briefcases and purses searched. K-9 units inspected car interiors and trunks. Explosives-detecting robots rolled under the vehicles.

She fished out binoculars and focused on the people waiting to be cleared. Some carried laptops and backpacks and she guessed they were the scientists flying with the dispersal teams. Their body language was clear. Even this early, they were jazzed, eager to go, excited as kids on a field trip. One of the pilots had mentioned there would be some sort of preflight ceremony, honoring the teams that had put all this together. The flight crews were studiedly nonchalant—this was an important, high-profile job, sure, but they couldn’t let their cool down in front of so many science nerds. She couldn’t see well enough to pick out Ted, or the friendly woman from the bar.

In a special treaty deal with the Arctic Council, they would fly from Elmendorf and arc over the North Pole, spreading hydrogen sulfide in their wakes. The tiny molecules of it would mate with water vapor in the stratospheric air, making sulfurics. Those larger, wobbly molecules reflected sunlight well—a fact learned from studying volcano eruptions back in the TwenCen. Spray megatons of hydrogen sulfide into the stratosphere, let water turn it into a sunlight-bouncing sheet—SkyShield—and they could cool the entire Arctic.

Or so the theory went. The Arctic Council had agreed to this series of large-scale experiments, run by the USA since they had the in-flight refuelers that could spread the tiny molecules to form the SkyShield. Small-scale experiments—opposed, of course, by many enviros—had seemed to work. Now came the big push, trying to reverse the retreat of sea ice and warming of the tundra.

Anchorage lay slightly farther north than Oslo, Helsinki, and Stockholm, but not as far north as Reykjavik or Murmansk. Flights from Anchorage to Murmansk would let them refuel and reload hydrogen sulfide at each end, then follow their paths back over the pole. Deploying hydrogen sulfide along their flight paths at 45,000 feet, they would spread a protective layer to reflect summer sunlight. In a few months, the sulfuric droplets would ease down into the lower atmosphere, mix with moist clouds, and come down as rain or snow, a minute, undetectable addition to the acidity already added by industrial pollutants. Experiment over.

The total mass delivered was far less than that from volcanoes like Pinatubo, which had cooled the whole planet in 1991–92. But volcanoes do messy work, belching most of their vomit into the lower atmosphere. This was to be a designer volcano, a thin skin of aerosols skating high across the stratosphere.

It might stop the loss of the remaining sea ice, the habitat of the polar bear. Only 10% of the vast original cooling sheets remained. Equally disruptive changes were beginning to occur in other parts of the world.

But geoengineered tinkerings would also be a further excuse to delay cutbacks in carbon dioxide emissions. People loved convenience, their air conditioning and winter heating and big lumbering SUVs. Humanity had already driven the air’s CO2 content to twice what it was before 1800, and with every developing country burning oil and coal as fast as they could extract them, only dire emergency could drive them to abstain. To do what was right.

The greatest threat to humanity arose not from terror, but error. Time to take the gloves off.

She put the binocs away and headed north. The city's seacoast was mostly rimmed by treacherous mudflats, even after the sea kept rising. Still, there were coves and sandbars of great beauty. Elinor drove off Glenn Highway to the west, onto progressively smaller, rougher roads, working their way backcountry by Bureau of Land Management roads to a sagging, long-unused access gate for loggers. Bolt cutters made quick work of the lock securing its rusty chain closure. After she pulled through, Gene carefully replaced the chain and linked it with an equally rusty padlock, brought for this purpose. Not even a thorough check would show it had been opened, till the next time BLM tried to unlock it.

They were now on Elmendorf, miles north of the airfield, far from the main base's bustle and security precautions. Thousands of acres of mudflats, woods, lakes, and inlet shoreline lay almost untouched, used for military exercises and not much else. Nobody came here except for infrequent hardy bands of off-duty soldiers or pilots, hiking with maps red-marked UXO for "Unexploded Ordnance." Lost live explosives, remnants of past field maneuvers, tended to discourage casual sightseers and trespassers, and the Inuit villagers wouldn't be berry-picking till July and August.

She consulted her satellite map, then took them on a side road, running up the coast. They passed above a cove of dark blue waters.

Beauty. Pure and serene.

The sea-level rise had inundated many of the mudflats and islands, but a small rocky platform lay near shore, thick with trees. Driving by, she spotted a bald eagle perched at the top of a towering spruce tree. She had started birdwatching as a Girl Scout and they had time; she stopped.

She left the men in the Ford and took out her long-range binocs. The eagle was grooming its feathers and eyeing the fish rippling the waters offshore. Gulls wheeled and squawked, and she could see sea lions knifing through fleeing shoals of herring, transient dark islands breaking the sheen of waves. Crows joined in onshore, hopping on the rocks and pecking at the predators’ leftovers.

She inhaled the vibrant scent of ripe wet salty air, alive with what she had always loved more than any mere human. This might be the last time she would see such abundant, glowing life, and she sucked it in, trying to lodge it in her heart for times to come.

She was something of an eagle herself, she saw now, as she stood looking at the elegant predator. She kept to herself, loved the vibrant natural world around her, and lived by making others pay the price of their own foolishness. An eagle caught hapless fish. She struck down those who would do evil to the real world, the natural one.

Beyond politics and ideals, this was her reality.

Then she remembered what else she had stopped for. She took out her cell phone and pinged the alert number.

A buzz, then a blurred woman’s voice. “Able Baker.”

“Confirmed. Get a GPS fix on us now. We’ll be here, same spot, for pickup in two to three hours. Assume two hours.”

Buzz buzz. “Got you fixed. Timing’s okay. Need a Zodiac?”

“Yes, definite, and we’ll be moving fast.”

“You bet. Out.”

Back in the cab, Bruckner said, “What was that for?”

“Making the pickup contact. It’s solid.”

“Good. But I meant, what took so long.”

She eyed him levelly. “A moment spent with what we’re fighting for.”

Bruckner snorted. “Let’s get on with it.”

Elinor looked at Bruckner and wondered if he wanted to turn this into a spitting contest just before the shoot.

“Great place,” Gene said diplomatically.

That broke the tension and she started the Ford.

They rose further up the hills northeast of Anchorage, and at a small clearing, she pulled off to look over the landscape. To the east, mountains towered in lofty gray majesty, flanks thick with snow. They all got out and surveyed the terrain and sight angles toward Anchorage. The lowlands were already thick with summer grasses, and the winds sighed southward through the tall evergreens.

Gene said, “Boy, the warming’s brought a lot of growth.”

Elinor glanced at her watch and pointed. “The KCs will come from that direction, into the wind. Let’s set up on that hillside.”

They worked around to a heavily wooded hillside with a commanding view toward Elmendorf Air Force Base. “This looks good,” Bruckner said, and Elinor agreed.

“Damn—a bear!” Gene cried.

They looked down into a narrow canyon with tall spruce. A large brown bear was wandering along a stream about a hundred meters away.

Elinor saw Bruckner haul out a .45 automatic. He cocked it.

When she glanced back the bear was looking toward them. It turned and started up the hill with lumbering energy.

“Back to the car,” she said.

The bear broke into a lope.

Bruckner said, “Hell, I could just shoot it. This is a good place to see the takeoff and—”

“No. We move to the next hill.”

Bruckner said, “I want—”

“Go!”

They ran.

One hill farther south, Elinor braced herself against a tree for stability and scanned the Elmendorf landing strips. The image wobbled as the air warmed across hills and marshes.

Lots of activity. Three KC-10 Extenders ready to go. One tanker was lined up on the center lane and the other two were moving into position.

“Hurry!” she called to Gene, who was checking the final setup menu and settings on the Dart launcher.

He carefully inserted the missile itself in the launcher. He checked, nodded and lifted it to Bruckner. They fitted the shoulder straps to Bruckner, secured it, and Gene turned on the full arming function. “Set!” he called.

Elinor saw a slight stirring of the center Extender and it began to accelerate. She checked: right on time, 0900 hours. Hard-core military like Bruckner, who had been a Marine in the Middle East, called Air Force the “saluting Civil Service,” but they did hit their markers. The Extenders were not military now, just surplus, but flying giant tanks of sloshing liquid around the stratosphere demands tight standards.

“I make the range maybe 20 kilometers,” she said. “Let it pass over us, hit it close as it goes away.”

Bruckner grunted, hefted the launcher. Gene helped him hold it steady, taking some of the weight. Loaded, it weighed nearly 50 pounds. The Extender lifted off, with a hollow, distant roar that reached them a few seconds later, and Elinor could see that media coverage was high. Two choppers paralleled the takeoff for footage, then got left behind.

The Extender was a full-extension DC-10 airframe and it came nearly straight toward them, growling through the chilly air. She wondered if the chatty guy from the bar, Ted, was one of the pilots. Certainly, on a maiden flight the scientists who ran this experiment would be on board, monitoring performance. Very well.

“Let it get past us,” she called to Bruckner.

He took his head from the eyepiece to look at her. “Huh? Why—”

“Do it. I’ll call the shot.”

“But I’m—”

“Do it.”

The airplane was rising slowly and flew by them a few kilometers away.

“Hold, hold…” she called. “Fire.”

Bruckner squeezed the trigger and the missile popped out—whuff!—seemed to pause, then lit. It roared away, startling in its speed—straight for the exhausts of the engines, then correcting its vectors, turning, and rushing for the main body. Darting.

It hit with a flash and the blast came rolling over them. A plume erupted from the airplane, dirty black.

“Bruckner! Resight—the second plane is taking off.”

She pointed. Gene chunked the second missile into the Dart tube. Bruckner swiveled with Gene’s help. The second Extender was moving much too fast, and far too heavy, to abort takeoff.

The first airplane was coming apart, rupturing. A dark cloud belched across the sky.

Elinor said clearly, calmly, “The Dart’s got a max range about right so… shoot.”

Bruckner let fly and the Dart rushed off into the sky, turned slightly as it sighted, accelerated like an angry hornet. They could hardly follow it. The sky was full of noise.

“Drop the launcher!” she cried.

“What?” Bruckner said, eyes on the sky.

She yanked it off him. He backed away and she opened the gas can as the men watched the Dart zooming toward the airplane. She did not watch the sky as she doused the launcher and splashed gas on the surrounding brush.

“Got that lighter?” she asked Bruckner.

He could not take his eyes off the sky. She reached into his right pocket and took out the lighter. Shooters had to watch, she knew.

She lit the gasoline and it went up with a whump.

“Hey! Let’s go!” She dragged the men toward the car.

They saw the second hit as they ran for the Ford. The sound got buried in the thunder that rolled over them as the first Extender hit the ground kilometers away, across the inlet. The hard clap shook the air, made Gene trip, then stagger forward.

She started the Ford and turned away from the thick column of smoke rising from the launcher. It might erase any fingerprints or DNA they’d left, but it had another purpose too.

She took the run back toward the coast at top speed. The men were excited, already reliving the experience, full of words. She said nothing, focused on the road that led them down to the shore. To the north, a spreading dark pall showed where the first plane went down.

One glance back at the hill told her the gasoline had served as a lure. A chopper was hammering toward the column of oily smoke, buying them some time.

The men were hooting with joy, telling each other how great it had been. She said nothing.

She was happy in a jangling way. Glad she’d gotten through without the friction with Bruckner coming to a point, too. Once she’d been dropped off, well up the inlet, she would hike around a bit, spend some time birdwatching, exchange horrified words with anyone she met about that awful plane crash—No, I didn’t actually see it, did you?—and work her way back to the freighter, slipping by Elmendorf in the chaos that would be at crescendo by then. Get some sleep, if she could.

They stopped above the inlet, leaving the Ford parked under the thickest cover they could find. She looked for the eagle, but didn’t see it. Frightened skyward by the bewildering explosions and noises, no doubt. They ran down the incline. She thumbed on her comm, got a crackle of talk, handed it to Bruckner. He barked their code phrase, got confirmation.

A Zodiac was cutting a V of white, homing in on the shore. The air rumbled with the distant beat of choppers and jets, the search still concentrated around the airfield. She sniffed the rotten egg smell, already here from the first Extender. It would kill everything near the crash, but this far off should be safe, she thought, unless the wind shifted. The second Extender had gone down closer to Anchorage, so it would be worse there. She put that out of her mind.

Elinor and the men hurried down toward the shore to meet the Zodiac. Bruckner and Gene emerged ahead of her as they pushed through a stand of evergreens, running hard. If they got out to the pickup craft, then suitably disguised among the fishing boats, they might well get away.

But on the path down, a stocky Inuit man stood. Elinor stopped, dodged behind a tree.

Ahead of her, Bruckner shouted, “Out of the way!”

The man stepped forward, raised a shotgun. She saw something compressed and dark in his face.

“You shot down the planes?” he demanded.

A tall Inuit racing in from the side shouted, “I saw their car comin’ from up there!”

Bruckner slammed to a stop, reached down for his .45 automatic—and froze. The double-barreled shotgun could not miss at that range.

It had happened so fast. She shook her head, stepped quietly away. Her pulse hammered as she started working her way back to the Ford, slipping among the trees. The soft loam kept her footsteps silent.

A third man came out of the trees ahead of her. She recognized him as the young Inuit father from the diner, and he cradled a black hunting rifle. “Stop!”

She stood still, lifted her binocs. “I’m bird watching, what—”

“I saw you drive up with them.”

A deep, brooding voice behind her said, “Those planes were going to stop the warming, save our land, save our people.”

She turned to see another man pointing a large caliber rifle. “I, I, the only true way to do that is by stopping the oil companies, the corporations, the burning of fossil—”

The shotgun man, eyes burning beneath heavy brows, barked, “What’ll we do with ’em?”

She talked fast, hands up, open palms toward him. “All that SkyShield nonsense won’t stop the oceans from turning acid. Only fossil—”

“Do what you can, when you can. We learn that up here.” This came from the tall man. The Inuit all had their guns trained on them now. The tall man gestured with his and they started herding the three of them into a bunch. The men’s faces twitched, fingers trembled.

The man with the shotgun and the man with the rifle exchanged nods, quick words in a complex, guttural language she could not understand. The rifleman seemed to dissolve into the brush, steps fast and flowing, as he headed at a crouching dead run down to the shoreline and the waiting Zodiac.

She sucked in the clean sea air and could not think at all. These men wanted to shoot all three of them and so she looked up into the sky to not see it coming. High up in a pine tree with a snapped top an eagle flapped down to perch. She wondered if this was the one she had seen before.

The oldest of the men said, “We can’t kill them. Let ’em rot in prison.”

The eagle settled in. Its sharp eyes gazed down at her and she knew this was the last time she would ever see one. No eagle would ever live in a gray box. But she would. And never see the sky.


Is U.S. Science in Decline?

YU XIE

The nation’s position relative to other countries is changing, but this need not be reason for alarm.

“Who are the most important U.S. scientists today?” Our host posed the question to his guests at a dinner that I attended in 2003. Americans like to talk about politicians, entertainers, athletes, writers, and entrepreneurs, but rarely, if ever, scientists. Among a group of six academics from elite U.S. universities at the dinner, no one could name a single outstanding contemporary U.S. scientist.

This was not always so. For much of the 20th century, Albert Einstein was a household-name celebrity in the United States, and every academic was familiar with names such as James Watson, Enrico Fermi, and Edwin Hubble. Today, however, Americans’ interest in pure science, unlike their interest in new “apps,” seems to have waned. Have the nation’s scientific achievements and strengths also lessened? Indeed, scholars and politicians alike have begun to worry that U.S. science may be in decline.

If the United States loses its dominance in science, historians of science would be the last group to be surprised. Historically, the world center of science has shifted several times, from Renaissance Italy to England in the 17th century, to France in the 18th century, and to Germany in the 19th century, before crossing the Atlantic in the early 20th century to the United States. After examining the cyclical patterns of science centers in the world with historical data, Japanese historian of science Mitsutomo Yuasa boldly predicted in 1962 that “the scientific prosperity of [the] U.S.A., begun in 1920, will end in 2000.”

Needless to say, Yuasa’s prediction was wrong. By all measures, including funding, total scientific output, highly influential scientific papers, and Nobel Prize winners, U.S. leadership in science remains unparalleled today. Containing only 5% of the world’s total population, the United States can consistently claim responsibility for one- to two-thirds of the world’s scientific activities and accomplishments. Present-day U.S. science is not a simple continuation of science as it was practiced earlier in Europe. Rather, it has several distinctive new characteristics: It employs a very large labor force; it requires a great deal of funding from both government and industry; and it resembles other professions such as medicine and law in requiring systematic training for entry and compensating for services with financial, as well as nonfinancial, rewards. All of these characteristics of modern science are the result of dramatic and integral developments in science, technology, industry, and education in the United States over the course of the 20th century. In the 21st century, however, a debate has emerged concerning U.S. ability to maintain its world leadership in the future.

The debate involves two opposing views. The first view is that U.S. science, having fallen victim to a new, highly competitive, globalized world order, particularly to the rise of China, India, and other Asian countries, is now declining. Proponents of this alarmist view call for significantly more government investment in science, as stated in two reports issued by the National Academy of Sciences (NAS), the National Academy of Engineering, and the Institute of Medicine: Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future in 2007, and Rising Above the Gathering Storm: Rapidly Approaching Category 5 in 2010.

The second view is that if U.S. science is in trouble, this is because there are too many scientists, not too few. Newly trained scientists have glutted the scientific labor market and contribute low-cost labor to organized science but are unable to become independent and, thus, highly innovative. Proponents of the second view, mostly economists, are quick to point out that claims concerning a shortage of scientific personnel are often made by interest groups—universities, senior scientists, funding agencies, and industries that employ scientifically trained workers—that would benefit from an increased supply of scientists. This view is well articulated in two reports issued by the RAND Corporation in 2007 and 2008 in response to the first NAS report, as well as in economist Paula Stephan’s recent book, How Economics Shapes Science.

What do data reveal?

Which view is correct? In a 2012 book I coauthored with Alexandra Killewald, Is American Science in Decline?, we addressed this question empirically, drawing on as much available data as we could find covering the past six decades. After analyzing 18 large, nationally representative data sets, in addition to a wealth of published and Web-based materials, we concluded that neither view is wholly correct, though both have some merit.

Between the 1960s and the present, U.S. science has fared reasonably well on most indicators that we can construct. The following is a summary of the main findings reported in our book.

First, the U.S. scientific labor force, even excluding many occupations such as medicine that require scientific training, has grown faster than the general labor force. Census data show that the scientific labor force has increased steadily since the 1960s. In 1960, science and engineering constituted 1.3% of the total labor force of about 66 million. By 2007, it was 3.3% of a much larger labor force of about 146 million. Of course, between 1960 and 2007, the share of immigrants among scientists increased, at a time when all Americans were becoming better educated. As a result, the percentage of scientists among native-born Americans with at least a college degree has declined over time. However, diversity has increased as women and non-Asian minorities have increased their representation among U.S. scientists.
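
For readers who want to see how these percentages translate into head counts, the short sketch below turns the article’s 1960 and 2007 figures into approximate absolute numbers and growth factors. It is an illustrative back-of-the-envelope calculation, not part of the authors’ analysis.

```python
# Rough arithmetic implied by the census figures quoted above.
# Inputs are the article's numbers; the derived counts are approximate.

figures = {
    1960: {"total_labor_force": 66e6, "sci_eng_share": 0.013},
    2007: {"total_labor_force": 146e6, "sci_eng_share": 0.033},
}

# Implied number of science and engineering workers in each year.
counts = {year: v["total_labor_force"] * v["sci_eng_share"] for year, v in figures.items()}

print(f"Science and engineering workers, 1960: ~{counts[1960] / 1e6:.1f} million")
print(f"Science and engineering workers, 2007: ~{counts[2007] / 1e6:.1f} million")
print(f"Growth of the scientific labor force:  ~{counts[2007] / counts[1960]:.1f}x")
print(f"Growth of the total labor force:       ~{146e6 / 66e6:.1f}x")
```

On these figures the scientific workforce grew roughly five- to sixfold over the period, while the overall labor force a bit more than doubled, which is the sense in which it has grown faster.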

Second, despite perennial concerns about the performance of today’s students in mathematics and science, today’s U.S. schoolchildren are performing in these areas as well as or better than students in the 1970s. At the postsecondary level, there is no evidence of a decline in the share of graduates receiving degrees in scientific fields. U.S. universities continue to graduate large numbers of young adults well trained in science, and the majority of science graduates do find science-related employment. At the graduate level, the share of foreign students among recipients of science degrees has increased over time. More native-born women now receive science degrees than before, although native-born men have made no gains. Taken together, education data suggest that Americans are doing well, or at least no worse than in the past, at obtaining quality science education and completing science degrees.

Finally, we used a large number of indicators to track changes in society’s general attitudes toward science, including confidence in science, support for funding basic science, scientists’ prestige, and freshman interest in science research careers. Those indicators all show that the U.S. public has remained overwhelmingly positive toward scientists and science in general. About 80% of Americans endorse federal funding for scientific research, even if it has no immediate benefits, and about 70% believe that the benefits of science outweigh the costs. These numbers have stayed largely unchanged over recent decades. Americans routinely express greater confidence in the leadership of the scientific community than in that of Congress, organized religion, or the press.

Is it possible that Americans support science even though they themselves have no interest in it? To measure Americans’ interest in science, we also analyzed all cover articles published in Newsweek magazine and all books on the New York Times Best Sellers List from 1950 to 2007. From these data, we again observe an overall upward trend in Americans’ interest in science.

Sources of anxiety

What, then, are the sources of anxiety about U.S. science? In Is American Science in Decline?, we identify three of them, two historical and one comparative. First, our analysis of earnings using data from the U.S. decennial censuses revealed that scientists’ earnings have grown very slowly, falling further behind those of other high-status professionals such as doctors and lawyers. This unfavorable trend is particularly pronounced for scientists at the doctoral level.

Second, scientists who seek academic appointments now face greater challenges. Tenure-track positions are in short supply relative to the number of new scientists with doctoral training seeking such positions. As a result, more and more young scientists are now forced to take temporary postdoctoral appointments before finding permanent jobs. Job prospects are particularly poor in biomedical science, which has been well supported by federal funding through the National Institutes of Health. The problem is that the increased spending is mainly in the form of research grants that enhance research labs’ ability to hire temporary research staff, whereas universities are reluctant to expand permanent faculty positions. Some new Ph.D.s in biomedical fields need to take on two or more postdoctoral or temporary positions before having a chance to find a permanent position. It is the poor job outlook for these new Ph.D.s and their relatively low earnings that have led some economists to argue that there is a glut of scientists in the United States.

Third, of course, the greatest source of anxiety concerning U.S. science has been the globalization of science, resulting in greater competition from other countries. Annual news releases reveal the mediocre performance of U.S. schoolchildren on international tests of math and science. The growth of U.S. production of scientific articles has slowed down considerably over the past several decades as compared with that in other areas, particularly East Asia. As a result, the share of world science contributed by the United States is dwindling.

But in some ways, the globalization of science is a result of U.S. science’s success. Science is a public good, and a global one at that. Once discovered, science knowledge is codified and then can be taught and consumed anywhere in the world. The huge success of U.S. science in the 20th century meant that scientists in many less developed countries, such as China and India, could easily build on the existing science foundation largely built by U.S. scientists and make new scientific discoveries. Internet communication and cheap air transportation have also minimized the importance of location, enabling scientists in less developed countries to have access to knowledge, equipment, materials, and collaborators in more developed countries such as the United States.

The globalization of science has also made its presence felt within U.S. borders. More than 25% of practicing U.S. scientists are immigrants, up from 7% in 1960. Almost half of students receiving doctoral degrees in science from U.S. universities are temporary residents. The rising share of immigrants among practicing scientists and engineers indicates that U.S. dependence on foreign-born and foreign-trained scientists has dramatically increased. Although most foreign recipients of science degrees from U.S. universities today prefer to stay in the United States, for both economic and scientific reasons, there is no guarantee that this will last. If the flow of foreign students to U.S. science programs should stop or dramatically decline, or if most foreign students who graduate with U.S. degrees in science should return to their home countries, this could create a shortage of U.S. scientists, potentially affecting the U.S. economy or even national security.

What’s happening in China?

Although international competition doesn’t usually refer to any specific country in discussions of science policy, today’s discourse does tend to refer, albeit implicitly, to a single country: China. In 2009, national headlines revealed that students in Shanghai outscored their peers around the world in math, science, and reading on the Program for International Student Assessment (PISA), a test administered to 15-year-olds in 65 countries. In contrast, the scores of U.S. students were mediocre. Although U.S. students had performed similarly on these comparative tests for a long time, the 2009 PISA results had an unusual effect in sparking a national discussion of the proposition that the United States may soon fall behind China and other countries in science and technology. Secretary of Education Arne Duncan referred to the results as “a wake-up call.”

China is the world’s most populous country, with 1.3 billion people, and its economy grew at an annualized rate of 7.7% between 1978 and 2010. Other indicators also suggest that China has been developing its science and technology with the intention of narrowing the gap between itself and the United States. Activities in China indicate its inevitable rise as a powerhouse in science and technology, and it is important to understand what this means for U.S. science.

The Chinese government has spent large sums of money trying to upgrade Chinese science education and improve China’s scientific capability. It more than doubled the number of higher education institutions from 1,022 in 1998 to 2,263 in 2008 and upgraded about 100 elite universities with generous government funding. China’s R&D expenditure has been growing at 20% per year, benefitting both from the increase in gross domestic product (GDP) and the increase in the share of GDP spent on R&D. In addition, the government has devised various attractive programs, such as the Changjiang Scholars Program and the Thousand Talent Program, to lure expatriate Chinese-born scientists, particularly those working in the United States, back to work in China on a permanent or temporary basis.

The government’s efforts to improve science education seem to have paid off. China is now by far the world’s leader in bachelor’s degrees in science and engineering, with 1.1 million in 2010, more than four times the U.S. number. This large disparity reflects not only China’s dramatic expansion in higher education since 1999 but also the fact that a much higher percentage of Chinese students major in science and engineering, around 44% in 2010, compared to 16% in the United States. Of course, China’s population is much larger. Adjusting for population size differences, the two countries have similar proportions of young people with science and engineering bachelor’s degrees. China’s growth in the production of science and engineering doctoral degrees has been comparably dramatic, from only 10% of the U.S. total in 1993 to a level exceeding that in the United States by 18% in 2010. Of course, questions have been raised both in China and abroad about whether the quality of a Chinese doctoral degree is equivalent to that of a U.S. degree.
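
The per-capita claim can be checked with equally simple arithmetic. The sketch below uses the degree counts given above (1.1 million for China; “more than four times” the U.S. number implies a U.S. figure of roughly 0.27 million) together with approximate 2010 populations of about 1.34 billion and 0.31 billion, which are rounded figures supplied here for illustration rather than taken from the article.

```python
# Illustrative per-capita comparison of 2010 science and engineering bachelor's degrees.
# Degree counts follow the text; the population figures are approximate 2010 values
# added here for illustration only.

degrees = {"China": 1.1e6, "United States": 1.1e6 / 4}   # "more than four times" => U.S. roughly 0.27 million
population = {"China": 1.34e9, "United States": 0.31e9}  # assumed, rounded 2010 populations

for country, n in degrees.items():
    per_100k = n / population[country] * 1e5
    print(f"{country}: ~{per_100k:.0f} degrees per 100,000 people")
```

On these rough numbers the two rates come out within about 10% of each other, which is what the article means by similar proportions.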

The impact of China’s heavy investment in scientific research is also unmistakable. Data from Thomson Reuters’ InCites and Essential Science Indicators databases indicate that China’s production of scientific articles grew at an annual rate of 15.4% between 1990 and 2011. In terms of total output, China overtook the United Kingdom in 2004, and Japan and Germany in 2005, and has since remained second only to the United States. The data also reveal that the quality of papers produced by Chinese scientists, measured by citations, has increased rapidly. China’s production of highly cited articles achieved parity with Germany and the United Kingdom around 2009 and reached a level of 31% of the U.S. rate in 2011.

Four factors favor China’s rise in science: a large population and human capital base, a large diaspora of Chinese-origin scientists, a culture of academic meritocracy, and a centralized government willing to invest in science. However, China’s rise in science also faces two major challenges: a rigid, top-down administration system known for misallocating resources, and rising allegations of scientific misconduct in a system where major decisions about funding and rewards are made by bureaucrats rather than peer scientists. Given these features, Chinese science is likely to do well in research areas where research output depends on material and human resources; i.e., extensions of proven research lines rather than truly innovative advances into uncharted territories. Given China’s heavy emphasis on its economic development, priority is also placed on applied rather than basic research. These characteristics of Chinese science mean that U.S. scientists could benefit from collaborating with Chinese scientists in complementary and mutually beneficial ways. For example, U.S. scientists could design studies to be tested in well-equipped and well-staffed laboratories in China.

Science in a new world order

Science is now entering a new world order and may have changed forever. In this new world order, U.S. science will remain a leader but not in the unchallenged position of dominance it has held in the past. In the future, there will no longer be one major world center of science but multiple centers. As more scientists in countries such as China and India actively participate in research, the world of science is becoming globalized as a single world community.

A more competitive environment on the international scene today does not necessarily mean that U.S. science is in decline. Just because science is getting better in other countries, this does not mean that it’s getting worse in the United States. One can imagine U.S. science as a racecar driver, leading the pack and for the most part maintaining speed, but anxiously checking the rearview mirror as other cars gain in the background, terrified of being overtaken. Science, however, is not an auto race with a clear finish line, nor does it have only one winner. On the contrary, science has a long history as the collective enterprise of the entire human race. In most areas, scientists around the world have learned from U.S. scientists and vice versa. In some ways, U.S. science may have been too successful for its own good, as its advancements have improved the lives of people in other nations, some of which have become competitors for scientific dominance.

Hence, globalization is not necessarily a threat to the wellbeing of the United States or its scientists. As more individuals and countries participate in science, the scale of scientific work increases, leading to possibilities for accelerated advancements. World science may also benefit from fruitful collaborations of scientists in different environments and with different perspectives and areas of expertise. In today’s ever more competitive globalized science, the United States enjoys the particular advantage of having a social environment that encourages innovation, values contributions to the public good, and lives up to the ideal of equal opportunity for all. This is where the true U.S. advantage lies in the long run. This is also the reason why we should remain optimistic about U.S. science in the future.

Recommended reading

Jared M. Diamond, Guns, Germs, and Steel: The Fates of Human Societies (New York: W.W. Norton & Company, 1999).

Thomas Friedman, The World Is Flat: A Brief History of the Twenty-First Century (New York: Farrar, Straus, and Giroux, 2005).

Titus Galama and James R. Hosek, eds., Perspectives on U.S. Competitiveness in Science and Technology (Santa Monica, CA: RAND, 2007).

Titus Galama and James R. Hosek, eds., U.S. Competitiveness in Science and Technology (Santa Monica, CA: RAND, 2008).

Claudia Dale Goldin and Lawrence F. Katz, The Race between Education and Technology (Cambridge, MA: Belknap Press of Harvard University Press, 2008).

Alexandra Killewald and Yu Xie, “American Science Education in its Global and Historical Contexts,” Bridge (Spring 2013): 15-23.

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future (Washington, DC: National Academies Press, 2007).

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Rapidly Approaching Category 5 (Washington, DC: National Academies Press, 2010).

Organisation for Economic Co-operation and Development, PISA 2009 Results: Executive Summary (2010); available online at www.oecd.org/pisa/pisaproducts/46619703.pdf.

Paula Stephan, How Economics Shapes Science (Cambridge, MA: Harvard University Press, 2012).

Yu Xie and Alexandra A. Killewald, Is American Science in Decline? (Cambridge, MA: Harvard University Press, 2012).


Yu Xie is Otis Dudley Duncan Distinguished University Professor of Sociology, Statistics, and Public Policy at the University of Michigan. This article is adapted from the 2013 Henry and Bryna David Lecture, which he presented at the National Academy of Sciences.


Conservatism and Climate Science

STEVEN F. HAYWARD

Objections to liberal environmental orthodoxy have less to do with the specifics of the research or the economic interests of the fossil fuel industry than with fundamental questions about hubris and democratic values.

It is not news to say that climate change has become the most protracted science and policy controversy of all time. If one dates the beginning of climate change as a top tier public issue from the Congressional hearings and media attention during the summer of 1988, shortly after which the UN Framework Convention on Climate Change was set in motion with virtually unanimous international participation, it is hard to think of another policy issue that has gone on for a generation with the arguments—and the policy strategy—essentially unchanged as if stuck in a Groundhog Day loop, and with so little progress being made relative to the goals and scale of the problem as set out. Even other areas of persistent scientific and policy controversy—such as chemical risk and genetically modified organisms—generally show some movement toward consensus or policy equilibrium out of which progress is made.

There has always been ideological and interest group division about environmental issues, but the issue of climate change has become a matter of straight partisan division, with Republicans now almost unanimously hostile to the climate science community and opposed to all proposed greenhouse gas emissions regulation. Beyond climate, Republicans have become almost wholly disengaged from the entire domain of environmental issues.

This represents a new situation. Even amidst contentious arguments in the past, major environmental legislation such as the Clean Air Act of 1990 passed with ample bipartisan majorities. Not only did the first Bush Administration engage the issue of climate change in a serious way, but as recently as a decade ago leading Republicans, including two who became presidential nominees, were proposing active climate policies of various kinds (John McCain in the Senate, and Gov. Mitt Romney in Massachusetts).

It is tempting to view this divide as another casualty of the deepening partisanship occurring almost across the board in recent years, which has seen formerly routine compromises over passing budgets become fights to the death. This kind of partisan polarization is fatal to policy change in almost every area, as the protracted fight over the Affordable Care Act shows.

Yet the increasing partisan divide about nearly everything should prompt more skepticism about a popular narrative said to explain conservative resistance to engaging climate change: that conservatives—or at least the Republican political class—have become “anti-science.” As a popular book title has it, there is a Republican “war” on science, but science has little to do with the partisan divisions over issues such as health care reform, education policy, labor rules, or tax rates. And if one wants to make the politicization of science primarily a matter of partisan calculation, a full balance sheet shows numerous instances of liberals—and Democratic administrations—disregarding solid scientific findings that contradict their policy preferences, or cutting funding for certain kinds of scientific research. Examples include the way many prominent liberals exhibit blanket opposition to genetically modified organisms, some childhood vaccines, or, to pick a narrow case, how the U.S. Fish and Wildlife Service has ignored recommendations of its own science advisory board on endangered species controversies. A closer look at what drives liberal attitudes about some of these controversies will find that their reasons are similar or identical to the reasons conservatives are critical of policy-relevant science in climate and other domains—neither side is very compelled by science that contradicts strongly held views about how politics and policies ought to be carried out. In other words, the ideological argument over science today merely replicates many of the other arguments between left and right today based on long-standing philosophical premises or principles.

Drawing back to a longer time horizon, one discovers the counter-narrative reality that government funding for science research often grew faster under Republican than Democratic administrations. Ronald Reagan, for example, supported the large appropriation for the Superconducting Super Collider; Bill Clinton cancelled the project for fiscal reasons. George W. Bush committed the U.S. to joining the international ITER consortium to pursue fusion energy, but the new Democratic Congress of 2007-2008 refused to appropriate funds for the U.S. pledge.

President Obama lent some credence to the popular narrative with the brief line in his first inaugural address that “We will restore science to its rightful place.” Rather than write off this comment as a partisan shot at the outgoing Bush administration, we should take up the implicit challenge of thinking anew about what is the “rightful” place of science in a democracy. So let me step back from climate for a moment to consider some of the serious reservations or criticisms conservatives have about science generally, and especially science combined with political power. My aim here is both to help provide a fresh understanding of the sources of the current impasse, and to suggest how the outline of a conservative climate policy might come into view—albeit a policy framework that would be unacceptably weak to the environmental establishment.

Modern science and its discontents

The conservative ambivalence or hostility toward the intersection of science and policy can be broken down into three interconnected parts: theoretical, practical, and political. I begin by taking a brief tour through these three dimensions, for they help explain why appeals to scientific authority or “consensus” are guaranteed to be effective means of alienating conservatives and spurring their opposition to most climate initiatives. At the root of many controversies today, going far beyond climate change, are starkly different perspectives between left and right about the nature and meaning of reason and the place of science.

From the earliest days of the scientific revolution dating back to the Enlightenment, conservatives (and many liberals, too) were skeptical of the claims of science to superior authority based on cracking the code of complete objectivity. Keep in mind that prior to the modern scientific revolution, “science” comprised both material and immaterial aspects of reality, which is why “natural philosophy” and “moral science” were regarded as equivalent branches of human knowledge. The special, or as we might nowadays say the “privileged” dignity of the physical or natural sciences, the view that only scientific knowledge is real knowledge, was unknown. Today science is the most powerful idea in modern life, and it does not easily accommodate or respect “nonscientific” perspectives. This collective confidence can be observed most starkly in the benign condescension with which the “hard” sciences regard social science and the humanities in most universities (and the almost pathetic fervor with which some social science fields seek to show that they really are as quantitative and thus inaccessible to non-expert understanding as physics).

Even if the once grand ambition of working out a theory of complete causation for everything is no longer seriously maintained by most scientists, the original claim of scientific pre-eminence, best expressed in Francis Bacon’s famous phrase about the use of science “for the relief of man’s estate”—that is, for the exercise of control over nature—remains firmly planted. And even if we doubt that scientific completeness can ever be achieved in the real world, the residual confidence in the scientific command and control of the behavior of matter nonetheless implies that the command and control of human behavior is the legitimate domain of science.

The scientific problem deepened with the rise of social science in the 19th century, and especially the idea that what is real in the world can be cleanly separated from our beliefs about how the world should be—the infamous fact-value distinction. The conservative objection to the fact-value distinction was based not merely on the depreciation of moral argument, but more on the implied insistence that the freedom of the human mind was a primitive idea to be overcome by science. B.F. Skinner’s crude behaviorism of 50 years ago has seen the beginnings of a revival in the current interest in neuroscience (and behavioral economics), which may also portend a revival of a much more sophisticated updating of the Skinnerian vision of therapeutic government. If we really do succeed in unlocking as never before the secrets of how brain activity influences behavior, moral sentiments, and even cognition itself, will the call for active modification against “anti-social” behavior be far behind?

But even well short of that old prospect, one of the most basic problems of social science, from a conservative point of view (though many liberals will acknowledge this point) is that despite its claims to scientific objectivity, it cannot escape a priori “value judgments” about what questions and desired outcomes are the most salient. This turns out to be the Achilles heel of all social science, which tries to conduct itself with the same confidence and sophistication as the physical sciences, but which in the end cannot escape the fact that its enterprise is indeed “social.” We can really see this social dimension at work in the “climate enterprise”—my shorthand term for the two sides, science and policy, of the climate change problem. The climate enterprise is the largest crossroads of physical and social science ever contemplated.

The social science side of climate policy vividly displays the problem of fundamental disagreement over “normative” questions. Although we can apply rigorous economic analysis to energy forecasts and emission control pathways, the arguments over proper discount rates and the relative weight of the tradeoff between economic growth and emissions constraint cannot be resolved objectively, that is to say, scientifically. Climate action advocates are right to press the issue of intergenerational equity, but like “sustainability,” a working definition or meaningful framework for guiding policy is nearly impossible to settle. The ferocious conflicts over assessment of proposed climate policy should serve as a healthy reminder that while the traditional physical sciences can tell us what is, they cannot tell us what to do.

This is only one of the reasons why the descent from the theoretical to the practical level leads conservatives to have doubts about the reach and ambition of supposedly science-grounded policies in just about every area, let alone climate change. In environmental science and policy, environmentalists like to emphasize the interconnectedness of everything, the crude popular version of which is the “butterfly effect,” where a butterfly beating its wings in Asia results in a hurricane in the Gulf of Mexico. Conservatives don’t disagree with the interconnectedness of things. Quite the opposite; the interconnectedness of phenomena is in many ways a core conservative insight, as any reader of Edmund Burke will perceive. But drawing from Burke, conservatives doubt you can ever understand all the relevant linkages correctly or fully, and especially in the policy responses put forth that emphasize the combination of centralized knowledge with centralized power. In its highest and most serious form, this skepticism flows not from the style of monkey-trial ignorance or superstition associated with Inherit the Wind, but from the cognitive or epistemological limitations of human knowledge and action associated with philosophers like Friedrich Hayek and Karl Popper (among others), which tells us that knowledge is always partial and contingent and subject to correction, all the more so as we move from the particular and local to the general and global.

Thus, the basic practical defect of scientific administration is the “synoptic fallacy” that we can command enough information and make decisions about resources and social phenomena effectively enough to achieve our initial goals. Conservative skepticism is less about science per se than its claims to usefulness in the policy realm. This skepticism combines with the older liberal view—that is, the view that values individual freedoms above all else—that the concentration of discretionary political power required for nearly all schemes of comprehensive social or economic management are a priori suspect. Today that older liberal view is the core of political conservatism. Put more simply or directly, the conservative distrust of authority based on claims of superior scientific knowledge reflects a distrust of the motives of those who make such claims, and thus a mistrust of the validity of the claims themselves.

This practical policy difficulty might be overcome or compromised, as has happened occasionally in the past, if it weren’t for how the politics of science currently fall out today. In a sentence, the American scientific community—or at least certain vocal parts of it—is susceptible to the charge that it has become an ideological faction. Put more directly, it seems many scientists have chosen partisan sides. Some scientists are quite open about their leftward orientation. In 2004, Harvard geneticist Richard Lewontin wrote a shocking admission in the New York Review of Books: “Most scientists are, at a minimum, liberals, although it is by no means obvious why this should be so. Despite the fact that all of the molecular biologists of my acquaintance are shareholders in or advisers to biotechnology firms, the chief political controversy in the scientific community seems to be whether it is wise to vote for Ralph Nader this time.” (With political judgment this bad, is it any wonder there might be doubts about the policy prescriptions of scientists?) MIT’s Kerry Emanuel, a Republican, but as mainstream as they come in climate science (Al Gore referenced his work, and in one of his books Emanuel refers to Sen. James Inhofe as a “scientific illiterate” and climate skeptics as les refusards), offers this warning to his field: “Scientists are most effective when they provide sound, impartial advice, but their reputation for impartiality is severely compromised by the shocking lack of political diversity among American academics, who suffer from the kind of group-think that develops in cloistered cultures. Until this profound and well-documented intellectual homogeneity changes, scientists will be suspected of constituting a leftist think tank.”

This partisan tilt—real or exaggerated—among the scientific establishment aggravates a general problem that afflicts nearly all domains of policy these days, namely, the way in which policy is distorted by special interests and advocacy groups in the political process. Hence we end up with energy policies favoring politically connected insiders (such as federal loan guarantees for the now-bankrupt Solyndra solar technology company) or subsidizing technologies (currently wind, solar, and ethanol) that are radically defective or incommensurate with the scale of the climate problem they are intended to remedy. The loopholes, exceptions, and massive sector subsidies (especially to coal) of the Waxman-Markey cap-and-trade bill of 2009 rendered the bill a farce even on its own modest terms and should have appalled liberals and environmentalists as much as conservatives.

Here the political naiveté of scientists does their cause a disservice with everyone; the energy policy of both political parties since the first energy shocks of the 1970s has been essentially a frivolous farce of special interest favoritism and wishful thinking, with little coherence and even less long-term care for the kind of genuine energy innovation necessary to address prospective climate change on the extreme range of the long-run projections.

Is “conservative climate policy” an oxymoron?

To be sure, few if any Republican officeholders are able to articulate this outlook with deep intellectual coherence, but then neither are most liberals capable of expressing their zealous egalitarian sentiments with the rigor of, say, John Rawls’ A Theory of Justice. And this should not excuse the near-complete Republican negligence on the whole range of environmental issues. But even if social psychologist Jonathan Haidt is correct (and I think he is) that liberals and conservatives emotionally perceive and respond to issues from deep-seated instincts rather than carefully reasoned dialectics, the divisions among us are susceptible to some rational understanding. Can the fundamental differences be harmonized or compromised?

The first point to grasp is that conservatives—or at least the currently dominant libertarian strain of the right—ironically have a more open-ended outlook toward the future than contemporary liberals. The point here is not to sneak in climate skepticism, but policy skepticism, as the future is certain to unfold in unforeseen ways, with seemingly spontaneous and disruptive changes occurring outside the view or prior command of our political class. One current example is the fracking revolution in natural gas, which is significantly responsible for U.S. per capita carbon dioxide emissions falling to their lowest level in nearly 20 years. No one, including the gas industry itself, foresaw this coming even as recently as a decade ago. (And if the political class in Washington had seen it coming, it likely would have tried to stop it; many environmentalists are deeply ambivalent about fracking at the moment.) And one key point is that the fracking revolution occurred overwhelmingly in the absence of any national policy prescription. The bad news, from a conventional environmental point of view, is that the fracking revolution, now extending to oil, is just beginning. It has decades to run, in more and more places around the world. This means the age of oil and gas is a long way from being over, and this is going to be true even in a prospective regime of rising carbon taxes. (The story is likely to be much the same for coal.)

More broadly, however, it is not necessary to be any kind of climate skeptic to be highly critical of the narrow, dreamlike quality the entire issue took on from its earliest moments. Future historians are likely to regard as a great myopic mistake the collective decision to treat climate change as more or less a large version of traditional air pollution, to be attacked with the typical emissions control policies—sort of a global version of the Clean Air Act. Likewise the diplomatic framework, a cross between arms control, trade liberalization, and the successful Montreal Protocol, was poorly suited to climate change and destined the Kyoto Protocol model to certain failure from the outset. If one were of a paranoid or conspiratorial state of mind, one might almost wonder whether the first Bush Administration committed the U.S. to this framework precisely as a way of assuring it would be self-defeating. (I doubt they were that clever or devious.) There have been a few lonely voices that have recognized these defects while still arguing in favor of action, such as Gwyn Prins of the London School of Economics and Steve Rayner of Oxford University. Two years before the failure of the 2009 Copenhagen talks, Prins and Rayner argued in Nature magazine that we should ditch the “top-down universalism” of the Kyoto approach in favor of a decentralized approach that resembles American federalism.

If ever there was an issue that required patient and fresh thinking, it was climate change 25 years ago. The modern world, especially the billions of people still striving to escape energy poverty, demands abundant amounts of cheap energy, and no amount of wishful thinking (or government subsidies or mandates) will change this. The right conceptual understanding of the problem is that we need large-scale low- and non-carbon energy sources that are cheaper than hydrocarbon energy. Unfortunately, no one knows how to do this. No one seems to know how to solve immigration, poor results from public education, or the problem of generating faster economic growth either, but we haven’t locked ourselves into a single policy framework that one must either be for or against in the same way that we have done for climate policy. Environmentalists and policy makers alike crave certainty about the policy results ahead of us, and an emphasis on innovation, even when stripped of the technological fetishes and wishful thinking that have plagued much of our energy R&D investments, cannot provide any degree of certainty about paths and rates of progress. But it was a fatally poor choice to emphasize, almost to the exclusion of any other frameworks, a policy framework based on making conventional hydrocarbon energy, upon which the world depends utterly for its well-being, more expensive and artificially scarce. This might make some emissions headway in rich industrial nations, although it hasn’t in most of them, but won’t get far in the poorer nations of the world. Subsidizing expensive renewable energy is a self-defeating mug’s game, as many European nations are currently recognizing.

While we stumble along trying to find breakthrough energy technologies with a low likelihood of success in the near and intermediate term, a more primary conservative orientation comes into view. The best framework for addressing large-scale disruptions from any cause or combination of causes is building adaptive resiliency. Too often this concept gets reduced to the defeatist concept of building seawalls, moving north, and installing more air conditioners. But humankind faces disasters and chronic calamities of many kinds and causes; think of droughts, which through history have been a scourge of civilizations. Perhaps it is grandiose or simplistic to say that the whole human story is one of gradually increasing adaptive resiliency. On the other hand, what was the European exploration and settlement of North America but an exercise in adaptive resiliency? This opens into one of the chief conservative concerns over climate change and many other problems: the pessimism that becomes a self-fulfilling prophecy. As the British historian Thomas Macaulay wrote in 1830, “On what principle is it that, when we see nothing but improvement behind us, we are to expect nothing but deterioration before us?” The 20th century saw global civilization overcome two near-apocalyptic wars and numerous murderous regimes (not all entirely overcome today), and endure 40 years of nuclear brinksmanship that threatened a nuclear holocaust in 30 minutes. To suggest human beings can’t cope with slow moving climate change is astonishingly pessimistic, and the relentless soundings of the apocalypse have done more to undermine public interest in the issue than the efforts of the skeptical community.

One caveat here is the specter of a sudden “tipping point” leading to a rapid shift in climate conditions, perhaps over a period of mere decades. To be sure, our capacity to respond to sudden tipping points is doubtful; consider the problematic reaction to the tipping point of September 11, 2001, or the geopolitical paroxysms induced by the tipping point reached in July 1945 in Alamogordo, New Mexico. The climate community would be correct to object that the open-ended and uncertain orientation I have sketched here would likely be inadequate for preparing for such a sudden change—but then again neither was the Kyoto Protocol approach that they so avidly supported.

Right now the fallback position for a tipping point scenario is geoengineering, or solar radiation management. There might ironically be surprising agreement between environmentalists and conservatives over geoengineering, albeit for opposite reasons that illustrate the central division outlined above. Liberal environmentalists tend to dislike geoengineering proposals partly for plausible philosophical reasons—humans shouldn’t be experimenting with the globe’s atmospheric system any more than we already are—and partly because of their abiding dislike of hydrocarbon energy that geoengineering would further enable. Environmentalists have compared geoengineering to providing methadone to a heroin addict, though the “oil addiction” metaphor, popular with both political parties (former oil man George W. Bush used it) is truly risible. We are also addicted to food, and to having a roof over our heads. But conservatives tend to be skeptical or opposed to geoengineering for the epistemological reasons alluded to above: the uncertainties involved with the global-scale intervention are unlikely to be known adequately enough to assure a positive outcome. Geoengineering may yet emerge as a climate adaptation tool out of emergency necessity, but it will be over the strong misgivings of both left and right alike. This shared hesitation might ironically make it possible for research on geoengineering to proceed with a lower level of distrust.

President Obama’s recent call for a new billion-dollar climate change fund aimed at research on adaptation and resiliency appears in general terms close to what I hint at here. Whether a billion dollars is a suitable amount (rather, it seems the opening bid for any spending initiative in Washington these days) or whether the fund would be spent sensibly rather than politically are important but second-order questions.

The final difference between liberals and conservatives over climate change that is essential to grasp is wholly political in the high and low sense of the term. Some prominent environmentalists, and fellow travelers like New York Times columnist Thomas Friedman, periodically express open admiration for authoritarian power to resolve climate change and other problems for which democratic governments are proving resistant precisely because of their responsiveness to public opinion—what used to be understood and celebrated as “consent of the governed.” A few environmental advocates have gone as far as to say that democracy itself should be sacrificed to the urgency of solving the climate crisis, apparently oblivious to the fact that appeals to necessity in the face of external threats have been the tyrant’s primary self-justification since the beginning of conscious human politics, and seldom ends well for the tyrant and the people alike. For example, Mayer Hillman, a senior fellow at Britain’s Policy Studies Institute and author of How We Can Save the Planet, told a reporter some time back that “When the chips are down I think democracy is a less important goal than is the protection of the planet from the death of life, the end of life on it. This [resource rationing] has got to be imposed on people whether they like it or not.” Similar sentiments are found in the book The Climate Change Challenge and the Failure of Democracy by Australians David Shearman and Joseph Wayne Smith. One of the authors (Shearman) argued that “Liberal democracy is sweet and addictive and indeed in the most extreme case, the USA, unbridled individual liberty overwhelms many of the collective needs of the citizens… There must be open minds to look critically at liberal democracy. Reform must involve the adoption of structures to act quickly regardless of some perceived liberties.”

I can think of no other species of argument more certain to provoke enthusiasm for Second Amendment rights than this. The unfortunate drift toward anti-democratic authoritarianism flows partly from frustration but also from the success the environmental community has enjoyed through litigation and a regulatory process that often skirts democratic accountability—sometimes with decent reason, sometimes not. But this kind of aggrandized hallucination of the virtues of power will prove debilitating as the scope and scale of an environmental problem like climate change enlarges. I can appreciate that many climate action advocates will find much of what I’ve said here to be inadequate, but above all liberals and environmentalists would do well to take on board the categorical imperative of climate policy from a conservative point of view, namely, that whatever policies are developed, they must be compatible with individual liberty and democratic institutions, and cannot rely on coercive or unaccountable bureaucratic administration.


Steven F. Hayward is the inaugural visiting scholar in conservative thought and policy at the University of Colorado at Boulder, and the Ronald Reagan Distinguished Visiting Professor at Pepperdine University’s Graduate School of Public Policy.


A Survival Plan for the Wild Cyborg

RINIE VAN EST

In order to stay human in the current intimate technological revolution, we must become high-tech people with quirky characters. Here are seven theses to nail to the door of our technological church.

Today, the most exciting discoveries and technological developments have to do with us, we humans. Technology settles itself rapidly around and within us; collects more and more data about us; and increasingly is able to simulate human appearances and behavior. As our relationship with technology is becoming more and more intimate, we are becoming techno-people or cyborgs. On the one hand, intimate technology offers opportunities for personal development and more control over our lives. On the other hand, governments, businesses, and other citizens may also deploy intimate technologies in order to influence or even coerce us. To put this development on the public and political agenda, the Rathenau Instituut in the Netherlands has coined the term “intimate-technological revolution,” which is partly driven by smartphones, social media, sensor networks, robotics, virtual worlds, and big data analysis. We describe this revolution in our report Intimate Technology: The Battle for Our Body and Behavior.

The fact that our selves are becoming increasingly intertwined with technology is illustrated by the ever-shrinking computer: from desktop to laptop, then to tablet and mobile phone, and soon to e-glasses, and possibly in the long term to contact lenses. This shift from the table to the lap, from the hand to the nose and even the eye, shows how technology creeps into us. For the time being, the demarcation line typically lies just outside the body, but a variety of implantable devices—for example, cochlear implants for the deaf and deep brain stimulation electrodes for treating Parkinson’s disease and severe depression—are already positioned inside the body.

Through our smartphone, smart shoes, sports watches, and life-logging cameras, we constantly inform the outside world about ourselves: obviously where we are through global positioning systems, but also what we are thinking and doing through social media. Once considered entirely private, that information is now accessible for literally the whole world to know. To some extent we now maintain our most intimate relationships by digital means. Social media are enabling new forms of relationships, from long-term and stable to short and volatile. And then there are phone apps that help us try to achieve our good intentions, such as exercising more or eating fewer sweets. They behave like compassionate but strict coaches, monitoring our metabolism and massaging our psyche.

The convergence of nanotechnology, biotechnology, information technology, and cognitive science increasingly turns biology into technology, and technology into biology. The convergence takes on three concrete forms. First, we are more and more like machines, and can thus be taken apart for maintenance and repair work and can perhaps even be upgraded or otherwise improved. Second, our interactions with one another are changing, precisely because machines are increasingly nestling into our private and social lives. And third, machines are becoming more and more humanlike, or at least engineers do their best to build in human traits, so that these machines seem to be social and emotional, and perhaps even moral and loving.

This development raises several fundamental questions: How close and intimate can technology become? At what point is technology still nicely intimate, and when does it become intimidating? Where do we have to set boundaries?

Mechanistic views of nature and mind have existed for centuries, but only recently has technology actually gained much control over our bodies and minds. Our hips and knees are now replaceable parts. Deafness, balance disorders, depression, anxiety, trauma, heart irregularities, and innumerable other maladies have become, through the use of implants and pills, machine maintenance and performance problems. And now we begin to move to never-before-seen performance levels, such as the eyeborg, the implant used by the colorblind artist Neil Harbisson to transform every hue into audible sound, with the result that he now hears colors, even the infrared and ultraviolet normally invisible to us.

The idea of intimacy used to pertain to matters of our body and mind that we would share only with people who were close to us: our immediate family members and true friends. We shared our personal intimacies by talking face to face and later, at a distance, by writing letters and then by telephone. The increasing role of technology for broadcasting information destabilizes this traditional and simple definition of what is “intimate.” On the social network Lulu, female students share their experiences about their ex-boyfriends; the geo-social network Foursquare allows users to announce their exact location in real time to all of their friends.

Consumer apps are coming that will recognize faces, analyze emotions, and link this data to our LinkedIn and Facebook profiles. When wearing Google Glass, we ourselves will be as transparent as glass, for other computer-eyeglass wearers will be able to see who we are, what we do, who our friends are, and how we feel. Information about other people will be omnipresent, and so will information about other items in our environment. In a certain radius around Starbucks outlets, computer-eyeglass wearers will receive alerts about the specialty coffee of the week, or tea if that’s their preference.

Technology is also gaining intensity in other intimate interactions. Equipment has been produced that enables parents at home to “tele-hug” a premature newborn in a hospital incubator. One in 10 love relationships now starts through online dating, and for casual sexual encounters there are Web sites such as Second Love. On the aggressive end of the intimate interaction spectrum are the military drones used by U.S. forces that, having identified and tracked you, can now kill you.

Applications and devices fulfill an increasing number of roles traditionally reserved for human beings. E-coaches encourage us to do more exercise, to conserve energy, or not to be too aggressive while e-mailing. Marketing psychologists no longer directly observe how people respond to advertising, but rather use emotion-recognition software, which is less expensive and more accurate. My son plays soccer not only against his friends but also against digital heroes like Messi and Van Persie. Digital characters can appear very humanlike. When you kill someone in the latest-generation first-person shooter games (where you view the onscreen action as if through the eyes of a person with a gun), the suffering of the avatar that you’ve just offed is so palpable that you can feel genuine remorse. Meanwhile, the international children’s aid organization Terre des Hommes put Webcam child sex tourism on the public and political agenda in many Western countries by using an online avatar named Sweetie, a virtual 10-year-old girl, to ensnare more than 1,000 online pedophiles in 65 countries. And then there’s Roxxxy, the female-shaped sex robot with a throbbing heart and five adjustable behavioral styles. And for those who find that too impersonal, there is the option of remote coitus with one’s own beloved using synthetic genitals that are connected online.

Our devices are also gaining more autonomy. Perhaps they will soon demand it? If we neatly time and plan our meetings in our calendar, Google Now automatically searches available travel routes and notifies us when the departure time is approaching. We are still waiting for the digital assistant who dutifully worries and asks whether we’re not too tired for such a late appointment, but an Outlook calendar on your iPhone can already warn you that you have a very busy day ahead. Real driverless cars have already traveled thousands of kilometers on public roads in California and in Berlin’s city center. Eventually the U.S. military wants to build drones that can independently make the decision to kill.

We are the new resource

These changes mark a new step in the Information Revolution, in which information technology is emerging as intimate technology. Whereas the raw materials of the Industrial Revolution were cotton, coal, and iron ore, the raw material of the intimate technological revolution is us. Our bodies, thoughts, feelings, preferences, conversations, and whereabouts are the inputs for intimate technology.

When the Industrial Revolution steamrolled over England and then throughout Europe and the United States, it created enormous havoc with two factors of production: labor and land. The result was social and political paroxysms accompanied by enormous cost and suffering. Those of us who now enjoy the prosperity and material comforts made possible through industrial transformation might judge the pain a reasonable sacrifice for the benefits, but we would be wise to remember how much pain there was and how we might be affected by a similarly convulsive transformation. Now that the Information Revolution thunders even faster over us, keep in mind that we and our children are the most important production factor, that our intimate body and mind are the raw material for new enterprises and capital. And as with the old Industrial Revolution, this one can destabilize the institutions and social arrangements that hold our world together. At stake here are the core attributes of our intimate world, on which our social, political, and economic worlds are built: our individual freedom, our trust in one another, our capacity for good judgment, our ability to choose what we want to focus our attention on. Unless we want to discover what a world without those intimate attributes is going to be like, it is vital that we develop the moral principles to steer the new intimate technological revolution, to lead it in humane ways and divert it from dehumanizing abuses. That is our moral responsibility.

The insight that a technological revolution is turning our intimate lives inside out in ways that demand a moral response is not yet common. But unless such awareness grows quickly, there will be no debate and no policy. And without debate and policy, we the people are at the mercy of the whims and visions not only of the technology creators, the profit makers, and government and security services, but also of the emergent logic of the technological systems themselves, which may have little to do with what their creators intend.

To start this very necessary and overdue debate, I suggest this proposition: Let us accept that we are becoming cyborgs and welcome cyborgian developments that can give us more control over our own lives. But acceptance of a cyborg future does not equal blind embrace. Thoughtlessly embracing all current developments will turn us into good-natured high-tech puppets, apparently happy as we pursue our perfect selves but gradually losing our autonomy. This is the path to a world made not for the difficult strivings of democracy and civil society, but for the perfectly efficient functioning of the marketplace and the security state.

Seven ways to become wild cyborgs

Children and adults need to retain a healthy degree of wildness, cockiness, playfulness, and sometimes annoying idiosyncrasy. We should aspire to be wild cyborgs. The challenge will be to apply intimate technology in such a way that we become cyborgs who remain fully human. I propose that we adhere to the following seven theses as a guide to our interactions with technology.

1 Without privacy we are nothing. Our data should therefore belong to us. Without privacy we cannot be free, because we cannot choose to act without our choices and our actions being known and thus subject to unseen influence and reaction. Data about our actions and decisions are continuously captured and funneled by commercial companies, state authorities, and fellow citizens. The large data owners say this is harmless, and many users simply parrot those words because they “have nothing to hide.” But if that is true, why do these same people lock their front doors and keep their credit card security codes to themselves? Too much privacy has been lost in recent years. Many people have unwittingly donated their social data to big companies in return for social media services. It is high time that we swap our childlike acceptance of our loss of privacy and autonomy for strong adult resistance. That implies consciously dealing with the ownership of our personal data, because they are of great economic, personal, and public value.

Over the coming years, the way we deal with our biological data will provide the litmus test for whether we can keep the concept of privacy alive and ensure that our physical and mental integrity are safeguarded. The first signs are discouraging. Millions of people have already started to hand out their biological data for free to all kinds of companies. Via sensors built into consumer products such as smartphones and exercise monitors, massive amounts of biological data can be collected: fingerprints (for example, to unlock your iPhone 5s), heart rate, emotions, sleep patterns, and sexual activity. The Advanced Telecommunications Research Institute (ATR) showed that wristbands with accelerometers can be used to track more than a hundred specific actions, such as the hand-washing or injections performed by nurses. Data on how we walk, collected by smart shoes, can be used to identify us, track our health, and even reveal early signs of dementia. We should rapidly become aware of the richness and potential sensitivity of our biological data, particularly in combination with social data. State security services are very interested in that type of information, and so are companies that want to market their products to you or to make decisions about your eligibility for credit, employment, or insurance. The way we handle the privacy of ourselves and others over the next five years will be decisive for how much privacy future generations will have. We should realize that by abdicating privacy we will lose our freedom.

2 We must be aware of who is presenting information to us and why. Freedom of choice has always been a central value of both the market economy and democracy. Personalization of the supply of information is putting our online freedom of choice—and we are always online—under pressure. With every click or search, we donate to the Internet service providers information about who we are and what we do. That type of information is used to build up individual user profiles, which in turn allow the providers to continually improve their ability to persuade us to do what is in their commercial or political interest and to tailor such persuasive power for each individual. What makes such propaganda and advertisement different from what we have faced up to now is that it is ubiquitous and often invisible. It can also be covertly prescriptive, pushing us to make certain choices, for example with devices that are getting better and better at mimicking human speech, faces, and behaviors to seduce and fool us. For example, psychology experiments suggest that we are particularly open to persuasion by people who look like us. Digital images of one’s own face can now be mixed, or “morphed,” with a second face from an online advertisement in ways that are not consciously discernible but still increase one’s susceptibility to persuasion. So to protect our freedom of choice we have to be aware of the interests at stake and who benefits when we make the choices we are encouraged to make. We should therefore demand that the organizations behind the devices be transparent about the way our information supply is programmed and how the software and interfaces are being used to influence us. Precedent for how to organize this might be found in health care regulations, which require that medicines be accompanied by information on side effects and that doctors base their actions on the informed consent of their patients. Maybe every app should also have an online information sheet that addresses questions such as: How is this piece of software trying to influence its users? Which algorithms are used, and how are they supposed to work?

3 We must be alert to the right of every person to freely make choices about their lives and ambitions. Individualism forms the foundation of our liberal democratic societies. So to a large extent, it should be up to individuals to choose how to employ intimate technologies in pursuing their aspirations. This position is strongly advocated by groups such as transhumanists, bio-hackers, and “quantified selfers,” who promote self-emancipation through technology. But there is no such thing as self-realization untouched by mass media, the market, public opinion, and science and technology. What image of our self are we trying to become, and where does that image come from? Many markets thrive on a popular culture that challenges normal people to become perfect, whatever that means.

We are losing the ability to just be ourselves. As more technical means become available to enhance our outward appearance and physical and mental performance—our wrinkle-free skin, our rippling abs, our flamboyant sex life, our laserlike concentration—firms will pursue ever more effective ways to seduce us into striving for a perfection that they define. Having agreed to let marketers tell us how to dress and wear our hair, are we content to have them define how we ought to shape our bodies and minds? Are we really realizing ourselves if we strive to become “perfect” in the image created by marketers? We need to protect the right to be simultaneously very special and very common, without which we may lose the capacity to accept ourselves and others for what we are. We should cherish our human ambition to strive for our own version of perfection, and also nourish ways to accept our human imperfections.

4 The acts of loving, parenting, caring for, and killing must remain the strict monopoly of real people. The history of industrial advance has also been a history of machine labor replacing human labor. This history has often been to our benefit, as drudgery and danger have been shifted from humans to machines. But as machines acquire more and more human characteristics, both physical (such as realistic avatars and robots) and mental (such as social and emotional skills), we must collectively start addressing the question of whether all the human activities that could be outsourced to machines should be outsourced. I believe we should not outsource to machines certain essential human actions, such as killing, marriage, love, and care for children and the sick. Doing so might provide wonderful examples of human ingenuity, but also the perfect formula for our dehumanization, and thus for a future of loneliness. Autonomous drone killing might be possible someday. But because a machine can never be held accountable, we should ensure that decisions on life and death are always taken by a human being. As humans we are shaped in our intimate relationships with other humans. For example, caring for others helps us to grow by teaching us empathy for those who need care and the value of sacrifice in our lives. If we start to outsource caring to technology on a large scale, we run the danger of losing a big part of what is best in our humanity.

5 We need to keep our social and emotional skills at a high level. Use it or lose it. We all know that if we don’t exercise our physical body we will lose strength and stamina. This is also true for our social and emotional skills, which are developed and maintained through interaction with other people. We are now entering a stage in which technology is taking on a more active role in the way we interact, measuring our emotions and giving us advice about how to communicate with others. In her book Alone Together: Why We Expect More from Technology and Less from Each Other, Sherry Turkle, who for decades has been studying the relationships between people and technologies, argues that the frequent use of information technology by young people is already lessening their social skills. Her fear is that our expectations of other people will gradually decrease, as will our need for true friendship and physical encounters with fellow humans.

There are plenty of signs that we should at least take her warnings seriously. What is at stake is our ability to trust our fellow humans. Our belief that someone is reliable, good, and capable is at the core of the most rewarding relationships we have with another human being. Technology can easily undermine our trust in people; think about emotion meters that check the “true” feelings of your partner, or life-logging technology to check whether what someone is telling you is really true. To stay human we have to keep our social and emotional skills, including our ability to have trust in people, at a high level. If we don’t do that, we run the risk that face-to-face communication may become too intimate an adventure and that our trust in other people will be defined and determined by technology.

6 We have the right not to be measured, analyzed, and coached. There is great value in learning things the hard way, by trial and error. In order to gain new perspectives on life, people need to be given the opportunity to make their own, sometimes stupid and painful, mistakes. New information technologies, from smart toothbrushes and Facebook to digital child dossiers and location-tracking apps, provide ample opportunities for parents to track the behavior and whereabouts of their children. But by doing so, they deprive their children of the freedom that helps them develop into independent adults, with all the ups and downs that go with it. Can a child develop in a healthy moral and psychological way if she knows she is continuously spied upon? Does the digital storage of all our “failures” endanger the right we must have to make mistakes? The ability to wipe the slate clean, to forgive ourselves and to be forgiven, to learn and move on, is an important condition for our emotional, intellectual, and moral development. The digital age forces us to ask how we can preserve the capacity to forget and to be forgiven.

The more general question is whether it will remain possible to stay out of the cybernetic loop of being continuously measured, analyzed, evaluated, and confronted with feedback. Driven by technology and legitimated by fear of terrorism, the reach of the surveillance state has expanded tremendously over the past decade. At the same time, a big-data business culture has developed in which industry takes for granted, in the name of efficiency and customer convenience, that people can be treated as data resources. This culture flourishes in the virtual world, where Internet service providers and game developers have grown accustomed to following every user’s real-time Web behavior. And as on the Internet, shopkeepers can monitor the behavior of the customers in their physical shops through Wi-Fi tracking. Samsung is monitoring our viewing habits via its smart televisions. And if we start to use computer glasses, Samsung and Google may even monitor whom and what we glance at.

The state surveils its citizens, companies surveil their customers, citizens surveil one another, and parents and schools use every means available to surveil children. Such a surveillance society is built on fear and mistrust and treats people as objects that can and must be controlled. To safeguard our autonomy and freedom of choice, we should strive for the right not to be measured, analyzed, or coached.

7 We must nurture our most precious possession, our focus of attention. Economics tells us that as human attention becomes an increasingly scarce commodity, the commercial battle for our attention will continue to intensify. Today, “real time is the new prime time,” as we incessantly check our email and texts and equally incessantly send out data for others to check. Many new communications media divert our attention away from everyday reality and toward a commercial environment in which each content provider attempts to optimally monopolize our focus. On the Internet, we all have become familiar with commercial ads that are tailored to our preferences. In the near future, smartphones, watches, eyewear, businesses, and a growing circle of digital contacts will each demand more and more of our attention during everyday activities such as shopping, cooking, or running on the beach. And since attention is a scarce resource, paying attention to one thing will come at the expense of our attention to other things. Descartes articulated our essence, “I think therefore I am,” and the digital age forces us to protect our freedom from continual intrusion and interruption, to guard our own unpolluted thoughts, our capacity to reflect on things in our own way, because that is what we really are. We must cherish what is perhaps our most precious possession, the determinant of our individual identities: our ability to decide what to think about or just to daydream.

The intimate technological revolution will remake us by using as raw material data on our metabolism, our communications, our whereabouts, and our preferences. It will provide many wonderful opportunities for personal and social development. Think of serious games for overcoming the fear of flying, treating schizophrenia, or reducing our energy consumption. But the hybridization of ourselves and our technologies, and the political and economic struggle around this process, threaten to destabilize some qualities of our intimate lives that are also among the core foundations of our civil and moral society: freedom, trust, empathy, forgiveness, forgetting, attention. Perhaps there will be a future world where these qualities are not so important, but it will be unlike our world, and from the perspective of our world it is hard to see what might be left of our humanity. I offer the above seven propositions as a good starting point to further discuss and develop the wisdom that we will need to stay human by becoming wild cyborgs in the 21st century.


Rinie van Est is coordinator of technology assessment at the Rathenau Instituut in the Netherlands.


Court Sides with Whales

The United Nations’ highest court has halted Japan’s large “research whaling” program in the Southern Ocean off Antarctica. But the decision will not stop all whaling by Japan or several other countries, and creating a “whale conservation market” that sells sustainable “whale shares,” as described in Issues, may provide an effective alternative to legal or regulatory mechanisms to protect global whale populations.

Promoting Free Internet Speech

Speaking during a visit to Beijing, Michelle Obama declared that freedom of speech, particularly on the Internet and in the news media, provides the foundation for a vibrant society. Striking a similar theme in Issues, Hillary Rodham Clinton, then the U.S. Secretary of State, said that protecting open communication—online and offline—is essential to ensuring the fundamental rights and freedoms of people everywhere.

Protecting the Unwanted Fish

The conservation group Oceana has released a new report detailing how “bycatch” is damaging the health of U.S. fisheries. Ecologist and writer Carl Safina has examined this and related problems in Issues, calling for a new era of fisheries management that will beef up old tools and adopt an array of new “smart tools” to protect these valuable and threatened resources.

Alternate Routes to Career Success

Education expert Michael J. Petrilli argues in the online magazine Slate that many students would be better served not by pushing them to pursue a traditional college education but rather by providing them with sound early education followed by programs in high school and at community colleges that help them develop strong technical and interpersonal skills. Issues has examined various ways of structuring such alternative routes to the middle class, including the expansion of occupational certificate and apprenticeship programs.

Progress in Childhood Obesity, yet Challenges Remain

A major new federal health survey has reported a 43% drop in the obesity rate among young children over the past decade, but older children and adolescents have made little or no progress. In Issues, Jeffrey P. Koplan and colleagues presented lessons from an earlier groundbreaking study by the Institute of Medicine on what the nation should be doing to address this epidemic and its higher risks for serious disease.

Pitbull Promotes Education

Along with making school attendance compulsory, states and cities should develop programs to keep students—especially those at risk of absenteeism and poor performance—engaged in learning from elementary grades through high school graduation, two education experts have noted in Issues. In an innovative application of this spirit, the pop star Pitbull is supporting a charter school in Miami that engages students by drawing its lessons in all subjects, including science and math, from the world of sports.

Immigration and the Economy

The financial services company Standard & Poor’s has recently released a report suggesting that increasing the number of visas issued to immigrants with technical skills will boost the U.S. economy and even spur job growth for native-born workers. Several Issues articles have made similar cases, but an expert in labor markets has also argued that the nation is producing more than enough quality workers in scientific and engineering fields—and that policymakers and industry leaders should proceed accordingly.

Leveling the Playing Field for Women in Science

Issues has explored the status of women in science from several angles, including in an examination of how to plug the leaks of both women and men in the scientific workforce, and in a personal essay about the choices women often face when confronting the “system” of science. Many of these and other ideas are explored in The Chronicle of Higher Education by Mary Ann Mason, co-author of the recently published book Do Babies Matter? Gender and Family in the Ivory Tower.