Archives – Fall 2006

Birdwing Butterfly from Papua New Guinea

This image of a birdwing butterfly, included in the National Academies’ permanent art collection, is part of Rosamond Purcell’s larger body of work in which she photographed specimens at natural history museums around the world. Sculptor, photographer, installation artist, and curator, Purcell has worked in many capacities, from collage artist to collaborator with the late paleontologist and science historian Stephen Jay Gould. Purcell’s photos have appeared in major publications throughout the world, and in numerous exhibitions at museums including the Metropolitan Museum of Art. She lives outside of Boston, Massachusetts.

With a wingspan of up to one foot, Queen Alexandra’s birdwing of Papua New Guinea is the largest butterfly in the world. This brilliantly colored species is very rare and is usually seen only in museums and collectors’ cabinets. Although protected by law, birdwings are continually threatened as their rainforest habitat is destroyed to make way for plantations.

Rosamond Purcell’s latest book, Bookworm (Quantuck Lane Press, distributed by W. W. Norton), is scheduled for release in October of 2006. In celebration of the new publication, the Kathleen Ewing Gallery will feature an exhibition of Purcell’s latest work from September 8 to October 28, 2006, in Washington, D.C. For more information, please visit www.kathleenewinggallery.com/exhibition_schedule.html.

From tubes to chips

This is an important book that is likely to be cited in all future work exploring both the origins of the semiconductor industry and the economic and industrial development of the Silicon Valley area in Northern California. Its key contribution is a detailed and rigorously documented examination of the linkages between the infrastructure of the vacuum tube industry that emerged in Northern California between 1920 and 1950 and the later development of a semiconductor industry in Silicon Valley. Chapters in the book cover the pre-1970s history of Litton Industries, Varian Associates, and Fairchild Semiconductor, with occasional digressions on the early history of the semiconductor firms Amelco/Teledyne, Signetics, Intersil, National Semiconductor, and Intel.

What this book does best is to show how the 1920s San Francisco Bay Area radio manufacturing and amateur radio communities provided the foundation for the wave of innovative startup companies involved in high-performance vacuum tube manufacturing. These Bay Area companies (Eitel-McCullough, Litton Engineering, ICE, Heintz and Kaufman, and Federal Telegraph) grew greatly during World War II, propelled by a continuous flow of military procurement contracts, with the invention and mass production of radars, jammers, and communications gear generating the demand for their tubes. Other electronics and radio firms—including Sperry Gyroscope, Philco, and ITT—had purchased local players or set up Bay Area operations to tap into the region’s growing knowledge base before the war. Stanford University also played an important supporting role in these developments.

After the war, with the transition to a peacetime economy, only the companies that continued to innovate and interest military users in new tubes for military applications—notably Litton, Eitel-McCullough, and Varian Associates (whose founders left Sperry Gyroscope)—prospered. With the Korean War and the Cold War, business was soon booming again. East Coast firms such as Sylvania and GE as well as local start-up Watkins-Johnson established Bay Area operations and participated in the successful pursuit of high-tech military tube markets. Before there ever was a Silicon Valley, there was a “Vacuum Valley,” and Lecuyer shows persuasively how the materials and equipment infrastructure developed for vacuum tube manufacturing became the foundation for semiconductor manufacturing in the 1950s and 1960s.

The book provides the most complete telling to date of the birth of Fairchild Semiconductor, in many respects the poster child for the growth of today’s Silicon Valley in the late 1950s. Lecuyer argues that the restructuring of defense procurement in the McNamara Department of Defense of the early 1960s, with its emphasis on budget cuts, increasing competition, and reducing sole-source contracts, led to huge changes in focus in what was quickly becoming Silicon Valley. Varian Associates merged with Eitel-McCullough and diversified out of defense and into businesses such as TV transmission equipment, vacuum equipment, and scientific and medical instrumentation. Fairchild focused on developing mass-production techniques to slash the cost of its silicon transistors and promoting their use in innovative commercial applications.

Even though the integrated circuit (IC) was invented at Fairchild (and independently by Texas Instruments at the same time), the company was slow to capitalize on it commercially. As a result, a number of Fairchild technical staff left to set up companies such as Amelco/Teledyne, Signetics, and GME. Not until 1964 did Fairchild begin to produce volume IC products, and to cope with its come-from-behind position, it consciously opted to “dump” versions of Signetics designs on the market at half the price charged by Signetics. The strategy was predicated on the idea that Fairchild’s costs would eventually come down below its selling price and that the cut-rate prices would greatly expand the commercial market for these products. It was also intended to send a signal about the punishment of defectors and to hurt Signetics financially. All of these things came to pass. The supporting effort to expand commercial markets led to Fairchild’s Gordon Moore writing an article in the trade publication Electronics in 1965, containing a famous prediction of rapidly improving semiconductor performance that became known as Moore’s Law.

In the late 1960s, Moore and others left Fairchild to found Intel, and others left Fairchild to take over East Coast transistor-maker National Semiconductor and make it over into a top Bay Area IC firm. Fueled by stock options, the recurring pattern of innovation, defection, and spinoff of a new startup became the canonical U.S. high-tech growth pattern. Lecuyer ends the book with the argument that the networks of expertise and relationships going back to the vacuum tube business were critical to the development of what became Silicon Valley, that the new silicon infrastructure built on the old vacuum tech base, that taking innovative new products and creating commercial demand for them became a critical ingredient of the new business model, and that the mobility–stock options–spinoff cycle, coupled with an easy flow of know-how and people across corporate boundaries within a region, became central to the new model for high-tech–based economic growth.

The archival research is impeccable, and the book is destined to become an oft-cited resource for those interested in the semiconductor industry. Yet the semiconductor chapters in this book are not without some serious flaws. They omit critical events as well as major players who, although outside the geographical boundaries of Silicon Valley, nonetheless had a major effect on its evolution. Firms such as Texas Instruments, Motorola, IBM, and AT&T Bell Laboratories are referred to only in passing, at best. The actions and policies of these firms in such areas as investments in R&D and licensing of intellectual property had a major role in shaping the development of companies such as Fairchild and the environment in Silicon Valley.

Another omission relates to foreign competitors, who are wholly absent from Lecuyer’s story. This is a serious oversight in an industry that has been, from its start, global in scope. These geographic blinders can sometimes have distorting effects on the narrative. For example, in describing Fairchild’s efforts to go after commercial and consumer transistor markets, the author details Fairchild’s pioneering 1963 decision to establish an offshore plant in Hong Kong solely in terms of its impacts on transistor costs. Completely absent from this version of the tale is the fact that Hong Kong by this time was a center for consumer electronics exports to the United States and a natural market for consumer electronics components.

Japanese producers had invested heavily in mastering the production of lower-quality consumer-oriented transistors in the late 1950s and were producing more transistors annually than U.S. firms by 1959. Some 55% were incorporated into transistor radios, more than 70% of which were exported. To ignore the intensifying Japanese competition and the rapid rise of consumer electronics production and component assembly in East Asia at the time when the U.S. semiconductor industry started going offshore is to miss an important piece of the puzzle. Increasing Japanese competence in transistors was also a key reason why the technological transition to integrated circuits was so critical to the future of U.S. semiconductor makers.

Lecuyer’s discussion of Fairchild’s strategy is completely silent on its failed 1962 attempt to enter the Japanese market. Blocked by the Japanese government from investing in a local production facility, Fairchild was maneuvered into transferring the Japanese rights to its patents to NEC for a pittance, on onerous terms that later obstructed its attempts to sell products in Japan.

Lecuyer also wears some disciplinary blinders. Good business history blends the individual and institutional with powerful underlying currents of economic and technological forces. So it is distressing to see the author occasionally struggling to deal with questions to which many economists and economic historians have provided well-developed answers. For example, when asking why Fairchild went into the IC business so slowly and ended up chasing spinoff companies founded by its own defecting talent, Lecuyer cites Harvard innovation guru Clayton Christensen in order to argue that even well-managed companies tend to ignore that which is not needed by their existing customer base. The more compelling economic argument is that the returns from commercializing a new innovation are often lower for an unchallenged industry incumbent than for an entrant, because a new product will typically cannibalize the sales (and profits) of older products. A new entrant has no existing sales base and no lost profits to deduct, hence a greater return on introducing the innovation. The likelihood of the incumbent facing a challenge using the new technology is minimal until dissatisfied employees have left, at which point the previously unfelt threat materializes and the incumbent is forced to pull the technology off its shelves and play catch-up. The pattern of defection and spinoff is most definitely not unique to semiconductors and is probably best explained as the predictable result of underlying economic logic rather than an idiosyncratic history of poor decisions.
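The arithmetic behind this replacement-effect argument is easy to sketch. The toy figures below are purely hypothetical (none come from the book or the review); they are meant only to show why the same innovation is worth less to an unchallenged incumbent than to an entrant once cannibalized profit is counted.

```python
# Toy illustration of the replacement effect described above.
# All figures are hypothetical; the point is the structure of the comparison.

existing_profit = 100.0   # incumbent's annual profit on its established product
new_profit = 120.0        # annual profit the innovative product would earn
cannibalization = 0.6     # share of old-product profit the new product displaces

# The incumbent's gain is net of the profit it destroys on its own product line;
# the entrant has no existing sales base, so nothing is deducted.
incumbent_gain = new_profit - cannibalization * existing_profit   # 120 - 60 = 60
entrant_gain = new_profit                                         # 120

print(f"Incumbent's incremental return: {incumbent_gain:.0f}")
print(f"Entrant's incremental return:   {entrant_gain:.0f}")
```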

Some underlying quantitative data on the relative sizes of military R&D and production contracts at various semiconductor firms would have greatly helped with the discussion of the role of military procurement in the development of these firms. At least some of this data is available, and digging it out would have been useful. Given that a small set of firms is the focus of most of the book, even a chronology of major defense programs and their role in technological developments within these companies would have been feasible and useful in advancing the analytical argument.

Those interested in other aspects of the development of Silicon Valley will have to turn elsewhere and make connections to the case studies in this book. The interactions of Stanford University with the development of the Bay Area electrical, electronics, scientific instrument, and materials industries over the course of the entire 20th century are only slightly touched on and could be explored in much greater detail. There is almost nothing about the migration of major defense contractors to the Bay Area during and after World War II.

Notwithstanding my personal list of additional discussion topics and data requests, this is an excellent book. Lecuyer is to be commended.

The False Promise of the Scientist ex Machina

In a scene in the movie Annie Hall, Woody Allen is waiting in line to enter a movie and becomes disgusted with a guy near him in the line who is blathering about media expert Marshall McLuhan. Woody finally turns to the guy and tells him that he doesn’t know what he’s talking about. The guy responds that he must be right because he teaches a course about TV, media, and culture at Columbia University. Woody then walks over behind a large marketing placard and emerges with Marshall McLuhan, who proceeds to tell the pompous professor, “I heard what you were saying. You know nothing of my work…. How you ever got to teach a course in anything is totally amazing.” Woody turns to the camera and says, “Boy, if life were only like this.”

To a certain extent, editing Issues in Science and Technology is just like that. I listen to public discussions about public policy that involves science, technology, or health, figure out what expertise would be relevant, and then convince the best expert I can find to write about the topic in Issues. I might not enjoy Woody’s instant gratification, but the general feeling of satisfaction is the same.

I was able to perform a version of this trick at a recent Gordon Research Conference on science and technology policy. (By the way, the Gordon Conference provides an outstanding opportunity to spend several days discussing a wide range of policy concerns with an extremely knowledgeable group of colleagues. The next one is scheduled for August 2008 in Big Sky, Montana.) The group was discussing the most effective ways for the S&T community to participate in public policy debates, and I remembered that Issues had published a piece on this topic by Daniel Yankelovich, the distinguished pollster and political analyst.

I found the article online and immediately developed that Woody Allen feeling. I was one smart—or lucky—editor to have found the perfect person to address the topic. Yankelovich had written about the unwritten contract between scientists and society in the lead article of the very first edition of Issues in 1985. I asked him to revisit the subject for our 20th anniversary edition and to write about what had changed. When we published “Winning Greater Influence for Science” (http://www.issues.org/19.4/yankelovich.html), I knew it was good. In rereading it, I realized how good.

In his first article, Yankelovich found value in the significant autonomy that science had achieved in setting the basic research agenda. Twenty years later, he worried about the downside of this autonomy: a widening gap between science and public life. The practical result is that “scientists are highly respected but not nearly as influential as they should be. In the arena of public policy, their voices are mostly marginalized.” He sees this as a serious problem for a society that needs the help of experts to guide a fast-moving, technology-driven world.

The gap between science and the public is evident in their fundamentally different worldviews. The rational, orderly world of science seems alien to the irrationality and discontinuity of daily life, and the two domains differ radically in their understanding of key concepts such as theory, risk, balance, weight of evidence, certainty, timeliness, and neutrality. These differences become painfully apparent when scientists and policymakers talk past one another in debates on intelligent design, climate change, or the risks of natural disaster.

Good public policy requires effective engagement by scientists, engineers, and physicians, and Yankelovich argues convincingly that it is the responsibility of the S&T community to take the initiative. They live in the public world of policy just like everyone else, but policymakers need have no connection to the world of science. Equally important, Yankelovich maintains that the outreach from S&T must extend beyond the policy elite to the general public. Public policy can be effective only with broad support, and the public is too smart to put its fate in the hands of a few self-selected experts.

Yankelovich proposes a number of strategies for integrating science into the policy process. The critical goal is to move scientists from the role of outside specialists to that of issue framers. Their aim should not be to deliver final decisions to policymakers but to provide options. Values, politics, economics, and other factors must guide choices among the options, but it will be enormously beneficial if all the options under consideration have a rigorous scientific and technical foundation. In this way, scientific and technical expertise is brought to bear no matter which option is selected, and the experts remain part of the debate to the end. This is far preferable to a scenario in which the S&T community delivers its preferred option and then goes home. Real power and influence come with being at the table as options are discussed and decisions are made.

In addressing outreach to the general public, Yankelovich raises important reservations about the goal of scientific literacy. Creating a population of lesser scientists is not the route to good policy. Rather, we should be thinking about how to enable people to make sound public judgments that take into consideration scientific as well as numerous other factors. The public must first be made aware that certain S&T-related concerns are actually important public policy issues, a relatively easy task. The second and much more difficult step is to imbue the public with the confidence and commitment to follow through to reach resolution. Some hand-wringing is necessary and desirable, but we have to find ways to move the public forward to action.

In the spirit of science, Yankelovich is putting his theory to the test. His organization, Viewpoint Learning in La Jolla, California, manages real-world experiments in public participation in policy decisions that are meant to serve as models for sound public engagement. The key to the organization’s process is the stage it calls “working through,” which takes place after information about an issue has spread through the public and become a public concern, but before decisionmakers determine a course of action. A knowledgeable public in a democratic society cannot be satisfied with an arrangement that skips this stage; it needs to be more involved in the process if the resulting policy is to have legitimacy.

Yankelovich and his colleagues at Viewpoint Learning are exploring how people can work through the development of public consensus. The first step is to overcome the wishful thinking that holds on to the unrealistic hope that consensus can be reached without making trade-offs or compromising on value judgments. Participants must be willing to interact with others and to understand their values and priorities. At this stage, more information is not the answer. This process must inevitably confront emotional and ethical questions. This is where expert-prepared scenarios can be useful. Participants need to have reliable information, but they also need to see how their values enter into the choices. The mistake that many experts make is their wishful thinking that decisions can be made without the intrusion of what they consider irrational value judgments.

As it turns out, Daniel Yankelovich is not the expert to call out of the wings to resolve a debate quickly, because the core of his message is that there are no ready-made answers to difficult policy questions, no deus ex machina to resolve dilemmas. Apparently, the satisfaction of the expert-in-the-wings fantasy can be achieved only on screen. Yankelovich’s solution is to resist the temptation to ask for a quick science fix and instead to have the scientists and other experts at the table with public representatives and policymakers throughout the options-framing and decisionmaking process.

But what about the fantasy of having Woody Allen—or Stephen Colbert—on call to feed me clever quips when I need them?

Forum – Fall 2006

Natural gas crisis?

Gary J. Schmitt’s “Natural Gas: The Next Energy Crisis?” (Issues, Summer 2006) examines a topic that deserves more attention than it has received. Although rising oil prices have preoccupied the U.S. public and policymakers, we are in the midst of a natural gas crunch that is at least as serious. Natural gas, as Schmitt points out, plays an increasingly large role in electricity generation and home heating. Equally important, it is a key ingredient in a wide variety of manufactured products ranging from cosmetics to fertilizers.

Unlike oil, which is traded at a uniform price in a global market, natural gas sells at prices that vary widely from country to country. U.S. prices are the highest in the industrialized world, and this fact works to the disadvantage of our manufacturing sector. It has been estimated that natural gas price differentials have resulted in the loss of 2.8 million U.S. manufacturing jobs.

These concerns about natural gas prompted our support for the Deep Ocean Energy Resources (DOER) Act, which passed the House of Representatives on June 29, 2006. The measure opens currently restricted areas of the outer continental shelf to oil and gas exploration, areas that include reserves sufficient to meet domestic natural gas needs for years to come. To win support from the Florida and California House delegations, we included provisions permitting individual states to maintain or reimpose drilling prohibitions up to 100 miles from their shores.

We believe that the DOER Act addresses the growing natural gas crisis explored by Schmitt and does so in an environmentally and fiscally responsible way. The measure is now in the hands of the Senate, and that chamber has the opportunity to avert a predicament that will only worsen if we fail to deal with it.

REP. NEIL ABERCROMBIE

Democrat of Hawaii

REP. JOHN PETERSON

Republican of Pennsylvania


Energy and security

In “The Myth of Energy Insecurity” (Issues, Summer 2006), Philip E. Auerswald argues that “increasing oil imports do not pose a threat to long-term U.S. security” because today’s energy markets make the threat of an economic shock from a severe oil price change unlikely, and high prices will accelerate technological change. Although the U.S. economy may withstand high energy prices, and higher prices have modestly accelerated the pace of technological change, Auerswald misidentifies the source of the energy security threat, exaggerates the likelihood that prices will reduce the pace of consumption, and draws exactly the wrong conclusion.

The United States is more energy-insecure today than at any time in the past 30 years. The energy dependence of the United States and other consuming nations (not the level of imports) is rapidly eroding U.S. power and influence around the world in four ways. First, consuming nations are reluctant to join coalitions to combat weapons proliferation and terrorism because of their dependency on their oil suppliers or their desire to secure access to exploration acreage. Chinese resistance to sanctions on Iran or Sudan and European resistance to pressure on Iran or Russia are good examples of this phenomenon. Second, when exporters have very high revenues, with earnings far in excess of those needed to finance their own budgets, they act with impunity toward their own people, their neighbors, and/or the United States: Witness Russia’s pressure on its neighbors, Iran’s flouting of international pressure regarding its nuclear program, and Venezuelan President Chavez’s competition with the United States for influence in the hemisphere. Third, high revenues are impairing the efficiency of oil markets by encouraging a new resource nationalism that restricts international access to new oil exploration acreage in Russia, Venezuela, and Ecuador, as well as most of OPEC. Most national oil companies are historically highly inefficient and undercapitalized because of the formidable needs of countries to tap their earnings for government budgets rather than reinvestment. Fourth, new nonmarket economies such as China and India are eroding U.S. influence in Latin America and Africa by their willingness to subsidize investment in exploration and to invest without insisting on host country support for transparency, governance, or acceptable human rights practices.

Looking ahead, the trends are terrible: Even at current price levels, global demand is rising rapidly and is likely to double in volume by 2030, with OPEC’s share increasing. Reducing the revenue stream to our adversaries and competitors and using government policy to change the way the world fuels transportation are the only ways to reduce this rapidly growing security threat. Change will be slow, given the volume of energy that the world consumes and the investment in existing delivery infrastructure. If high prices were leading to greater fuel economy in the United States and developing Asia, we might have hope that the market would cure this threat, as Auerswald suggests. There is no evidence that this is occurring; U.S. and global demand continues to rise. Energy security is a public good; the market will not provide it. Even today’s prices do not reflect the security externalities of oil consumption. Energy insecurity is not a myth. It is a clear and present danger, and the sooner we accept that, the sooner we will muster the political will to address it.

DAVID L. GOLDWYN

President

Goldwyn International Strategies

Washington, DC

David L. Goldwyn is co-editor of Energy and Security: Toward a New Foreign Policy Strategy.


Philip E. Auerswald’s article brings much common sense to the debate over U.S. oil dependence, which is hyped up by calls for the elimination of U.S. oil imports; nonetheless, in my view we should still be doing more to promote oil conservation.

Auerswald points out that the U.S. oil price is determined on world markets, regardless of how much oil we import. He also argues that the U.S. economy is not especially vulnerable to oil price shocks, given the small share of oil products in gross domestic product. But although the recent trebling of oil prices has not derailed the U.S. economy, a future price shock may have more serious consequences; for example, if the economy is already in a recession, or if currency and other financial speculators are jittery about large trade and fiscal imbalances. Given very tight conditions in the world oil market, any number of economic or political developments might precipitate such a price shock. Economic analyses suggest that an oil tax of roughly $5 per barrel or more might be warranted to address various macroeconomic risks from oil price shocks that private markets fail to take into account.

As regards the environment, Auerswald is right that the most important worry regarding oil consumption is its contribution of greenhouse gases to future climate change. Economists have attempted to quantify the potential damages to world agriculture, coastal activities, human health, and so on from greenhouse gases, even making a crude allowance for the risk of extreme climate scenarios. Although contentious, these studies suggest that a tax of up to $50 per ton of carbon, equivalent to an additional $5 per barrel of oil, should be imposed, so that market prices reflect environmental costs.
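The carbon-to-oil equivalence cited here can be checked with simple arithmetic. The sketch below assumes an emissions factor of roughly 0.43 metric tons of CO2 per barrel of crude burned, a standard approximation that is not given in the letter itself; with that figure, $50 per ton of carbon works out to roughly $5 to $6 per barrel.

```python
# Back-of-the-envelope check of the "$50 per ton of carbon ~ $5 per barrel" equivalence.
# The emissions factor is an approximation and is not taken from the letter itself.

carbon_tax = 50.0              # dollars per metric ton of carbon
co2_per_barrel = 0.43          # metric tons of CO2 per barrel of crude oil (approximate)
carbon_fraction = 12.0 / 44.0  # mass of carbon in a unit mass of CO2

carbon_per_barrel = co2_per_barrel * carbon_fraction    # ~0.12 tons of carbon per barrel
tax_per_barrel = carbon_tax * carbon_per_barrel         # ~$5.9 per barrel

print(f"Implied tax per barrel of oil: ${tax_per_barrel:.2f}")
```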

Geopolitical costs include constraints on foreign policy due to reluctance to upset major oil producers. Oil revenues may also end up funding insurgents in Iraq, other terrorist groups, or rogue states. However, although this petrodollar flow is of major concern, we have limited ability to prevent it in the near term; even though an oil tax of $10 per barrel would reduce oil imports, it would lower long-run world prices by only around 1.5% at best.
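The small price effect cited here follows from a standard tax-incidence approximation: the reduction in U.S. demand is spread across the entire world market. The sketch below uses hypothetical elasticities and rounded 2006 market volumes, none of which are taken from the letter, purely to illustrate the shape of the calculation.

```python
# Rough sketch of why a $10-per-barrel U.S. oil tax barely moves the world price.
# Every parameter value here is hypothetical and chosen only for illustration.

world_price = 65.0            # dollars per barrel, roughly the 2006 range
us_consumption = 20.0         # million barrels per day (approximate)
world_output = 85.0           # million barrels per day (approximate)

us_demand_elasticity = 0.3    # long-run price responsiveness of U.S. demand (assumed)
world_supply_elasticity = 0.3 # long-run price responsiveness of world supply (assumed)
world_demand_elasticity = 0.3 # responsiveness of demand outside the taxed market (assumed)

# A $10 tax raises the U.S. price by ~15%, cutting U.S. demand by elasticity * 15%.
tax = 10.0
us_demand_cut = us_demand_elasticity * (tax / world_price) * us_consumption  # ~0.9 Mb/d

# That demand shift is absorbed by the whole world market: the price falls until
# reduced supply and increased demand elsewhere make up the difference.
price_drop = (us_demand_cut / world_output) / (world_supply_elasticity + world_demand_elasticity)

print(f"Approximate fall in the long-run world price: {price_drop:.1%}")  # on the order of 1-2%
```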

Auerswald is correct that high oil prices are the best way to induce households and firms to economize on all oil uses; on economic efficiency grounds, the government should phase in a tax of around $10 per barrel or more, using the revenues to cut the deficit or other taxes. However, the key to seriously reducing oil dependence over the longer term is to create conditions conducive to the development of oil-saving technologies. This requires R&D investments across a diverse range of prospects (such as plug-in hybrids and hydrogen vehicles). But it also requires that markets anticipate the persistence of high oil prices; to this end, the government might also commit to a floor price by increasing the oil tax in the event that oil prices fall in the future.

IAN PARRY

Senior Fellow

Resources for the Future

Washington, DC


Revamping the military

In “The Pentagon’s Defense Review: Not Ready for Prime Time” (Issues, Summer 2006), Andrew F. Krepinevich Jr. provides a good summary and valid critique of the Pentagon’s recently completed Quadrennial Defense Review (QDR). The review sets out three main potential threats to U.S. security: radical Muslim jihadism, the rise of China, and nuclear proliferation. As Krepinevich points out, the QDR is most illuminating regarding U.S. strategy for dealing with the first of these threats. He may be asking too much of this document, however, in suggesting that it should provide more convincing responses to the second two. The QDR is a Department of Defense (DOD) document, necessarily largely limited to the role of that department in advancing U.S. security. About challenges without a viable military solution, it is necessarily reticent.

As Krepinevich notes, a rising China could, like Germany in the late 19th and early 20th centuries, develop in ways that threaten the international order. This is not the most likely development, but it is possible enough to justify a hedging strategy. The DOD can help create military capabilities and alliance relationships that hedge against an aggressive China. To the extent that the United States can influence China to move in a different direction, however, that task will fall largely to other agencies: State, Treasury, and the U.S. Trade Representative, for instance. One would not look to the QDR for a definitive treatment of those efforts.

Similarly with nuclear proliferation. The QDR may say little about the DOD’s plans to stem such developments, because there is little that the DOD can do to that purpose. Disarming strikes are probably counterproductive. Brandishing the threat of such strikes has already proven so. The current efforts to stem nuclear proliferation largely rest with the State Department. If those efforts fail, as seems quite possible, the DOD will undoubtedly have to adjust military strategy, bolster alliance relationships, and forgo certain military options with respect to nuclear-armed adversaries, as Krepinevich suggests. Elaborating on the steps that the United States might have to take to accommodate itself to a nuclear-armed Iran or North Korea, however, would probably undercut whatever chance remains of forestalling such a development. Again, some degree of reticence may be appropriate at this stage.

Krepinevich is right to point out that although the QDR devotes most of its attention to the unconventional threats faced by the United States, the Pentagon continues to spend most of its money dealing with the conventional ones. This is not entirely illogical. The conventional threats, although less likely, could be far more destructive. Nevertheless, there is a gap between the DOD’s rhetoric and its budget. In a recent directive, the DOD assigned stabilization operations and counterinsurgency an importance commensurate with major combat. This is not yet fully reflected in its programmatics. The department is continuing to push toward a smaller, more agile, more highly equipped military designed to fight and win lightning conventional battles. Fifteen years of post–Cold War experience suggests that the current force is already more than adequate for that purpose, and that what is needed more urgently is a more numerous, less technologically dependent military capable of long-term commitment and persistent engagement. As Krepinevich suggests, the Air Force and the Navy should remain largely focused on the conventional battle, while the Army and the Marine Corps bolster their capacity to handle this second challenge.

JAMES DOBBINS

Director

RAND International Security and Defense Policy Center

Arlington, Virginia


Nuclear waste standoff

As chairman of the House subcommittee with jurisdiction over the federal government’s nuclear R&D programs, I read Richard K. Lester’s article on reprocessing and the Global Nuclear Energy Partnership (GNEP) (“New Nukes,” Issues, Summer 2006) with interest.

Although I commend him for his vigorous analysis of GNEP, Lester misses the timing, intent, and promise of the program. On count after count, he asserts that interim storage trumps GNEP. But GNEP was never intended to solve any short-term problems. Nor was it designed to revitalize the U.S. nuclear industry by jump-starting the construction of new nuclear power plants in the near term. Lastly, GNEP was not intended to supplant the need for a permanent repository or preclude the option of interim storage.

The purpose of GNEP is to supplement these tools for managing and disposing of our nuclear waste in the long term. It represents a comprehensive strategy that includes the research, development, and yes, the demonstration of a system of technologies and processes that make up an advanced fuel cycle. In developing this cycle, the goals are (1) to extract as much energy as possible from our nuclear fuel; (2) to minimize the volume, heat, and radioactivity of the waste that will ultimately require permanent disposal; and (3) to do so in a way that is economically viable and proliferation-resistant.

One of the technology components of this advanced fuel cycle, UREX+, is well researched and well understood. It makes little sense to delay a demonstration of this reprocessing technology until it is desperately needed. By then it may be too late. Instead, we should use the time we have now to demonstrate that the reprocessing technology works and to further refine it and improve its economic viability.

Other advanced fuel cycle technologies, including advanced recycling reactors, show great promise for minimizing future waste. But they still require significant R&D in the laboratory and through computer modeling and simulation. That is why I do not support any other advanced fuel cycle technology demonstrations until the Department of Energy (DOE) (1) conducts a comprehensive systems analysis of different possible fuel cycle configurations, (2) uses that analysis to develop a detailed R&D plan, and (3) submits the plan to peer review before it is finalized.

I do not believe that this approach to the development of an advanced fuel cycle threatens the revitalization of the nuclear industry in America. Economics today isn’t economics forever. Lester fails to even mention the cost of addressing global climate change. I would rather see DOE develop a solid understanding of the technologies and their costs so that policymakers and investors alike will know the conditions under which these technologies will succeed in the marketplace of the future.

For these reasons, the nuclear industry and others hoping for a nuclear renaissance should support GNEP as another option for ensuring the long-term viability of nuclear energy in the United States.

REP. JUDY BIGGERT

Chairman, Subcommittee on Energy

House Committee on Science


We discuss two possibilities not mentioned by Richard K. Lester in his excellent analysis of President Bush’s Global Nuclear Energy Partnership (GNEP) or in his conclusion that the government should give priority to accepting spent reactor fuel from the utilities for interim storage, as opposed to committing itself to costly, premature, and unproven fuel-reprocessing and fast-reactor technologies for burning and eliminating the minor actinides as well as plutonium.

Interim storage under government auspices, with the Department of Energy (DOE) beginning to take title to the spent fuel (as it’s been legally obligated to do since early 1998), is indeed now called for pending a redesign of the troubled geologic repository project at Yucca Mountain in Nevada and a licensing of that repository in a three-to-four-year proceeding before the Nuclear Regulatory Commission (NRC).

We think that by far the quickest and best means of accomplishing this is for the utility consortium Private Fuel Storage (PFS) to establish the storage facility, licensed by the NRC in February, on the reservation of the Skull Valley band of Goshute Indians in Utah, about 50 miles southwest of Salt Lake City. This storage project, initiated by PFS eight years ago under a contract promising handsome (but as yet unrevealed) benefits to this tiny band of Goshutes, has been bitterly opposed by the state of Utah throughout the tortuous licensing effort before the NRC. Up to 40,000 metric tons of spent fuel could be accommodated there in dry-cask storage, or nearly two-thirds of all that is intended for Yucca Mountain under present law.

But for a variety of political and financial reasons, this storage facility won’t come into being unless the U.S. government gets behind it. Utah and its congressional delegation continue to fight the project, through appeals to the Bureau of Land Management (which is still to approve it) and by discouraging the utilities that make up PFS from actively pressing on to the next stage.

Lester calls for moving spent fuel from reactor sites to “one or a few secure federal interim storage facilities” but doesn’t suggest where this might be. Candidate sites surely would have to be on federal reservations in the eastern half of the country, where most of the nuclear stations are located, and the most likely candidates might be the Oak Ridge reservation in Tennessee and the Savannah River Site in South Carolina. But massive public opposition could ensue, and even if state acceptance were somehow finessed, getting the project approved by Congress and through licensing and court challenges might take at least a decade.

The PFS site in Utah, on the other hand, might be up and running within about three years if Congress adopts legislation making it the destination for spent fuel accepted by DOE from the utilities, consistent with the Nuclear Waste Policy Act. John Parkyn, board chairman and CEO of PFS, has urged the responsible House and Senate committees to respond accordingly, but so far neither Congress nor the Bush administration is moving to make this happen. Yet hundreds of millions of dollars in annual savings for the government are at stake.

Achieving a centralized solution to spent fuel storage should take the pressure off the Yucca Mountain project for early licensing and encourage a deliberate, careful redesign of the repository that is consistent with the site’s natural characteristics.

As we endeavored to explain in our piece “Proof of Safety at Yucca Mountain” in Science (October 21, 2005), the presence of oxygen and high humidity there in the “unsaturated zone” high above the water table makes the most recent DOE design exceedingly problematic. In this design, waste containers would have a corrosion-resistant nickel alloy outer shell and be placed beneath a titanium “drip shield.” But a performance assessment that must model exceedingly complex corrosion chemistry over hundreds of thousands of years is not a credible undertaking.

We advocate a capillary barrier concept, wherein first a layer of coarse gravel is placed around the waste containers and then a layer of fine sand is draped over the gravel. Any water dripping from the tunnel ceiling would be seized by strong capillary forces in the sand and moved slowly away. But proof of safety turns on the gravel layer, where capillary forces are absent. The containers ultimately fail because of corrosion from the water vapor and oxygen that are everywhere present. But the radioactive elements that emerge would form a thin coating on the surfaces of the gravel particles and diffuse so slowly within the gravel as to be effectively trapped.

Compared to corrosion chemistry, such diffusion is a far simpler physical process that lends itself to measurement in a laboratory mockup and to robust extrapolations over vast time periods. Proof of safety in an absolute sense is beyond reach, but the capillary barrier concept deserves careful, unhurried testing and analysis.

Having a place for spent fuel storage at Skull Valley should do much to foster the right conditions and attitudes for the trial of all promising new design concepts.

LUTHER J. CARTER

Independent Journalist

Washington, DC

THOMAS H. PIGFORD

Professor Emeritus

University of California, Berkeley


The authors of “Nuclear Waste and the Distant Future” (Per F. Peterson, William E. Kastenberg, and Michael Corradini, Issues, Summer 2006) believe that “a key regulatory decision for the future of nuclear power is the safety standard to be applied in the licensing of the radioactive waste depository at Yucca Mountain, Nevada.” Implied in their argument endorsing the Environmental Protection Agency’s (EPA’s) proposed unprecedentedly high limits on risk to the public from a Yucca Mountain repository is the fear that the application of conventional risk regulation and protection principles for nuclear facilities might result in Yucca Mountain not being licensable. The EPA seemed to share the same fear when faced with the Supreme Court ruling that its standard for Yucca Mountain must include compliance assessment at the time of projected maximum risk.

Peterson et al.’s demand for regulatory equity with hazardous waste regulation claims that “the longest compliance time required by the EPA is 10,000 years for deep-well injection of liquid hazardous wastes,” but neglects to mention that this injection regulation also specifically extends for as long as the waste remains hazardous.

The EPA’s proposed bifurcated regulation, touted by Peterson et al., leaves the EPA standard intact for the first 10,000 years, at a mean all-pathways individual dose of 15 millirems per year, with a separate groundwater protection standard that conforms with the EPA Safe Drinking Water Act standard of 4 millirems per year. For the period from 10,000 years to one million years, the protective groundwater standard is eliminated and the unprecedented median dose limit of 350 millirems per year is established. Because of the broad range of uncertainty in the failure rate caused by corrosion of the waste package in the Department of Energy’s performance model, the median dose limit of 350 millirems per year is equivalent to a mean dose of about 1,000 millirems per year. No regulatory body in the world has set such a high dose limit for the public’s exposure to anthropogenic radiation.
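The gap between a 350-millirem median and a roughly 1,000-millirem mean reflects how skewed the underlying uncertainty is. A minimal numerical sketch, assuming a hypothetical lognormal spread of projected doses (the actual DOE uncertainty distribution is not reproduced in the letter), shows how a broad right-skewed distribution can push the mean to nearly three times the median.

```python
import numpy as np

# Illustrative only: assume the projected doses are lognormally distributed
# around the proposed median limit, with a broad (hypothetical) spread.
median_limit = 350.0   # millirems per year, the proposed post-10,000-year median limit
sigma = 1.45           # hypothetical spread of the log of the dose

rng = np.random.default_rng(0)
doses = rng.lognormal(mean=np.log(median_limit), sigma=sigma, size=1_000_000)

print(f"median dose ~ {np.median(doses):,.0f} mrem/yr")  # ~350
print(f"mean dose   ~ {np.mean(doses):,.0f} mrem/yr")    # ~1,000 for this spread
```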

Peterson et al. are promoting a Yucca Mountain safety standard that defies three long-standing principles in the international community of nuclear waste regulation:

  1. The current generation should not impose risks on future generations that are greater than those that are acceptable to the current generation.
  2. The risks to the public from the entire nuclear fuel cycle should be apportioned among the various activities, with waste disposal representing a small portion of a total dose limit of 100 millirems per year. The National Research Council recommended a range for waste disposal of 2 to 20 millirems per year for a Yucca Mountain repository.
  3. Highly variable natural radiation background is not a reasonable basis for setting health-based regulatory limits on public exposure to anthropogenic radiation.

Changing standards and rules to ensure the construction of a scientifically deficient Yucca Mountain repository is irresponsible. Thwarting the international consensus on safety principles to meet this end is irrational.

ROBERT R. LOUX

Executive Director

Agency for Nuclear Projects

Office of the Governor

Carson City, Nevada


Per F. Peterson, William E. Kastenberg, and Michael Corradini discuss an important issue: What are appropriate standards for hazardous material disposition when the time scales for acceptable risk extend well beyond the realm of human institutional experience? How should the benefits and risks of alternatives be weighed in the regulatory process?

The authors note that 10,000-year standards to protect human health, although rare, are not new. However, the Environmental Protection Agency (EPA) is now required to set a radiation standard for nuclear spent fuel disposition at the proposed Yucca Mountain (YM) repository for a time period that is at least an order of magnitude longer. Its proposed “two-tiered” standard aroused suspicion in some quarters because it came after the Department of Energy’s (DOE’s) repository performance characterization showed long-term individual exposure levels at the compliance point well above the 10,000-year 15-millirem standard and appeared to “curve-fit” around DOE’s projection. Nevertheless, I concur with the authors that the EPA’s proposal has merit for initiating an objective discussion. Further, any methodology developed for these very long time scales should be robust enough to apply to other hazardous waste disposal challenges.

The article is less convincing when discussing “other risks,” most of which are irrelevant to a nuclear waste licensing procedure. It is also selective, mentioning coal ash but not uranium tails, for example, and not discussing the risks of nuclear proliferation. This is much more relevant than the other risks discussed (except climate change) in light of global discussions about reprocessing, which links waste management mitigation with the separation of weapons-usable plutonium (and possibly other actinides) from the waste stream.

Moving from risks to benefits, “carbon-free” nuclear power would clearly benefit from regulations that accommodate climate change risks. However, the authors argue that the benefits equation should also weigh YM sunk costs and the ongoing costs of the government’s failure to move spent fuel. This is a serious mistake: It could undermine the core principle that YM licensing should be based on rigorous scientific and technical evaluation to develop standards that protect public health. Also, the federal liability for not moving spent fuel hinges on federal ownership, regardless of where it is stored, not directly on YM operation.

Finally, the article’s focus may lead some readers to infer that the standard beyond 10,000 years is the principal obstacle to YM operation. In my view, meeting the 10,000-year standard will be at least as challenging. Compliance depends on near-perfect integrity of the engineered barriers for thousands of years, with little empirical evidence to support this conclusion.

The authors correctly note the scientific consensus about the effectiveness of deep geologic disposal, primarily because deep underground environments change very slowly. This general consensus does not, however, apply to any specific site, for which judgments must be based on extensive site-specific characterization, measurement, and modeling. Further, it is arguable whether the YM storage site qualifies as “deep underground”: Waste emplacement above the water table implies considerable dependence on the surface environment (for example, a wetter surface environment could develop on a scale far short of geological times). Objective evaluation of these and other scientific and technical issues should be the only guide in the licensing process.

ERNEST J. MONIZ

Cecil and Ida Green Professor of Physics and Engineering Systems

Massachusetts Institute of Technology

Cambridge, Massachusetts

Ernest J. Moniz is a former undersecretary of the U.S. Department of Energy.


Per F. Peterson, William E. Kastenberg, and Michael Corradini support the new Environmental Protection Agency standard for nuclear waste disposal at Yucca Mountain, Nevada. They argue strongly for developing a repository for high-level nuclear waste there, in part relying on the $8 billion already invested in Yucca characterization as a justification for going forward (a specious argument, to be sure).

A number of the authors’ arguments show a basic lack of understanding of geology and the geologic issues associated with nuclear waste disposal. The disposal of nuclear waste in a mined repository and the prediction of repository performance over time are heavily dependent on a solid understanding of the geology and evolution of the Earth system over time.

The authors correctly identify the strong consensus on geologic repositories as a solution to the disposal of nuclear waste. I agree with them wholeheartedly on this issue. But not all regions are created equal, and some sites are simply not suitable for geologic disposal. I would argue that the jury is still out on Yucca Mountain.

The authors argue that “Environments deep underground change extremely slowly with time …and therefore their past behavior can be studied and extrapolated into the long-term future.” If only it were so. The only difference between underground and surface environments is that the former are not subject to the processes of erosion, which affect the area on a short-term basis. An underground area is still subjected to tectonic processes such as volcanism and seismicity, both of which occur at Yucca Mountain. Simply locating a site underground does not provide a higher guarantee of predictability over time.

The authors’ faith in the results of the Department of Energy’s (DOE’s) performance assessment model of Yucca Mountain further indicates their lack of understanding of Earth systems. This model is highly uncertain and cannot be validated and verified (although DOE claims that it has done so). Thermodynamics teaches us that in an open system, which a geologic repository is, we cannot know all the input parameters, processes, and boundary conditions that might affect the system over time. To begin with, complete kinetic and thermodynamic data sets of various phenomena do not exist as inputs into the model. Therefore, the results of such models cannot be used as a source of real, reliable information. For instance, the model suggests that “peak risk occurs in about 60,000 years,” but that is a highly uncertain number.

The authors claim that “YM would have the capacity to store all the waste from the nuclear electricity generation needed to power the country for centuries.” What does this mean? What type of energy future do they imagine, and how much waste will be produced? There are, in fact, geologic limits on the capacity of a repository at Yucca, including the locations of faults and fractures, the extent of repository lithology, and the extent of the low water table.

It will be possible to solve the problem of nuclear waste using geologic repositories. Care must be taken to select an appropriate location, based on detailed geologic understanding and analysis, not simply because it is politically expedient to do so.

ALLISON MACFARLANE

George Mason University

Fairfax, Virginia


Per F. Peterson, William E. Kastenberg, and Michael Corradini present the case that certain Environmental Protection Agency (EPA)–proposed standards for Yucca Mountain are reasonable and suggest that a similar approach should be applied to the management of long-lived hazardous waste, the use of fossil fuels, and other human activities. I have no disagreement with the authors’ various technical assertions, but I am skeptical of the authors’ recommendation.

They correctly note that the proposed standards for Yucca Mountain are far more stringent than those governing other societal activities. Although the proposed EPA standards for Yucca Mountain would require a demonstration of compliance with a dose limit through the time of peak risk (several hundred thousand years in the future), the time horizon for the evaluation of other societal risks, such as those from disposal sites for chemical wastes, is far shorter, if such risks are evaluated at all. Although I do not dispute as a conceptual matter the authors’ argument that all risks should be evaluated and weighed on a common basis, the practical and political realities push in a different direction. Our society views nuclear risks as different from the risks arising from other activities. Although experts may disagree, the safety restrictions on nuclear activities no doubt will remain much more stringent than those placed on other activities posing equivalent or greater risk.

Even more important, it is doubtful to me that society could readily follow the authors’ suggestion that limits like those being proposed for Yucca Mountain be applied to human activities more generally. As noted by other authors, the proposed EPA standards establish radiation limits for periods far into the future that are below background levels in many parts of the world. The authors correctly observe that although there may be a theoretical risk from doses at those levels, no detectable incremental risk has in fact been observed. The application of such a stringent standard to human activities more broadly would require widespread change.

To take the authors’ example, in order to limit the risks arising from increased CO2 concentrations in the atmosphere, such a standard might require shutting down much of our fossil-based electrical generation (coal fuels 54% of U.S. electrical generation) and imposing drastic restrictions on the use of petroleum-fueled automobiles or trucks. Similarly, the use of many natural resources (such as water and natural gas) more quickly than they can be replenished presents risks to future generations, if only from the denial of supply, that might exceed such a standard. The reduction of all possible future hazards to the low levels required by the Yucca Mountain standards would likely require radical alteration of current societal activity.

I agree that we should seek to undertake the long-term evaluation of risks and should strive for an appropriate balance of risks and benefits, but I doubt that the authors’ recommendation for the widespread limitation of risk to the level that would be established by the proposed Yucca Mountain standards can be accomplished without drastic reordering of societal priorities.

RICHARD A. MESERVE

President

Carnegie Institution of Washington

Washington, DC

Richard A. Meserve is a former chair of the Nuclear Regulatory Commission.


Combating pandemics

Henry I. Miller’s concerns about our ability to prepare for and respond to a potential flu pandemic encompass many salient points (“DEE-FENSE! DEE-FENSE!: Preparing for Pandemic Flu,” Issues, Summer 2006). The degraded U.S. vaccine production capacity and R&D are indeed critical issues. There is no doubt that vaccines are essential public health tools; however, other issues surrounding avian influenza may be more important to protect our well-being.

We do not know whether highly pathogenic H5N1 avian influenza (H5N1 HPAI) virus will become a human pandemic, but we do know that it is an avian pandemic right now. Imagine that we faced a toxin in a city’s water. We would not focus on stockpiling medications for people who might become ill. We would instead begin removing the toxin and also ramp up treatment capacities, just in case. Unfortunately, the national pandemic plan focuses almost exclusively on stockpiling human pharmaceuticals, doing little to decrease actual risks. It focuses roughly 86% of a $7.1 billion budget on human vaccines and therapeutics, 4% on surveillance, and 9% on state and local health preparedness. Evidently, knowing where a disease is and having localities prepared for outbreaks are only one-eighth as important as a questionable plan for stockpiling unproven vaccines! Worse yet, funding for the U.S. Department of Agriculture’s poultry protection work is less than 0.1% of the national plan’s budget.

H5N1 HPAI has caused over 4,000 outbreaks in some 56 countries across Asia, Africa, and Europe. It has killed 141 people, while hundreds of millions of domestic birds have died or were destroyed in control attempts. Beyond the animal welfare and environmental concerns raised by this catastrophe, those birds represented the primary quality protein source, and between 10 and 25% of family income, for the millions of people who keep backyard poultry in affected areas. Hence, avian influenza, to date, has caused more human morbidity and mortality by exterminating poultry, and thereby ruining livelihoods and increasing malnutrition, than by directly killing people.

Further, should today’s H5N1 HPAI reach North America, it is unlikely to become established in modern biosecure poultry farms. Few Americans handle high-risk birds, and cooking destroys the virus, so human exposures will be low. However, it will destroy markets, devastating people’s lives and rural economies. For example, when H5N1 HPAI hit Europe last fall, poultry markets plummeted and are still depressed by 10 to 70%. Additionally, a January 2006 Harvard survey found that 71% of respondents would stop or severely cut back on poultry purchases if H5N1 HPAI came to our shores, even if no citizens were infected. Hence, even without increased human infectivity (and so, with little need for human vaccines), this virus could severely damage our country.

Because “bird flu” is now a bird disease, isn’t keeping it out of our birds, before it gets to people, a better way to protect humans? Our current national plan could leave us with economic disaster and potential food insecurity. But we might eventually have plenty of unused vaccines in the fridge. A more balanced threat response plan is warranted.

BARRETT D. SLENNING

Animal Biosecurity Risk Management Group

North Carolina State University

Raleigh, North Carolina


Electric reliability

Starting with an unexpected premise—that the United States ranks toward the bottom among developed nations in terms of the reliability of its electricity service—the three leaders of Carnegie Mellon’s Electricity Industry Center lay out a compelling case for looking to the experience of other industries for ways to improve in the United States (Jay Apt, Lester B. Lave, and M. Granger Morgan, “Power Play: A More Reliable U.S. Electric System,” Issues, Summer 2006).

They observe that although the new Electric Reliability Organization (ERO), authorized in the Energy Policy Act of 2005, will be an important gesture toward boosting reliability, it is unlikely to do what is needed unless its federal regulators encourage it to do more than merely lock the status quo in place.

Specifically, the authors instruct us by shining a light on how the U.S. nuclear industry, in the post–Three Mile Island world, imposed on itself, through the establishment of the Institute of Nuclear Power Operations (INPO), a rigorous and metrics-driven commitment to promote excellence in both the safety and reliability of nuclear power plants. At the core of the commitment of the senior executives of power companies that make up INPO’s board is a recognition that “all nuclear utilities are affected by the action of any one utility.”

What’s striking about the authors’ message is that it challenges the industry to think and learn outside the box. For one thing, for the past decade, the conventional wisdom has held that the most important thing needed to improve electric system reliability was the passage by Congress of mandatory reliability standards, along with the establishment of an ERO. The authors say that unless the industry does a lot more than encode and enforce today’s standards, the system will continue to underperform. The authors warn that the new ERO, likely to spring from the well-established industry organization the North American Electric Reliability Council, will rely on industry consensus, rather than excellence, as the basis for setting reliability standards for the industry.

INPO, for example, has found that excellence in the reliable and safe operation of nuclear plants can be achieved best by combining performance objectives measured by metrics and requirements adopted by government regulators. As the authors say, “Industrywide performance objectives are difficult to meet every year, but provide goals and measurable outcomes; the [Nuclear Regulatory Commission] regulations provide a minimum floor for operations.”

The most instructive observations in the article are those that call for federal regulators to require the ERO to periodically review all standards and to modify its guidelines for investigating reliability events so that they stress human factors and corporate support of operational personnel; to impose on the ERO and the industry requirements for creating, collecting, and publishing more transparent reliability metrics; and to institute a “best-practices” organization outside of the ERO’s standards and compliance organization. Most important is the authors’ general warning that “the ERO will fail to improve reliability significantly unless generators, transmission and distribution owners, and equipment makers are convinced that they face large penalties for substandard performance.”

In today’s increasingly service-based U.S. economy, we cannot afford to have second-best electric system reliability. Now that we’ve enacted the legal underpinnings for mandating improved reliability, we need to push for excellence. Apt, Lave, and Morgan have given us a useful set of instructions for getting there.

SUSAN F. TIERNEY

Managing Principal

Analysis Group

Boston, Massachusetts

Susan F. Tierney is a former Assistant Secretary for Policy at the U.S. Department of Energy.


The educated engineer

As the executive director of the Accreditation Board for Engineering and Technology (ABET), the organization responsible for ensuring the quality of postsecondary engineering programs, I am troubled by the premise of “Let Engineers Go to College,” by C. Judson King (Issues, Summer 2006). King repeatedly references the “narrowness” of undergraduate engineering education and uses this purported narrowness as support for his argument that the master’s rather than the bachelor’s be the first professional degree in engineering. Although I am not taking a position on what the first professional degree should be, I am speaking against the premise as outlined in this article. I am afraid there is a significant disconnect between today’s actual undergraduate curriculum and King’s perception of it.

Those familiar with the evolution of ABET’s accreditation criteria will recognize King’s premise as the familiar—and then-warranted—war call of the 1980s and early 1990s. Today’s curriculum, however, includes the very elements that King describes as lacking. In fact, I will respond to King’s statements by quoting directly from ABET’s Criteria for Accrediting Engineering Programs.

King writes that engineers “must now look outward and interact directly with non-engineers”; ABET’s criteria for students include “an ability to function on multi-disciplinary teams.”

King writes that engineers “must understand and deal with other countries and other cultures…understand society and the human condition…[and have] exposure to a variety of outlooks and ways of thinking.” ABET’s criteria call for “the broad education necessary to understand the impact of engineering solutions in a global, economic, environmental, and societal context…[and] a knowledge of contemporary issues.”

King calls for “thinking and writing skills in a variety of contexts”; ABET calls for “an ability to design a system, component, or process to meet desired needs within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability…an ability to communicate effectively.”

King sees a need for “the wherewithal for flexibility and movement”; ABET recommends “a recognition of the need for, and an ability to engage in life-long learning.”

ABET-accredited engineering programs are expected to have significant involvement with their constituents, especially when developing and evaluating formal learning outcomes and professional objectives for their graduates. Those constituents can range from parents and alumni to graduate schools and employers. In short, if employers of a program’s graduates want them educated to have the “flexibility to move into non-engineering areas or management,” then the program must take that into consideration when developing its curriculum.

King states that “The environment for engineers and the nature of engineering careers in the United States are changing in fundamental ways.” Engineering education has also changed significantly in recent years in response to the needs of society and employers and the wishes of the students themselves. What are these changes? ABET commissioned Penn State’s Center for the Study of Higher Education to determine whether the engineering graduates of 2004 are better prepared than those of a decade ago. The answer was a resounding “yes”!

Whether the undergraduate degree will remain the first professional degree is a matter for ABET’s member societies to decide. Regardless of the outcome of that decision, however, I would argue that today’s undergraduate engineering curriculum is richer and more diverse than perhaps it’s ever been. It is also more flexible and allows educators and administrators to be innovative in achieving their unique programmatic and institutional goals.

GEORGE D. PETERSON

Executive Director

ABET

Baltimore, Maryland


In his excellent article, C. Judson King raises a number of issues that are extremely pertinent to the future of engineering education and practice. The notion of a broadly trained engineer, or a “renaissance engineer” with substantial international experience, is more relevant than ever in today’s world for two practical reasons (beyond the already compelling rationale that exposure to a variety of subjects is beneficial for the students’ own personal development):

(1) Meeting the enormous challenges, especially those pertaining to human health and the environment, facing peoples and societies worldwide will require the attention and skills of engineers. They, in turn, must have a good understanding of these problems and of the interrelationships between technological change and societal development; and students with undergraduate degrees in engineering must make their way into a number of professions, ranging from finance to medicine to law to management consulting to public service, where a broad perspective is very valuable.

But at the same time, engineering subjects are sufficiently complicated that it is impossible to gain mastery of an area in four years. Thus a Master’s in Engineering (M.Eng.) makes eminent sense as a way of giving students in-depth training in specific areas, including engineering approaches such as design. Although many schools already have separate M.Eng. or five-year combined bachelor’s and master’s programs, there needs to be a broader and more formal recognition (and accreditation) of an M.Eng. track, parallel to other professional disciplines.

We also believe that the engineering education profession has been generally remiss in ensuring that students get broad training at an undergraduate level, although many schools have made efforts in this regard. At Harvard, we offer a Bachelor of Arts (A.B.) track in addition to an Accreditation Board for Engineering and Technology–accredited Bachelor of Science (S.B.) in Engineering Sciences. The former option provides a foundation in engineering, but with a much broader range of choices outside of engineering, and the latter gives students a strong base in engineering fundamentals, with an opportunity to focus on specific areas. Such degree tracks, we believe, provide a reasonable combination of core competence and flexibility, but often there is tension between these two goals.

Furthermore, engineering students throughout the country rarely receive sufficient exposure to the interactions between technology and society: the ways in which technology can, and must, make a positive societal impact (in meeting, for example, the enormous challenge of climate change), and the ways in which societal concerns shape the development of technology (genetically modified crops being a prime example). We are in the process of developing courses that will further broaden our engineering curriculum and ultimately lead to a major in Technology and Society. Many other schools are also moving in the same direction. Such new options will give students some grounding in these kinds of issues in addition to traditional engineering skills.

(2) We also very much agree with King that engineering must become an integral part of the general education curriculum in universities. In today’s world, it is as important for students to learn about engineering and technology as it is to study history, literature, and the fine arts. Once again, we are taking steps at Harvard in this direction by offering courses that seek to transmit the philosophy, excitement, and poetry of engineering to a broader student population.

In the end, we believe that we as educators will be more successful, and engineering practice and research will be richer, if we explore such options that recognize not only the evolving nature of engineering and its place in the world but also the varied and changing needs of students. Designing products to meet specific needs under multiple constraints is the hallmark of a good engineer. Should we not be doing the same in engineering education?

VENKATESH NARAYANAMURTI

HOWARD STONE

MARIE DAHLEH

AMBUJ SAGAR

Division of Engineering and Applied Sciences

Harvard University

Cambridge, Massachusetts


Congratulations to C. Judson King for writing such a provocative article! The points are well founded and in my opinion correct.

The American Society of Civil Engineers has been working for the past 10 years to raise engineering educational expectations in the future. It has been slow going, but we are making progress.

Engineers are marching to the tune of irrelevance in the 21st century. We argue and debate details that miss the larger context and picture. We seem to have lost sight of thinking about and serving society at large. We seem destined to be technicians, happy to allow the corporate world to tell us what to do and when to do it. We are not participating in substantive issues affecting society, such as living in a world that has finite resources. When are we going to start letting our voices be heard regarding energy and the environment? We are doing irreversible damage to the globe while at the same time depleting our natural resources. Why can’t we be active participants in shaping tomorrow, versus reactive agents doing what we are told?

We are at a critical juncture in engineering education in terms of content, scope, and length. The question is, will we settle for what we have or reach for what we can and should be? Time will tell.

JEFFREY S. RUSSELL

Professor and Chair

Department of Civil and Environmental Engineering

University of Wisconsin–Madison


C. Judson King lays out the case for liberalization of the engineering curriculum in order to produce future engineers who are able to address the challenges of the 21st century. I agree with him. But I also think that engineering education should reenvision itself as a “service” discipline. King implies this possibility when he notes, “[t]he bachelor’s curriculum should provide enough variety that a graduate would also be well prepared for careers other than engineering.”

In my view, we need to go further. We need engineering classes expressly designed to be taken by non-majors. Such an approach serves two purposes: First, it increases awareness of engineering within the general population. Such awareness would be immensely valuable in a citizenry facing, as King notes, pervasive technologies and technological choices in virtually every aspect of their lives. Second, such an approach might serve to increase interest in engineering as a career field. For example, under the leadership of then-dean Ioannis Miaoulis, the Tufts University College of Engineering saw a net increase in the number of engineering majors after it began offering creative introductory courses. Miaoulis himself taught thermodynamics via a cooking class.

There are additional collateral benefits to be achieved. Once we have college-level engineering courses designed for non-majors, it should be easier to design pre-college “engineering” courses beyond the relatively few that currently exist. This will further increase awareness about engineers and what they do; provide practical frameworks within which to teach science, mathematics, and technology; and possibly increase the number of students interested in pursuing engineering as a career field.

Transitioning to such a regime will not be easy. There are significant pressures that militate against it, not least of which will be considerations of faculty workload. I assert that it is in the enlightened self-interest of the engineering profession to surmount these pressures. After all, the creed of professional engineers is to dedicate their professional knowledge and skill to the advancement and betterment of human welfare. Increasing awareness of the engineering profession is an underutilized means of achieving that end.

NORMAN L. FORTENBERRY

Director

Center for the Advancement of Scholarship on Engineering Education

National Academy of Engineering

Washington, DC

Glide Path to Irrelevance: Federal Funding for Aeronautics

The nation’s 100-year preeminence in aviation is in serious jeopardy. So, too, are the medium- and long-term health and safety of the U.S. air transportation system. The peril stems from a lack of national consensus about the federal government’s role in civilian aviation generally and about the National Aeronautics and Space Administration’s (NASA’s) role in aviation technology development in particular. Aeronautics—the first “A” in NASA—is now vastly overshadowed in resources, managerial attention, and political support by the agency’s principal mission of space exploration and discovery. Indeed, most people have no idea that NASA is the leading, and essentially the only, agency that is organizationally and technically capable of supporting the nation’s leadership in air transportation, air safety, and aircraft manufacturing.

The aeronautics community supports an expansive public R&D program, with NASA playing a lead role. But during the past seven or eight years, successive administrations and Congresses have reduced NASA’s aeronautics budget without articulating how the program should be scaled back. In these circumstances, NASA has tried to maintain a sprawling program by spreading diminishing resources across existing research establishments and many objectives and projects—too many to ensure their effectiveness and the application of their results.

With its plans to return humans to the Moon and eventually send them to Mars, the Bush administration has added to the problem by further reducing the aeronautics budget. The budget request for fiscal year (FY) 2006 and succeeding years anticipates a 50% reduction in NASA’s aeronautics R&D spending and personnel by 2010. The current NASA management understands that such resources will not support an expansive program and proposes to refocus efforts on fundamental research, avoiding costly demonstration projects. That may appear to be a reasonable strategy given the current outlook for funding, but it risks losing the support of industry stakeholders and other intended users of NASA-developed technologies. They operate in a risk-averse environment and often depend on outside suppliers to deliver well-proven technologies. This is especially the case in public goods research, such as safe, efficient air-traffic management and environmentally benign aviation operations, in which the argument for NASA involvement is strongest. Thus, with either its previous peanut-butter-spreading approach or its current fundamental research focus, we believe that the agency is on a glide path progressively leading to the irrelevance of the first A in NASA.

The administration’s 2006 budget proposal exposed the lack of agreement between the government and the aeronautics community about the federal government’s role in aeronautics. NASA’s former associate administrator, Victor Lebacqz, acknowledged as much in defending the president’s budget request before the House Science Committee. He said that there currently are two contending points of view. One point of view, reflected in a host of remarkably consistent blue-ribbon commissions and national panel reports, is that the aviation sector is critically important to national welfare and merits government support to ensure future economic growth and national competitiveness. This view implies an expansive public and private R&D program. The other view, reflected in the administration’s budget submission, is that the aviation industry is approaching maturity, with aviation becoming something of a commodity, and that the government can therefore retrench and leave technology development to the private sector. Lebacqz neglected to mention what in our view is the most compelling case for reinvigorating national investment in aerospace technologies: clear public-good objectives—mobility, safety, and environmental protection—served by NASA’s R&D involvement.

At any rate, the proposed retrenchment had a galvanizing effect. Congress rejected the proposed cut and restored NASA’s Aeronautics Research Mission Directorate (ARMD) budget. At the same time, Congress passed the NASA Authorization Act, which called on the administration to prepare a policy statement on aeronautics as a basis for further discussion with Congress. A new NASA administrator and associate administrator withdrew proposed plans to scale back support for aeronautics and set to work on a new plan for ARMD.

These were encouraging signs that a potentially fatal retrenchment could be avoided. But in his FY 2007 budget proposals for NASA, the president proposed a further 18% cut in aeronautics, to $724 million. This is in comparison to the $16.8 billion total NASA request, mostly targeted on space. If enacted, the resulting aeronautics budget in real terms would be less than one-half what it was in 1994.

Thus, it is long past time for a sustained high-profile national dialogue about the public value of national investments in aeronautics, distinct from space, and the very real continuing threat to NASA’s unique role and capabilities in aeronautics.

World leadership in air transportation and aircraft manufacturing is widely viewed as a cornerstone of U.S. economic welfare and national security. Department of Transportation statistics are revealing. U.S. residents already have the highest per capita level of air travel in the world, and use is rising steadily. Domestic commercial flights, the backbone of the U.S. travel industry, carried 660 million passengers in 2005. The Federal Aviation Administration predicts one billion passengers by 2015. General aviation already flies 150 million more passengers than do commercial flights. Air cargo has grown 7% annually since 1980, by far the fastest-growing mode of freight transportation during the past two decades. It now accounts for more than one-quarter of the overall value of U.S. international merchandise trade, steadily gaining ground on the maritime sector, which has a two-fifths share. JFK International Airport alone handled $125 billion worth of international air cargo in 2004; this total ranks ahead of the value of cargo through the Port of Los Angeles, the nation’s leading maritime port.

Aviation’s national economic impact does not stop with the air transport system. Aerospace exports in 2005 made up nearly 30% of all U.S. exports in the category that the Department of Commerce labels “advanced technology products.” Census Bureau trade figures indicate that aerospace, mainly airplanes and parts, delivered a surplus to the United States of nearly $37 billion in 2005, which significantly defrayed an $82 billion deficit in all other advanced technology categories. Indeed, for years aerospace has regularly logged the widest positive trade margin among U.S. manufacturing industries.

As for aeronautics’ military significance, the Department of Defense’s (DOD’s) guiding doctrine relies significantly on air superiority and on the rapid-strike and force-deployment capabilities of aircraft. Moreover, a variety of aeronautics technologies, such as stealth and unpiloted remote-sensing aircraft and airborne command and control systems, have transformed military operations not only in the air but on the ground and at sea. This centrality is reflected in procurement strategy: A 2005 RAND analysis found that the DOD spends on the order of a third of its procurement budget on aerospace, including about $40 billion every year to buy aircraft and other air systems.

Nonetheless, recent signs that the nation’s preeminence in aviation may be imperiled have occasioned deep concern. At least 12 studies of U.S. activity in aeronautics published during the past half decade by the National Academies and various industry and government bodies have called attention to the vulnerability of the United States’ traditional leading position. In its final report, the Commission on the Future of the United States Aerospace Industry, widely known as the Walker Commission, stated that “the critical underpinnings of this nation’s aerospace industry are showing signs of faltering” and warned bluntly, “We stand dangerously close to squandering the advantage bequeathed to us by prior generations of aerospace leaders.” In 2005, the National Institute of Aerospace, in a report commissioned by Congress, declared the center of technical and market leadership to be “shifting outside the United States” to Europe, with a loss of high-paying jobs and intellectual capital to the detriment of the United States’ economic well-being.

The clear message is that the United States must overcome a series of major challenges—to the capacity, safety, and security of the nation’s air transportation system, to the nation’s ability to compete in international markets, and to the need to reduce noise and emissions—if the nation’s viability in this sector, let alone international leadership, is to be ensured.

National needs fall into four broad areas. The first three involve classic public or quasi-public goods in which there is little disagreement that the federal government should play a central role. These categories are air traffic control, emissions and noise reduction, and air safety and security. In practice, the central federal role falls to NASA. No other organization remotely has the capabilities. Were it not for NASA, little R&D would be performed, key supporting infrastructure would not exist, and new technologies would not be developed because the benefits appropriable by private enterprise are too limited or too widely diffused to attract investment. The fourth category centers on commercial competitiveness. Here, there is much more policy debate about the role of the federal aeronautics enterprise. And the ideological tone of this debate carries over to, and dwarfs and distorts, discussion of the other three areas.

The following discussion highlights the four categories and the related policy debates.

Modernizing a strained air transportation system. Air transportation in the United States has, in a sense, fallen victim to its own popularity. The system is severely strained because of capacity limits, delaying tens of millions of passengers and many billions of dollars in cargo. In the face of growing demand, passenger airlines’ on-time records have been deteriorating. Only slightly more than three-quarters of all flights on major U.S. carriers in 2005 arrived within 15 minutes of being on time. To improve on-time performance records, airlines have extended scheduled flight times. Over short-haul routes (less than 500 miles), air travel is essentially no longer faster than earthbound alternatives: door-to-door travel speeds average between 35 and 80 miles per hour. The Walker Commission calculated that barring transportation system improvements, the delays will cost the U.S. economy $170 billion between 2002 and 2012, with annual costs exceeding $30 billion by 2015.

Yet demand represents only one side of the equation. The air-traffic management system, although judged to be safe, reliable, and generally capable of handling today’s traffic flow, largely relies on 1960s technology and operational concepts and resists innovation. The system’s limitations, along with other factors such as airport runway capacity, place severe constraints on future expansion. The skies and landing patterns will become even more cluttered as hundreds of air taxis join the fleets annually during the next decade, thanks to the introduction of relatively inexpensive so-called microjets. In a 2003 report, a National Academies’ committee was emphatic: “Business as usual, in the form of continued, evolutionary improvements to existing technologies, aircraft, air traffic control systems, and operational concepts, is unlikely to meet the challenge of greatly increased demand over the next 25 to 50 years.”

Significant technical hurdles remain:

  • The need to accommodate an increased variety of vehicles and venues. Such aircraft include air taxis, unpiloted aircraft, aircraft that use tilt-rotor propulsion systems to achieve nearly vertical takeoff and landing, “lighter-than-air” aircraft, and other aircraft that do not need runways.
  • Heightened security and reliability of voice, data, and video connections to in-flight aircraft.
  • Increased use of automation and satellites in handling traffic flow.
  • Use of synthetic vision, cockpit display of traffic information, and controller displays to improve awareness of aircraft separation.
  • Systems engineering and real-time information management and communication for moving from local traffic control to regional and nationwide traffic flow control and optimization.
  • Prediction and direct sensing of the magnitude, duration, and location of wake vortices.
  • Safety buffers against monitoring failures and late detection of potential conflicts.

Curtailing environmental degradation. Efforts during the past half century, primarily supported by the federal government, have paid off in significant reductions of both the noise and emissions emanating from turbine engines. But the growth of air traffic over the period has more than offset technological progress. In fact, objections to aircraft noise and emissions have been the primary barriers to building new airports or adding new runways at existing airports. These two steps are key to relieving pressure on the nation’s overburdened air transportation system, simultaneously increasing system capacity and travel speeds.

Technical needs here include:

  • Low-emission combustors to reduce emissions of nitrogen oxide and particulate matter
  • Alternative energy sources
  • Structures and materials to reduce drag and improve aerodynamics
  • Understanding aviation’s effect on climate and the need to balance nitrogen oxide and carbon dioxide emissions
  • Improved dispersion models, which look at how pollutants disperse in, react with, and interact with the atmosphere
  • Standardized methods for measuring particulate emissions
  • Improved engine and airframe noise-reduction technologies
  • Reducing sonic boom to enable a new generation of commercial supersonic transports

Enhancing safety and security. The air transportation system has an excellent safety record. From 2002 to mid-May 2006, U.S. commercial aviation, both passenger and cargo, saw a total of 59 fatalities resulting from eight events, yet carried well more than 2 billion domestic passengers on more than 40 million flights. However, as forecast demand accelerates during the next 25 to 50 years, there is little assurance that historical trends will continue. Indeed, National Transportation Safety Board Chairman Mark Rosenker released a report in late 2005 suggesting that near-misses between passenger jets at the nation’s most congested airports occur “with alarming frequency.” At least 326 “runway incursions,” close calls that could have led to accidents, occurred at U.S. airports in 2004. Rosenker put much of the blame on the technologies currently in use. Moreover, the 9/11 terrorist attacks did more than show the vulnerabilities of the air transportation system; they focused attention on new homeland security requirements that call for system capabilities not previously anticipated.

Looking forward, the roadmap of safety-related technology needs involves:

  • Fault-detection and control technologies to enhance aircraft airworthiness and resiliency against loss of control in flight
  • Prediction, detection, and testing of propulsion system malfunctions
  • Technologies to reduce fatalities from in-flight fires, post-crash fires, and fuel tank explosions, including self-extinguishing fuels
  • On-board weather and hazard identification
  • Systems using synthetic vision and digital terrain recognition to allow all-weather visibility
  • Technologies to reduce weather-related accidents and turbulence-related injuries
  • Understanding human error in maintenance and air-traffic control
  • Blast-resistant structures and luggage containers
  • More sensitive, accurate, and faster technology for passenger screening
  • Intelligent autopilots able to respond to anomalous flight commands
  • Reduced vulnerability of Global Positioning System guidance

Increasing the performance and competitiveness of commercial aircraft. Several recent reports share the view that European competition, which already has eroded U.S. dominance of commercial large jet sales, threatens one of the nation’s few standouts among value-added exports. The U.S. share of this global market plummeted from 71.1% in 1999 to about 50% today, with the U.S. company Boeing and the European company Airbus now trading the market leader spot from year to year. In 2005, Airbus took orders for more aircraft (1,055) than Boeing (1,002), though Boeing’s aircraft were higher in total value. One positive note is that Boeing’s new 787 Dreamliner appears to be competing well against the Airbus A350. U.S. companies that manufacture military airframes continue to dominate worldwide, in large part because of the sheer size of the Pentagon’s procurement budgets. But these companies rely increasingly on foreign suppliers, particularly those in countries targeted for sales, squeezing the second and lower tiers of the U.S. defense industrial base.

Two indicators of industry health are employment and R&D. Trends in both areas are worrisome. In February 2004, total U.S. aerospace employment hit a 50-year low of 568,700 workers, the majority in commercial aircraft, engines, and parts. This level was more than 57% below the peak of 1.3 million workers in 1989. By the end of 2005, employment had nudged back up to 626,000 workers. Meanwhile, the aerospace share of R&D investments dropped from about 19% of the total in 1990 to only 5% in 2002. The comparable figure in Europe was 7%. Although the United States can obtain advanced aircraft and air-traffic management systems from foreign suppliers if U.S. manufacturers fail to remain competitive, the implications of such dependency are troubling well beyond the clear national security concerns and beyond the aeronautics industry itself. These sectors have the highest economic and jobs multipliers because they draw on a wider variety of other high-value sectors—computers, electronics, advanced materials, precision equipment, and so on—than nearly any other industry.

In terms of providing public goods, the technical issues in this category relate primarily to improving aircraft efficiency and performance. Technological advances may help increase high-technology employment and reduce imports. Other potential positive public externalities include transportation time savings, increased system capacity, reduced energy dependence, reduced environmental impact, and reduced public infrastructure needs. Related technical challenges include:

  • Improved propulsion systems, both the evolution of high-bypass turbofan engines burning liquid hydrocarbon fuels and the development of engines using hydrogen as fuel
  • New airframe concepts for subsonic transports, supersonic aircraft, runway-independent vehicles, personal air vehicles, and uninhabited air vehicles
  • Composite airframe structures combining reduced weight, high-damage tolerance, high stiffness, low density, and resistance to lightning strikes
  • High-temperature engine materials and advanced turbomachinery
  • Enhanced airborne avionic systems
  • The application of nanotechnology for advanced avionics and high-performance materials
  • Passive and active control of laminar and turbulent flow on aircraft wings

Advances in each of these areas would be welcome. But given the severity of budget constraints, advancing every area is probably not possible. So where to set priorities? We urge focus on cross-cutting enabling technologies and on maintaining and upgrading NASA’s unique national testbed facilities. Some technologies under development will have application primarily in one of the four major categories described above. Other technologies, crucial in more than one area, play enabling roles across the board. The interrelation is such that improvement or lack of it in each technology can affect improvements in one or more of the others. The following general technical capabilities or enabling technologies are particularly central:

Modeling and simulation. A 2003 National Research Council report provides a detailed set of recommendations that would provide “the long-term systems modeling capability needed to design and analyze evolutionary and revolutionary operational concepts and other changes to the air transportation system.” Modeling and computer simulation are also significant factors in lowering manufacturing costs, which could help make commercial supersonic aircraft economically successful. Taking a broader view, modeling and simulation, among other information technology applications, will contribute not only to automating and integrating the air transportation system but also to reducing aviation transit time, fatal accident rates, noise and emissions, and the time-to-market cycle for new technologies.

Human factors. In aviation safety, human factors are critical and need more support. Air traffic controllers are central to the efficiency and safety of the airspace, especially during periods of inclement weather and poor visibility. Unfortunately, the stereotypical controller, harried and perhaps burned out, has a significant basis in aeromedical research reality. In addition, pilot errors, often related to fatigue, regularly lead to fatal crashes, including an American Connection commercial flight in late 2004 that left 13 dead. Such errors are particularly problematic in general aviation, leading to, for example, the accidents that killed U.S. Senator Paul Wellstone and John F. Kennedy Jr. With the expected increased automation in both individual aircraft and the total air transportation system, significantly better human interfaces and decision-aid technologies will be required to deal with the decisionmaking complexities and data overloads such systems will generate. The Walker Commission, concurring that human factors research could help “enhance performance and situational awareness . . . in and out of the cockpit,” predicted it would be a “primary contributor” to tripling the capacity of the U.S. air transportation system by 2025. In addition, research on the impact on people (and structures) of the sonic boom pressure waves created by supersonic flight is needed to inform both vehicle design and safety regulations.

Distributed aeronautics communications networks. In the final analysis, the most complex problem of all may well be the integration of national and worldwide air, space, and ground communication networks. A highly automated, high-throughput, secure, and accident-free national airspace system will be extraordinarily information-dense and highly geographically (and spatially) distributed and will meet decisionmaker needs for essentially real-time data analysis and presentation with worldwide on-demand availability. Technologies currently in use have only just dented the needs. To help in moving ahead, the National Academies’ Committee for the Review of NASA’s Revolutionize Aviation Program recommended exploring “revolutionary concepts” related to distributed air-ground airspace systems, including the distribution of decisionmaking between the cockpit and ground systems and reorganization of how aircraft are routed, with significant implications for airspace usage and airport capacity.

Even if NASA aeronautics program expenditures were stabilized and focused along these lines, managers of ARMD will continue to face severe constraints. The first limitation is high fixed personnel costs. Total expenditures (salaries and fringe benefits) for aeronautics workers, including large contingents of civil service personnel as well as contractors, were slightly more than $400 million in fiscal 2006. This total is in the neighborhood of 45% of the aeronautics budget, even after assuming that NASA-projected workforce reductions occur. Yet even that assumption is in jeopardy, because the latest congressional authorization of NASA’s budget restricted the agency’s ability to reduce its workforce.

The second limitation is that certain fixed administrative costs incurred by the agency arise from its responsibilities as defined in the Space Act, which obligates NASA to maintain certain critical national facilities (wind tunnels and the like) and aeronautics core competencies. Overhead costs such as general and administrative (G&A) costs are normally determined for each center and applied as a percentage of the labor cost involved in the program at that center. G&A costs in the proposed 2007 budget total more than $250 million at the four major aeronautics-related NASA labs alone: Ames, Glenn, Langley, and Dryden. G&A costs at the labs are high because of the obligation to support their aging facilities and equipment.

A third limitation is that an ever-growing part of NASA’s extramural program is earmarked by Congress for particular projects. In the past decade, the number of earmarks in NASA’s budget exploded more than 30-fold to 198. Earmarks totaled $568.5 million in fiscal 2006, fully eight times more in dollar terms than a decade before.

The issue is not so much whether any particular earmarked program or institution has technical merit or will substantially help a favored local constituency. Many surely do in isolation. But when it comes to effectively managing technology and ensuring maximum returns on public investments, NASA is rapidly losing the flexibility to optimize—by field, or level of risk, or potential users and suppliers, or time horizons, or national systemic needs, or core competencies, and so on—across its R&D portfolio. In our view, this risks turning NASA’s aeronautics activities from a coordinated strategic national portfolio into a hodgepodge of unrelated pet projects.

In short, after earmarks, personnel costs, and fixed G&A costs, NASA for fiscal year 2006 was left with roughly the same amount of money for discretionary R&D spending that several multinational high-technology firms each spend per week on R&D. At times, the results in the research trenches seem almost surreal. Langley administrators recently sent a memo to employees cutting all spending for gas on agency-related travel and for new wireless connectivity, as well as pushing back—again—roof repairs and badly needed information technology maintenance and upgrades. Outdated computers, no more wireless connectivity, and bad roofs at one of the nation’s premier research institutions?

To us, this is stunning neglect of the national interest in the future of aeronautics technologies. At current and proposed funding levels, NASA and the nation cannot hope to come close to fulfilling national needs in the face of an already strained air transportation system; fierce and increasing international competition in aircraft markets; the environmental challenges of noise, emissions, and fuel efficiency; and demands for improved air safety and homeland security. NASA’s ARMD is the nation’s only organizationally and technically capable option for overall leadership in aeronautics technologies. Unfortunately, it is largely hidden from public view, structurally, financially, and politically buried in a space agency on a mission to Mars. How many additional hundreds of millions of delayed air travelers, or how many more national commissions warning about the perilous future of U.S. aeronautics, will it take to get policymakers to put the A back in NASA?

Nuclear Deterrence for the Future

The most significant event of the past 60 years is the one that did not happen: the use of a nuclear weapon in conflict. One of the most important questions of the next 60 years is whether we can repeat this feat.

The success that we have had in avoiding the construction and deployment of nuclear weapons by a large number of nations has been far better than anybody anticipated 40 or 50 years ago. Likewise, the fact that nuclear weapons have not been used is rather spectacular.

The British scientist, novelist, and government official C.P. Snow was quoted on the front page of the New York Times in 1960 as saying “unless the nuclear powers drastically disarmed, thermonuclear war within the decade was a mathematical certainty.” I think he associated with enough scientists and mathematicians to know what mathematical certainty was supposed to mean. We now have had that mathematical certainty compounded more than four times without any use of nuclear weapons.

When Snow made that statement, I did not know anyone who thought it was outrageous or exaggerated. People were really scared. So how did we get through these 60 years without nuclear weapons being used? Was it just plain good luck? Was it that there was never any opportunity? Or were there actions and policies that contributed to this achievement?

The first time when it seemed that nuclear weapons might be used was during the Korean War, when U.S. and South Korean troops retreated to the town of Pusan at the southern tip of Korea. The threat was serious enough that Britain’s prime minister flew to Washington with the announced purpose of persuading President Truman not to use nuclear weapons in Korea.

The Eisenhower administration, or at least Secretary of State John Foster Dulles, did not like what he called the taboo on the use of nuclear weapons. He said “somehow or other we must get rid of this taboo on nuclear weapons. It is based on a false distinction.” And the president himself said “if nuclear weapons can be used for purely military purposes on purely military targets, I don’t see why they shouldn’t be used just as you would use a bullet or anything else.” The United States even announced at a North Atlantic Treaty Organization (NATO) meeting that nuclear weapons must now be considered to have become conventional.

U.S. policy had changed considerably by the time Lyndon Johnson became president. In 1964 he said, “Make no mistake. There is no such thing as a conventional nuclear weapon. For 19 peril-filled years no nation has loosed the atom against another. To do so now is a political decision of the highest order.”

Those 19 peril-filled years are now 60 peril-filled years. President Kennedy started, Johnson continued, and Secretary of Defense Robert McNamara spearheaded a powerful effort to build up enough conventional military strength within the NATO forces so that they could stop a Soviet advance without the use of nuclear weapons. Both Kennedy and Johnson had a strong aversion to the idea of using nuclear weapons.

During the 1960s, the Soviets officially ridiculed the idea that there could be a war in Europe that did not instantly—in their words, automatically—go nuclear, but their actions were very different from their public announcements. They spent huge amounts of money developing conventional weaponry, especially conventional air weaponry in Europe. This investment would have made no sense if a European war were bound to become nuclear, especially from the outset. It seems to me that the Soviets recognized the possibility that the world’s nations might get along without actually using nuclear weapons, no matter how many of them were in the stockpiles.

I find it noteworthy that as far as I know, the United States did not seriously consider using nuclear weapons in Vietnam. Of course, I’ll never really know what was in Richard Nixon’s or Henry Kissinger’s mind, but at least we know that they were not used.

Remarkably, Golda Meir did not authorize the use of Israel’s nuclear weapons when the Egyptians presented excellent military targets. At one point, two whole Egyptian armies were on the Israeli side of the Suez Canal, and there were no civilians anywhere in the vicinity. This was a perfect opportunity to use nuclear weapons at a time when it was not clear that Israel was going to survive the war. And yet they were not used. We can guess at some of the reasons, but I think it was Meir’s long-range view that it would be wise to maintain the taboo against the use of nuclear weapons because eventually any country could become a nuclear target.

When Great Britain was defending the Falkland Islands, it had several opportunities when nuclear weapons might have been effective, but Margaret Thatcher decided that they were not an option. The Soviets fought and lost a degrading and demoralizing war in Afghanistan without resorting to nuclear weapons. Some observers have argued that the Soviets had no viable targets; I believe that they did have opportunities but nevertheless decided against using nuclear weapons. I believe that the underlying rationale against their use was the same for these countries as it was for Lyndon Johnson: The many peril-filled years in which nuclear weapons were not used had actually become an asset of global diplomacy to be treasured, preserved, and maintained.

Maintaining the streak

Will the world be able to continue this restraint as more nations acquire nuclear weapons? Since Lyndon Johnson’s statement, India and Pakistan have developed nuclear weapons. Even in my lifetime, I expect to see a few more countries do so. How do we determine whether these new nuclear powers share the commitment to avoid the use of these weapons?

From a U.S. perspective, two ideas are worth considering. The country should reconsider its decision not to ratify the Comprehensive Test Ban Treaty. It was an opportunity to have close to 180 nations at least go through the motions of supporting the principle that nuclear weapons are subject to universal abhorrence. Nominally, the treaty was about testing, but I believe that it could have served a more fundamental purpose by essentially putting another nail in the coffin of the use of nuclear weapons.

I also believe that even if U.S. leaders believe that there are circumstances in which they would use nuclear weapons, they should not talk about it. And if they want to develop new weapons, they should do so as quietly as possible—even avoiding congressional action if possible. The world will be less safe if the United States endorses the practicality and effectiveness of nuclear weapons in what it says, does, or legislates.

The National Academy of Sciences Committee on International Security and Arms Control (CISAC), the Ford Foundation, the Aspen Institute, and other institutions have sponsored numerous international meetings on arms control, and these meetings have almost always included representatives of India and Pakistan. I believe that it was extremely important for them to hear at firsthand from U.S. scientists and political leaders about the dangers associated with the use of nuclear weapons. I believe that India and Pakistan also learned from watching Cold War leaders forego the use of those weapons because they feared where it might lead. Because I think that India and Pakistan have absorbed some of the lessons of this experience, I worry less about what might develop in an India-Pakistan standoff.

Now it is important to teach the Iranians that if they do acquire nuclear capability, it is in their national interest to use such weapons only as a means to deter invasion or attack. The president of Iran was recently quoted as saying that Iran still intended to wipe Israel off the face of the earth. My guess is that if they think about it, they are not going to try to do it with nuclear weapons. Israel has had almost a half century to think about where to store its nuclear weapons so that it would be able to launch a counterattack if its existence is threatened. Iran does not want to invite a nuclear attack. Every Iranian should be aware that the use of nuclear weapons against Israel or any other nuclear power is an invitation to national suicide. It is important that not only a few intellectuals in Iran understand this, but that people throughout the country share this awareness. I would like to see a delegation of Iranians participating in future CISAC meetings.

All new nuclear powers would benefit from knowing that it took the United States 15 years after the development of nuclear weapons to begin to think about the security and custody of the weapons themselves. This did not happen until Robert McNamara had his eyes opened by a study done by Fred Ikle of the RAND Corporation that revealed that U.S. nuclear weapons did not even have combination locks on them, let alone any police dogs to guard them on German airfields. McNamara initiated what became known as “permissive action links.” It took about four years to have the permissive action links developed to his satisfaction and then finally installed on the land-based warheads. If the Iranians do develop nuclear weapons, it is critical that it not take them 15 years to think about the custodial problems. Will control be granted to the army, navy, air force, or palace guard? Will security be adequate at storage facilities? We have witnessed enough instability across the globe to know that governments fail and that the branches of the armed forces sometimes take different sides in civil conflicts. Iran needs to think through what will happen to the weapons in the event of a government failure. Will some part of the government or military be able to maintain control, or will they watch Israeli commandos arrive to take charge of the weapons?

A nuclear Iran would need to act rapidly on questions of security, custody, and the technological capacity to disarm the weapons if they lose control of them. CISAC could be of enormous help to the Iranians in relaying the lessons from decades of U.S. experience in learning how to manage custody of nuclear weapons.

An even more important task will be to prepare for the extremely remote possibility that a terrorist group could acquire such weapons. It will be essential but very difficult to persuade them that nuclear weapons are valuable primarily as means of persuasion and deterrence, not destruction.

About 20 years ago, I began thinking about how a terrorist group might use a nuclear weapon for something other than just blowing up people. A good example occurred during the Yom Kippur war of 1973. The United States resupplied Israel with weapons and ammunition, but the United States was not allowed to fly from European NATO countries or to refuel its planes in Europe. All of the refueling was done in the Azores. It struck me then that if I were a pro-Palestinian terrorist and had a nuclear weapon, I would find a way to make clear that I had it and that I would detonate it near the air fields in the Azores if the United States did not stop landing planes loaded with ammunition for Israel. This strategy had a number of fallback positions: If it failed to deter the United States from refueling in the Azores, it might deter Portugal, which owned the Azores, from allowing the refueling to take place, and if that failed, it might deter the individuals working at the airport and doing the refueling. If we ever have to face the prospect of nuclear-armed terrorists, I want them to be thinking along these strategic lines rather than thinking about attacking Hamburg, London, or Los Angeles.

My hope for CISAC is that it will see its mission broadly: educating itself, U.S. leaders, and anyone who will be in a position to influence the decision to use a nuclear weapon. Thinking of extending this mission to Iran is difficult, and to North Korea even more so. I think it is important to keep in mind that if terrorists do acquire nuclear weapons, it would probably be by constructing them after acquiring fissile material, and that means that there is going to be quite a high-level team of scientists, engineers, and machinists of all kinds working over a significant period of time, probably in complete seclusion from their families and jobs with nothing to do but think about what their country and other countries are going to do once a bomb is ready. And I think they will probably come to the conclusion that the last thing they want to do is waste it killing Los Angelenos or Washingtonians. I believe they will think about sophisticated strategic ways to use a weapon or two or three if they have them.

This means we may be living in a world for the next 60 years in which deterrence is just as relevant as it was for the past 60 years. One difference will be that the United States will find itself being deterred rather than just deterring others. Although the United States likes to think of itself as always in the driver’s seat, in reality it was deterred by Soviet power from considering the use of nuclear weapons in several instances. I believe that the United States did not seriously consider rescuing Hungary in 1956 or Czechoslovakia in 1968 because it was sufficiently deterred by the threat of nuclear war.

My hope is that the United States will continue to succeed in deterring others from using nuclear weapons, and that others will succeed in deterring the United States.

Preventing Catastrophic Chemical Attacks

A terrorist attack on a single 90-ton chlorine tank car could generate a cloud of toxic gas that travels 20 miles. If the attack took place in a city, it could kill 100,000 people within hours. Now multiply that nightmare by 100,000. That’s the approximate number of tank cars filled with toxic gases shipped every year in the United States.

We are vulnerable to catastrophic acts of chemical terrorism such as this plausible scenario. There are 360 sites sprinkled across the United States at each of which a terrorist attack could harm or kill more than 50,000 people. Many of them are in heavily populated areas of New Jersey, New York, Pennsylvania, Texas, Louisiana, and California. Yet the federal government has made no progress whatever in addressing this threat in the five years since the 9/11 attacks.

Private industry has taken a few steps to make us more secure from chemical terrorism. The American Chemistry Council (ACC), which represents many of the nation’s chemical manufacturing plants, has required its member facilities to undertake a set of security initiatives. They have invested several billion dollars in security upgrades since 2001. Although these private-sector investments may help with both safety (prevention of accidents) and security (prevention of attacks), two key shortcomings keep the ACC’s efforts from providing the country with the level of security commensurate with the threat. First, the facilities that have invested in improved security number only 1,100, a small fraction of the 15,000 facilities that store or produce large amounts of hazardous chemicals. Even worse, they are not, by and large, the facilities that would injure or kill the largest numbers of people if they were attacked. Second, the stark reality is that a resourceful terrorist group can compromise any chemical facility or shipment that it puts its mind to.

The government finally appears to have become aware of the frightening state of our chemical plant security (although, as explained later, not of chemical transportation security). President Bush supports bills in the House and Senate aimed at hardening security at chemical plants based on the risks they pose. The legislation calls for the Department of Homeland Security (DHS) to place chemical plants into different tiers according to the results of offsite consequence analyses and set tier-dependent security standards. In turn, chemical plants would be required to assess their own vulnerabilities and choose how to improve security, if need be, to satisfy these standards. Facilities in the highest-risk tier would begin first.
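To make the tiering idea concrete, here is a minimal sketch, in Python, of how facilities might be sorted into tiers based on their worst-case offsite consequence estimates. The thresholds and tier labels are hypothetical illustrations; the pending bills leave those details to DHS.

    # Hypothetical sketch of risk-based tiering; thresholds and labels are invented
    # for illustration and are not taken from the pending legislation.
    def assign_tier(people_at_risk: int) -> str:
        """Assign a security tier from a worst-case offsite consequence estimate."""
        if people_at_risk >= 50_000:
            return "Tier 1: highest risk, strictest standards, first to comply"
        if people_at_risk >= 10_000:
            return "Tier 2"
        if people_at_risk >= 1_000:
            return "Tier 3"
        return "Tier 4: lowest risk"

    print(assign_tier(120_000))  # e.g., a large facility near a city center
    print(assign_tier(4_500))    # e.g., a small facility near farmland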

Unfortunately, the bills do not go far enough. We need an approach that is both risk-based (so that investments in protection are made at the most dangerous facilities) and focused on the use of alternative, much safer chemicals (so that true prevention can be achieved). The bills focus only on the degree of risk posed by a plant, not on safer processes.

The three main culprits

Chemical facilities vary widely in their attractiveness to terrorists. Obviously, measures should be aimed at plants and rail shipments that could generate mass casualties if attacked. As a result of the Clean Air Act, chemical facilities have already analyzed their potential worst-case offsite consequences from a chemical accident. That has made it possible to identify the most dangerous needles in the haystack of chemical facilities and shipments. We know that the most dangerous chemicals are not the flammable ones, such as liquid propane gas. The larger danger comes from chemicals that, on release, form heavier-than-air clouds that can travel 10 to 20 miles. We also know that there are three main culprits: chlorine (used in the production of building materials); anhydrous ammonia (used for agricultural fertilizer); and, worst of all, hydrofluoric acid (used in the production of transportation fuels). There may be several plants using less common chemicals that are equally dangerous. But their offsite consequence analysis data are exempt from the Freedom of Information Act and are not on the Internet. That makes it difficult for the general public to identify these facilities, but it makes it difficult for the terrorists, too.

How can we prevent or mitigate an attack on sites with these particular chemicals? One approach is to harden security at the facility. Although installing fences and security guards and performing background checks on employees are marginally useful, they will not prevent a suicide attack by a group of terrorists using several large trucks or a small airplane. Another approach is to place water curtains around large storage tanks. These are essentially elaborate sprinkler systems that attempt to force the gas to the ground before it can form a toxic cloud. These systems can be effective against the leaks or slow releases that are typical of many industrial accidents, but they will not help forestall a massive release resulting from a truck or airplane crashing into a storage tank, which is the most likely scenario for a terrorist attack. Moreover, these water curtains, which are activated by gas detection, probably would not work after such an attack.

With neither security hardening nor water curtains offering a robust response to a terrorist attack, we must resort to measures that prevent a toxic cloud from being released in a heavily populated area. Products and processes need to be redesigned so that they cannot be the catalyst for a deadly plume.

Particularly problematic are the nation’s 148 oil refineries. Of these, 50 use hydrofluoric acid in their alkylation process, which provides high octane while maintaining low sulfur and nitrogen content. Although only 4% of the nation’s hydrofluoric acid is used by these 50, they top the list of most dangerous chemical facilities because the scale of their operations is immense. Some refineries store hundreds of thousands of pounds of hydrofluoric acid, which could seriously harm or kill hundreds of thousands of people. Collectively, these 50 refineries have more than 10 million pounds of hydrofluoric acid on their premises.

Fortunately, there is an obvious fix. Quite simply, we need to discontinue the use of unmodified hydrofluoric acid in the alkylation process. We are on the way to doing that. The other 98 refineries use two safer alternatives. First, the alkylation process can be converted from using hydrofluoric acid to using sulfuric acid, which does not form a dense cloud on release. Indeed, 86% of the new alkylation units introduced in the 1990s used sulfuric acid, which also leads to a reduction in the fractionation capacity required. The conversion cost is $20 million to $30 million per refinery, but a good-sized oil refinery refines the equivalent of approximately one billion gallons of gasoline annually. A second approach is to modify the hydrofluoric acid with an agent that causes about three-quarters of the acid in the cloud to fall to the ground.
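To put the conversion cost in perspective, a back-of-the-envelope calculation using the figures above (and an assumed 10-year amortization period, which is an illustrative assumption rather than an industry figure) suggests the cost amounts to a fraction of a cent per gallon.

    # Rough illustration of the refinery conversion cost per gallon of gasoline.
    # The $20-30 million cost and ~1 billion gallons/year come from the text;
    # the 10-year amortization period is an assumption made only for illustration.
    conversion_cost_usd = 25e6        # midpoint of the $20-30 million estimate
    annual_output_gallons = 1e9       # output of a good-sized refinery
    amortization_years = 10           # assumed write-off period

    cents_per_gallon = 100 * conversion_cost_usd / (annual_output_gallons * amortization_years)
    print(f"Amortized conversion cost: {cents_per_gallon:.2f} cents per gallon")
    # Prints roughly 0.25 cents per gallon under these assumptions.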

Dealing with chlorine and anhydrous ammonia is much more difficult. These two chemicals are used in approximately half of the 15,000 facilities with large amounts of dangerous chemicals. Although some water and sewage treatment plants have successfully substituted hypochlorite bleach or ultraviolet light for chlorine, these treatment plants represent only 6% of the nation’s chlorine use. Similarly, some paper and pulp manufacturers have switched to less dangerous alternatives such as chlorine bleach, but the paper industry uses only 5% of total industrial chlorine.

The place to focus efforts is polyvinyl chloride (PVC) plastics manufacturing, which consumes approximately 40% of the nation’s chlorine; another 40% is used in the production of a wide variety of organic and inorganic chemicals. Seventy-five percent of PVC is used in buildings, primarily in piping, siding, and roofing membranes. Because of the environmental hazards associated with PVC (dioxin is generated during production, and PVC releases deadly gases during a fire), there are already several alternatives. For example, piping can be made of cast iron, concrete, vitrified clay, or high-density polyethylene. Siding alternatives include fiber-cement board, stucco, brick, and polypropylene.

Similarly, although the anhydrous ammonia in some pollution-control processes has been replaced by urea or aqueous ammonia, 72% of ammonia use is in agricultural fertilizers. Several thousand facilities make fertilizers. Unlike oil refineries, these facilities tend to be small and near farmland rather than near large population centers. Ammonia-free alternatives that use urea or liquid nitrogen are in the fertilizer marketplace.

Transportation hazards

Chlorine and anhydrous ammonia are not stored in the vast quantities that hydrofluoric acid is at oil refineries, and the biggest threat from these chemicals may be during their transport in tank cars. To prevent these well-marked tank cars from injuring or killing tens of thousands of people, hazardous shipments must be routed away from densely populated areas.

But when Washington, DC, tried to do just that in January 2005 by passing a bill preventing hazardous shipments in its downtown, it was sued by the rail company CSX Transportation, which got help, incredibly enough, from DHS. Although a federal judge ruled in favor of the city in April 2005, a three-judge appellate panel reversed this decision a month later.

Why does CSX oppose the city’s desire to make its citizens safer? Rerouting in this oligopolistic industry, in which many customers have only one carrier to choose from, would sometimes require handing off shipments between companies. If that practice became widespread, it would eat into rail industry profits. Sen. Joseph Biden (D-DE) has introduced legislation that would force the rerouting of hazardous rail shipments away from likely terrorist targets, but the bill lies dormant.

Congress must move beyond the approach embedded in the pending legislation, because it would provide only the illusion of security. To achieve true security, legislation should require the 50 oil refineries to convert to either sulfuric acid or modified hydrofluoric acid. It should also eliminate hazardous shipments of dense cloud–forming toxic gases in urban areas. Rather than shift the entire burden of protecting these gases onto the transportation sector, we need to dramatically reduce the demand for dense cloud–forming toxic gases such as chlorine and anhydrous ammonia by requiring the use of safer technologies in cases where such alternatives already have a proven track record, such as ammonia-free fertilizers and PVC-free building products.

This nonvoluntary approach will level the playing field, allowing the chemical industry to pass these costs on to consumers. The Chemical Security and Safety Act, introduced in March 2006 by Sen. Frank Lautenberg (D-NJ), would require certain plants to use safer technologies if practical alternatives exist. Coupled with Biden’s legislation banning hazardous rail shipments in densely populated areas, it would go a long way toward reliably reducing the possibility of a catastrophic chemical attack.

Unlike catastrophic biological or nuclear terror attacks, catastrophic chemical attacks can be avoided by removing the targets. The White House and Congress must choose which should prevail: special interests or the safety of our citizens.

A New Science Degree to Meet Industry Needs

All of us are aware of urgent calls for new and energetic measures to enhance U.S. economic competitiveness by attracting more U.S. students to study science, mathematics, and engineering. In the case of scientists, one reason for the lack of science-trained talent prepared to work in industry (and some government positions) is that the nation does not have a graduate education path designed to meet industry’s needs. A college graduate with an interest in science has only one option: a Ph.D. program, probably followed by a postdoctoral appointment or two, designed to prepare someone over the course of about a decade for a university faculty position. If the need for scientists to contribute to the nation’s competitiveness is real, the nation’s universities should be offering programs that will prepare students in a reasonable amount of time for jobs that will be beneficial to industry. What is needed is a professional master’s degree.

The demand for more science-trained workers appears to be real. In 2005, 15 prominent business associations led by the Business Roundtable called for whatever measures are necessary to achieve no less than a 100% increase in the number of U.S. graduates in these fields within a decade. In 2006, a panel of senior corporate executives, educators, and scientists appointed by the National Academies called for major national investments in K-12 science and mathematics, in the education of science and math teachers, and in basic research funding to address what it saw as waning U.S. leadership in science and technology. This National Academies report was endorsed by leading education associations and served as a basis for several legislative proposals (such as the Bush administration’s American Competitiveness Initiative) now moving through the Congress. Supportive articles and editorials have dominated journalistic coverage of these arguments.

Few would contest the general proposition that it would be highly desirable for the nation to encourage more of its students to become knowledgeable about science, mathematics, and technology—at all levels of education, from K-12 through graduate school. The current century, like the past half-century, is one in which all citizens, no matter their level of education, need to possess considerable understanding of science and technology and to be numerate as well as literate. Indeed, it would be reasonable to argue that such knowledge is now close to essential if young Americans are to become knowledgeable citizens who are able to understand major world and national issues such as climate change and biotechnology that are driven by science and technology, even if their own careers and other activities do not require such knowledge. Efforts to improve math and science teaching at the K-12 and university levels make a great deal of sense.

So too do calls for substantial federal support for basic scientific research. Such research is a public good that can produce benefits for all, yet it is unlikely to be adequately supported by private industry because its economic value is so difficult for individual firms to capture. Moreover, there is considerable truth in the various reports’ claims that support for basic research in the physical sciences and mathematics has lagged well behind the dramatic increases provided for biomedical research.

The key question, though, is not whether the goals are appropriate but whether some of the approaches being widely advocated are the best responses to claimed “needs” for scientists and engineers with the capabilities needed to maintain the competitiveness of the U.S. economy. Improving the quality of U.S. K-12 education in science and math is indeed a valuable mission. But if the proximate goal is to provide increased numbers of graduate-level scientists of the kinds that nonacademic employers say they want to hire, a focus on K-12 is necessarily a very indirect, uncertain, and slow response.

Increased federal funding for basic research also is a worthwhile contribution to the public good, but its effects on graduate science education would be primarily to increase the number of funded slots at research universities for Ph.D. students and postdocs who aspire to academic research careers. Extensive discussions with nonacademic employers of scientists indicate that they do wish to recruit some Ph.D.-level scientists (more in some industries, fewer in others), but also that they value the master’s level far more highly than do most U.S. research universities.

In addition to the graduate-level science skills that a strong master’s education can deliver, employers express strong preferences for new science hires with

  • broad understanding of relevant disciplines at the graduate level and sufficient flexibility in their research interests to move smoothly from one research project to another as business opportunities emerge
  • capabilities and experience in the kind of interdisciplinary teamwork that prevails in corporate R&D
  • skills in computational approaches
  • skills in project management that maximize prospects for on-time completion
  • the ability to communicate the importance of research projects to nonspecialist corporate managers
  • the basic business skills needed to function in a large business enterprise

In light of employers’ stated needs, there appears to be a yawning gap in the education menu. U.S. higher education in science, often proudly claimed as the world leader in quality, is strong at the undergraduate and doctoral levels yet notably weak at the master’s level.

No one planned it this way. The structure of the modern research university is a reasonable response to the environment created by the explosive growth of federal research in the decades after World War II. But that period of growth is over, the needs of industry have evolved and become more important, and now the nation faces a gap that has significant negative implications for the U.S. science workforce outside of academe. That gap can be filled with the creation of a professional science master’s (PSM) degree designed to meet the needs of today and of the foreseeable future.

For at least the past half-century, even outstanding bachelor’s level graduates from strong undergraduate science programs have been deemed insufficiently educated to enter into science careers other than as lowly “technicians.” Over this period, rapid increases in federal support for Ph.D. students (especially as research assistants financed under federally supported research grants) propelled the Ph.D. to become first the gold standard and then the sine qua non for entering a science career path. More recently, and especially in large fields such as the biomedical sciences, even the Ph.D. itself has come to be seen as insufficient for career entry. Instead, a postdoc of indeterminate length, also funded via federal research grants, is now seen as essential by academic employers of science Ph.D.s.

Over the same period, the average number of years spent in pursuit of the Ph.D. lengthened in many scientific fields. More recently, the number of years spent in postdoc positions has also increased. The result has been a substantial extension of the number of years spent by prospective young scientists as graduate students and postdocs. Postgraduate training is now much longer for scientists than for other professionals such as physicians, lawyers, and business managers.

The lengthening of time to Ph.D. and time in postdoc coincided with deteriorating early career prospects for young scientists. Indeed, many believe that the insufficiency of entry-level career positions for recent Ph.D.s was itself an important cause of the lengthening time to Ph.D. and lengthening postdoc periods. As Ph.D.-plus-postdoc education became longer and career prospects for those pursuing them more uncertain, the relative attractiveness of the Ph.D. path in science waned for many U.S. students, even those who had demonstrated high levels of achievement as undergraduate science majors.

Yet there was this odd gap. Had the same talented students chosen to pursue undergraduate degrees in engineering, they would have had the option of earning one of the high-quality engineering master’s degrees that are highly regarded by major engineering employers. But there was no such alternative graduate education path for those who would have liked to pursue similar career paths in science.

Estimates by the National Science Board suggest that surprisingly small proportions (well under one-fifth) of undergraduate majors in science continue on to any graduate education in science. This low level of transition to graduate education has prevailed during the same period that numerous reports have been sounding alarms about the insufficiency of supply of U.S.-trained scientists.

What has happened in the sciences, though not in engineering, is that as heavy research funding has made the Ph.D. the gold standard, the previously respectable master’s level of graduate education has atrophied. Indeed, many graduate science departments have come to see the master’s as a mere steppingstone to the Ph.D. or as a low-prestige consolation prize for graduate students who decide not to complete the Ph.D. At least some members of graduate science faculties came to look down their collective noses at the master’s level, and some graduate science departments simply eliminated the master’s degree entirely from their offerings.

The PSM degree, a newly configured graduate science degree developed at numerous U.S. universities with financial support from the Alfred P. Sloan Foundation and the Keck Foundation, was designed to meet the strongly expressed desires of nonacademic science employers for entry-level scientists with strong graduate education in relevant scientific domains, plus the knowledge they need to be effective professionals in nonacademic organizations. In only a few years, the number of PSM degrees has grown from essentially zero to more than 100, offered at over 50 campuses in some 20 states. They are by no means clones of one another, but they generally share many core characteristics.

They are two-year graduate degrees, generally requiring 36 graduate credits for completion. The programs are course-intensive, with science and math courses taught at the graduate level. In addition, many PSM degrees offer cross-disciplinary courses (such as bioinformatics, financial mathematics, industrial mathematics, biotechnology, and environmental decisionmaking). Most PSM curricula include research projects rather than theses; some of the projects are individual, some are team-based. Courses in business and management are also common. Depending on the focus of the PSM degree, there may also be courses in patent law, regulation, finance, or policy issues. Finally, many PSM programs provide instruction in other skills important for nonacademic employment, such as communication, teamwork, leadership, and entrepreneurship.

One of the most important elements of nearly all PSM degrees is an internship with an appropriate science employer; most of these take place during the summer between the first and second year. These offer PSM students the chance to see for themselves what a career in nonacademic science might be like, and they likewise afford employers the opportunity to assess the potential of their PSM interns as future career hires.

Many industry and government scientists have been enthusiastic supporters of emerging PSM degree programs in fields relevant to their own activities. They serve as active advisors to PSM faculty, offering guidance on both the science and nonscience elements of the curriculum. More than 100 employers have offered PSM students paid internships, and many have mentored them in other ways. Employers often provide tuition reimbursement to employees who wish to enhance their scientific skills by undertaking a PSM degree while working full-time, and they frequently serve as champions for PSM initiatives with university administrators and state and local officials.

Perhaps most important, employers have been offering attractive entry-level science career paths to PSM graduates. Data are incomplete, but we know that since 2002 at least 100 businesses have hired PSM graduates at good starting salaries by the standards prevailing for scientists: generally in the $55,000 to $62,000 range. In addition, over 25 government agencies have hired PSM graduates, starting them at $45,000 to $55,000. Hiring employers indicate that they value PSM graduates’ scientific sophistication, but also their preparation to convey technical information in a way that is comprehensible to nontechnical audiences and, more generally, to work effectively with professionals in other fields such as marketing, business development, legal and regulatory affairs, and public policy.

Meanwhile, faculty involved in PSM programs have found the students to be highly motivated additions to their graduate student numbers. The programs have also facilitated valuable faculty contacts with business, industry, and government. Finally, at the national level, the rapidly increasing PSM movement has begun to contribute efficiently and nimbly to U.S. science workforce needs.

PSM curricula are configured by their faculty leaders to respond to the human resource needs expressed by nonacademic employers of scientists. In the fast-changing scene of scientific R&D, the PSM degrees are attractively agile. Universities that seek to contribute to economic advance in their regions see the PSM degrees as responsive to nonacademic labor markets for science professionals in ways that are quite attractive to science-intensive employers. Finally, as two-year graduate degrees, PSM programs are “rapid-cycle” programs that can respond quickly to calls for increased numbers of science professionals.

If PSM degrees produce science-educated professionals with capacities that nonacademic employers value, why have they not yet been embraced by all universities with strong science graduate programs? Are there reasons why one might expect some faculties to be skeptical or negative about such new degrees?

There is, first, inevitable inertia to be overcome, rendered more powerful because of the diminished status of master’s science education over the past decades. Nonetheless, there have been numerous energetic and committed faculty members who have perceived a strong need for this kind of graduate science education. For them and others, however, the incentive structures do not generally reward such efforts. As has often been noted, research universities and federal funding agencies generally reward research—publications, research grants and the overheads that accompany them, and disciplinary awards—rather than teaching, and certainly tenure decisions relate primarily to research achievements. Master’s-level students themselves often are seen as contributing little to faculty research activities, since their focus is primarily on graduate-level coursework rather than working as research assistants on funded research grants.

One difference among research universities may be the extent to which they envision their role as contributing directly to the economic advancement of their region or country. Among the leaders in PSM innovation and growth have been a number of prominent public and/or land-grant research universities such as Georgia Tech and Michigan State. From their early days, these and similar institutions have seen themselves as engines of economic prosperity, and important parts of their financial resources come from state legislatures that consider such economic contributions to be essential. One can also think of a number of leading private research universities that include regional economic prosperity among their goals, and it is notable that some of these universities have also pursued PSM degree programs.

With over 100 PSM degrees in operation or development around the country and the pioneer programs of this type generally prospering, one could easily conclude that there has been at least a proof of concept. Still, the programs are mostly quite new and relatively small, and hence the numbers of PSM graduates are still modest.

The challenge over the coming few years is to move the PSM concept to scale. This will not be easy, although there is reason for optimism. Ultimate success will depend on recognition by both government science funders and universities of the odd gap that prevails in U.S. graduate science education, as well as on continuation of the attractive early career experiences of PSM graduates and enthusiasm for their capabilities on the part of science-intensive employers.

The recent series of reports urging action to encourage more U.S. students to study science and mathematics could be well answered by support for PSM initiatives. In addition to the large amount of energy and money the nation may devote to convincing more teachers and young people to pursue undergraduate education in science and math, it would make a great deal of sense to focus attention on the large number of science majors who already graduate from college each year but decide not to continue toward graduate education and careers in science. The PSM initiatives currently under way at more than 50 U.S. universities offer an alternative pathway to careers in science that could transform this situation, and one with real prospects for near-term success.

Ethics and Science: A 0.1% Solution

Science has an ethics problem. In South Korea, Woo Suk Hwang committed what is arguably the most publicized case of research misconduct in the history of science. The range of Hwang’s misconduct was unusual but not extraordinary. He misjudged the ethical challenges presented by a newly developing field of research, he paid insufficient attention to accepted standards of responsible conduct, and he had a role in the fabrication of many key research findings. What made this case extraordinary was that it involved human embryonic stem cell research, a field of inquiry that is being watched more closely by the global public than perhaps any before it. The impact of this scandal is profound for Hwang, for his country, for all of science, and for stem cell research in particular.

The United States is not immune to cases of research misconduct. In one of several examples in 2005, Paul Kornak, a researcher with the Veterans Administration in Albany, New York, admitted that he had forged medical records. The forgeries made it possible for individuals to enter drug trials for which they were not qualified, and one of those individuals subsequently died, apparently as a result of his participation. Although cases such as this receive limited media attention, they deserve our attention as much as the case of Hwang. The problem we face is not just how to minimize the occurrence of such cases, nor is it just about the biomedical sciences and human health. The more fundamental problem is the need to define more clearly what constitutes responsible conduct in all areas of academic inquiry.

Standards of conduct should include much more than just avoiding behavior that is clearly illegal. During the past 15 years, numerous studies have provided evidence that on the order of one-third of scientists struggle with recognizing and adhering to accepted standards of conduct. This does not mean that large numbers of scientists are knowingly engaging in research misconduct, but it is reasonable to conclude that many lack the tools, resources, and awareness of standards that would serve to sustain the highest integrity of research. The pursuit of knowledge is a noble end, but we scientists owe more to the public and to ourselves than to ignore the ethical foundations of what we do. If we expect our colleagues to act responsibly, then we must provide them with the knowledge and support they need.

In academia, we recognize that the remedy for gaps in knowledge and skills is education and training. Because the purpose of science is to have an impact on the human condition, the conduct of science is defined by ethical questions: What should be studied, what are the accepted standards for the conduct of research, and what can be done to promote the truthful and accurate reporting of research? The answers to these questions are not normally found in a K-12 education or in college, and surveys of researchers suggest that they are only rarely provided through research training. Something more is required. Institutions of higher education are the logical places to fill this gap.

In the area of research ethics, scientists have obligations to the public that grants them the privilege to conduct research, to private and public funders who expect that research will be conducted with integrity, to the scientific record, and to the young people they train. These are not mere regulatory obligations; they are also the right thing to do. That said, these obligations are addressed in part by a National Institutes of Health (NIH) requirement, now in place for 15 years, that those supported by NIH training grants should receive training in the responsible conduct of research (RCR). The domain of RCR training includes not only the ethical dimensions of research with human subjects, but every dimension of responsible conduct in the planning, performance, analysis, and reporting of research. This RCR requirement stimulated the creation of educational materials and resources and encouraged the participation of research faculty in the teaching of RCR courses.

Such a requirement is appropriate and important, but limiting the required training to the select few that receive NIH funding unintentionally sends the wrong message. Under these circumstances, it is not unexpected for faculty and trainees to assume that RCR training is just one more bureaucratic hurdle rather than something that has real value. The way to remedy this perception is to implement training programs that engage all researchers.

Expanding RCR training to all will not be easy. In December 2000, the Office of Research Integrity (ORI) and the Public Health Service (PHS) announced that all researchers supported by PHS grants would be required to receive RCR training. Many in the academic community were justifiably unhappy that the policy was a highly prescriptive and unfunded “one size fits all” mandate. The requirement was suspended in February of 2001, just two months after its announcement. The ORI’s decision to suspend the requirement was precipitated by concerns that it had not been developed through appropriate rulemaking procedures. Whatever the shortcomings of that effort, the need for RCR training for all researchers still exists.

Before the requirement was suspended, an RCR education summit was convened by multiple federal agencies. The goal of the summit was to address the roles of the federal government and federally funded research institutions in meeting a common interest in effective RCR education for all scientists. In that meeting, Jeffrey Cohen, who was then director for education at the Office for Human Research Protections, clearly articulated the apparent dilemma. On the one hand, a federal requirement for RCR education could readily result in a prescriptive and inflexible program that would not be effective. On the other hand, in the absence of a federal mandate, research institutions had only rarely created programs to promote RCR.

The good news is that the initial announcement of a requirement stimulated many institutions to begin developing programs for RCR training. Unfortunately, once the requirement was suspended, efforts to enhance RCR education slipped down the list of priorities. The U.S. experience appears to be that although research institutions talk about the importance of ethics, most are funding little more than what is required for compliance. Today, the challenge for the research community is to promote RCR education in the absence of a regulatory mandate.

Continuing with the status quo is not good enough. Or, more precisely, funding only the minimum required to comply with external regulations is inadequate. However, although an increased focus on ethics is an admirable goal, resources are scarce. If we hope to do more to promote ethics, the inevitable question is: What will it cost? We could begin with a prescriptive list of what must be done and then ask how much those programs would cost. But general implementation of that approach is impractical, if only because circumstances vary so greatly from institution to institution.

A better formula would be to make ethics support commensurate with the size of the research program. A similar approach was carried out with the allocation of 3% of the Human Genome Project research budget to study its ethical, legal, and social implications. Given the necessary resources, each institution could then implement the kinds of programs most appropriate to its culture and needs. Unfortunately, it is unlikely that today’s research institutions can realistically consider a 3% allocation in the face of declining research budgets. So if not 3%, how much?

In health care policy, a “decent minimum” is often discussed as a standard for judging what should be in place for everyone. Given the need for an increased focus on the ethical dimensions of research, it is reasonable to ask what would be a decent minimum above what is currently allocated for compliance. Using the principles that funding should be proportional to the research budget and that formal programs are critical for addressing the ethical dimensions of research, I propose that we begin with a requirement of spending just 0.1% of an institution’s direct research funding for RCR education.
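As a rough illustration of the scale involved, consider what a 0.1% set-aside would yield at institutions of various sizes; the budget figures below are hypothetical, chosen only to show orders of magnitude.

    # Illustrative only: what a 0.1% set-aside for RCR education would amount to.
    # The institutional budget figures are hypothetical.
    RCR_FRACTION = 0.001  # the proposed 0.1% of direct research funding

    examples = [
        ("Large research university", 500_000_000),
        ("Mid-sized institution", 100_000_000),
        ("Small college", 10_000_000),
    ]
    for name, direct_research_budget in examples:
        rcr_budget = direct_research_budget * RCR_FRACTION
        print(f"{name}: ${rcr_budget:,.0f} per year for RCR education")
    # Large research university: $500,000; mid-sized: $100,000; small college: $10,000.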

What could be done with such a modest allocation for research ethics? Intermediate and large research institutions would have dedicated resources to create and carry out a variety of programs to train researchers, to raise awareness of ethical issues and resources, and to engage the public in a shared examination of the ethical and scientific foundations for ongoing and proposed research. Smaller institutions could use their more limited resources to develop partnerships with other institutions and to attend train-the-trainer programs rather than develop programs de novo. In addition, smaller institutions could obtain help with program creation through organizations such as the Association for Practical and Professional Ethics (http://www.indiana.edu/~appe), the Collaborative IRB Training Initiative (https://www.citiprogram.org), the Responsible Conduct of Research Education Consortium (http://rcrec.org), and the Society of Research Administrators International (http://www.srainternational.org).

This year marks the fifth anniversary of the suspension of the PHS requirement for RCR training for the researchers it funds. Rather than continuing to wait for federal action, the research community should take the high ground and exhibit the necessary leadership to ensure that ethics is an integral part of science. The cost of 0.1% is low, and the potential for gain is high. Experience will determine whether the amount is adequate, but it should be possible to win wide agreement that it is a good starting point for a decent minimum.

From the Hill – Fall 2006

Bush vetoes stem cell research bill

Within 24 hours of a Senate vote of 63 to 37 to approve the Stem Cell Research Enhancement Act, President Bush issued the first veto of his presidency, closing a chapter in a long and complex debate over the use of federal funds for human embryonic stem cell research. A vote to overturn the veto failed in the House.

The bill (H.R. 810), which was approved 238 to 194 by the House in May 2005, would have loosened restrictions set by the president in August 2001, when he allowed federal funding of research only on stem cell lines derived from embryos by that date. Proponents of H.R. 810 argued that many of those original cell lines are unsuitable for research and that the number originally expected to be available was overestimated. The Stem Cell Research Enhancement Act would have allowed the government to fund research on cell lines created after August 2001 that met specific ethical standards: only cell lines derived from embryos left over from fertility treatments and donated with the consent of the progenitors, without financial incentives, would have been eligible.

Human embryonic stem cells are derived from several-day-old embryos and can theoretically differentiate into virtually any type of human cell, from blood cells to skin cells. Proponents of federal support for embryonic stem cell research argue that excess embryos left over from in vitro fertilization that are slated to be destroyed could instead be donated for research. Opponents, however, argue that such research would still condone the destruction of a human embryo and that federal dollars should not be used for it.

Despite the president’s veto, congressional support for reducing restrictions on federal funding of stem cell research is growing. The Senate vote included 43 Democrats, 1 Independent, and 19 Republicans. More members of the House voted to overturn the veto than had voted for the bill in 2005.

After the presidential veto, Senate Majority Leader Bill Frist (R-TN), who stunned the research community in 2005 by announcing his support for the House bill, stated, “I am pro-life, but I disagree with the president’s decision to veto the Stem Cell Research Enhancement Act. Given the potential of this research and the limitations of the existing lines eligible for federally funded research, I think additional lines should be made available.”

House challenge to climate change research fizzles

Challenges by climate change skeptics in the House, led by Rep. Joe Barton (R-TX), chairman of the House Energy and Commerce Committee, to studies by climate scientist Michael Mann and colleagues appear to have fizzled after the release of a June 22 National Research Council (NRC) report that supported Mann’s conclusions. Barton, however, has vowed to keep his committee actively involved in the climate change debate and has requested two new studies on research practices in the field.

Despite the NRC report, which concluded that Mann’s statistical procedures, although not optimal, did not unduly distort his conclusions, the Energy and Commerce Committee’s Subcommittee on Oversight and Investigations held two hearings in July, each lasting more than four hours. They focused on the statistical methods used in the 1998 and 1999 studies by Mann, Raymond Bradley, and Malcolm Hughes. Barton argued that the use of the studies in the 2001 Intergovernmental Panel on Climate Change report justified a detailed examination of the methods involved. “A lot of people basically used that report to come to the conclusion that global warming was a fact,” he said.

Mann, Bradley, and Hughes reconstructed temperatures of the past 1,000 years. Because direct temperature measurements date back only 150 years, the researchers used proxy measurements, including tree ring growth, coral reef growth, and ice core samples. They produced a graph that looked like a hockey stick: a long period of relatively stable temperatures, then a dramatic spike upward in recent decades. Critics of the research, however, argued that the hockey stick shape could simply be the artifact of incorrect statistical techniques, and climate change skeptics seized on the graph as a proxy for everything they believe is wrong about climate change research.

During the July 19 hearing, Edward Wegman, a George Mason University statistician, testified on behalf of the mathematicians who reviewed the Mann papers at the request of Rep. Barton. He stated, “The controversy of the [Mann] methods lies in that the proxies are incorrectly centered on the mean of the period 1902-1995, rather than on the whole time period.” He explained that these statistical procedures were capable of incorrectly creating a hockey stick shaped graph.

Gerald North, chair of the NRC committee, testified at the hearing that he agreed with Wegman’s statistical criticisms, but said that those considerations did not alter the substance of Mann’s findings. North said that large-scale surface temperature reconstructions “are only one of multiple lines of evidence supporting the conclusion that climatic warming is occurring in response to human activities.”

At a July 27 hearing of the committee, Mann, referring to his statistical techniques, said that “knowing what I know today, a decade later, I would not do the same.” But he noted that multiple studies by other scientists have reached similar conclusions: Temperatures have been far higher in recent decades.

The July hearings were only the latest round of attacks on Mann’s research. In July 2005, Barton and Subcommittee on Oversight and Investigations Chairman Ed Whitfield (R-KY) solicited not just the climate papers in question, but large volumes of material from Mann and his coauthors, including every paper they had ever published, all baseline data, and funding sources. These requests were fiercely resisted by the scientific research community. Perhaps the harshest rebukes came from House Science Committee Chairman Sherwood Boehlert (R-NY), who called the investigation “misguided and illegitimate.”

Democrats on the Energy and Commerce Committee expressed frustration at the exclusive focus of the July hearings on just the two papers. They stressed that the scientific consensus on human-induced climate change would remain unaltered even if Mann had never written the papers in question. Rep. Bart Stupak (D-MI) said he was “stupefied” at the narrow scope of the hearings and said that Congress was “particularly ill-suited to decide scientific debates.” Rep. Jay Inslee (D-WA) called the hearing “an exercise in doubt.”

Barton said at the July 27 hearing that he had requested a study from the Government Accountability Office on federal data-sharing practices, particularly in climate science research, and that he planned to request a study from the National Research Council’s Division on Engineering and Physical Sciences on how to involve more disciplines in climate change research.

Bills target attacks by animal rights activists

Bills have been introduced in the House and Senate to address the growing issue of attacks, particularly on laboratories, by extremist animal rights groups, which Rep. Howard Coble (R-NC) said are having a “chilling effect” on research.

The House Judiciary Subcommittee on Crime, Terrorism and Homeland Security held a hearing in May on H.R. 4239, the Animal Enterprise Terrorism Act (AETA), sponsored by Rep. Thomas Petri (R-WI). The bill would make it a crime to harass, threaten, or intimidate individuals, or their immediate family members, whose work is related to an animal enterprise (including academic institutions and companies that conduct research or testing with animals). It would also make it a crime to cause economic disruption to an animal enterprise or to those who do business with animal enterprises, an intimidation technique called tertiary targeting. The bill adds penalties and allows victims to seek restitution for economic disruption, including the reasonable cost of repeating any experiment that was interrupted or invalidated as a result of the offense.

Chairman Coble, a cosponsor of the legislation, outlined the key issue of the hearing: the need to balance the enforcement of laws against these crimes with the protection of First Amendment rights. He announced that an amendment would be introduced at markup to ensure that the bill does not prohibit constitutionally protected activities, even though he believes that the bill already contains such language.

Michele Basso, an assistant professor of physiology at the University of Wisconsin, Madison, testified about the harassment she has received as a result of her research with primates. She said that animal rights activists protest regularly at her home, have signed her up for subscriptions to 50 magazines, and have made numerous threatening phone calls. She said that university officials do not provide sufficient security and that she and some colleagues have thought about leaving the field and pursuing other research. She added that some colleagues in the United Kingdom are spending so much time on security measures that their research is suffering.

William Trundley, director and vice president of corporate security and investigations at GlaxoSmithKline, said that his company has been under attack in both the United States and United Kingdom. He said that many employees, whom he described as “traumatized,” have had their property vandalized, and researchers’ families have been harassed.

The Animal Enterprise Protection Act (AEPA) of 1992 protects animal enterprises against physical disruption or damage, but says nothing about tertiary targeting of people or institutions that conduct business with an animal enterprise. Brent McIntosh, deputy assistant attorney general, testified that AEPA is not sufficient to address the more sophisticated tactics used by animal rights extremists today. “The bill under consideration by the subcommittee would fill the gaps in the current law and enable federal law enforcement to investigate and prosecute these violent felonies,” he said.

Rep. William Delahunt (D-MA) argued that these activities are already well covered by local and state laws and should be prosecuted at that level. However, both McIntosh and Basso testified that local law enforcement authorities see these activities as minor crimes (spray painting, trespass, etc.) and generally have little inclination to pursue the perpetrators. Further, those who commit these crimes often receive at most minimal fines or short jail sentences. Rep. Bobby Scott (D-VA), a cosponsor of AETA, concurred with the witnesses that local laws cannot address the national scope of this activity.

A companion bill was introduced by Sen. James Inhofe (R-OK) in October but has not advanced out of committee. Committee staff said they are hopeful that the bills will advance in the fall of 2006.

Bills to boost competitiveness advance

In a vote demonstrating a bipartisan commitment to boosting U.S. economic competitiveness, the House Science Committee on June 7 approved the Science and Mathematics Education for Competitiveness Act (H.R. 5358) and the Early Career Research Act (H.R. 5356). The bills were originally introduced with only Republican sponsorship, but enough changes were made to bring all Democrats onboard.

Although Committee Chair Rep. Sherwood Boehlert (R-NY) characterized the bills as complementing President Bush’s American Competitiveness Initiative (ACI), White House science advisor John Marburger sent a letter to Boehlert stating that the bills contain “very high authorizations” of spending and would diminish the impact of the ACI.

The bills strengthen existing programs at the National Science Foundation (NSF) and Department of Energy’s (DOE’s) Office of Science. The Science and Mathematics Education for Competitiveness Act would expand NSF math, science, and engineering education programs, including the

  • Robert Noyce Teacher Scholarship Program, which provides scholarships to math and science majors in return for a commitment to teaching. The bill includes more specifics on the programs that grant recipients must provide for students to prepare them for teaching, including providing field teaching experience. It also allows those programs to serve students during all four years of college, although scholarships would still be available only to juniors and seniors, and raises the authorization levels for fiscal years 2010 and 2011. NSF will be required to gather information on whether students who receive the scholarships continue teaching after their service requirements are completed.
  • Math and Science Partnership Program, which would be renamed the School and University Partnerships for Math and Science. In addition to teacher training, the bill would allow grants for other activities, including developing master’s degree programs for science and math teachers.
  • Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP), which provides grants to colleges and universities to improve undergraduate science, math, and engineering programs. The bill would allow the creation of centers on undergraduate education.

The legislation also requires NSF to assess its programs in ways that allow them to be compared with education programs run by other federal agencies.

The Early Career Research Act was amended to include provisions from H.R. 5357, the Research for Competitiveness Act, and passed unanimously. The bill authorizes programs at NSF and DOE’s Office of Science to provide grants to early-career researchers to conduct high-risk, high-return research. The bill also expands an NSF program that helps universities acquire high-tech equipment that is shared by researchers and students from various fields.

The amended bill also includes several provisions concerning the National Aeronautics and Space Administration (NASA). A new section expresses the sense of the Congress that NASA should participate in competitiveness initiatives within the spending levels authorized in the NASA Authorization Act of 2005 and allows NASA to establish a virtual academy to train its employees.

Committee backs funding for new energy technologies

The House Science Committee voted on June 27 to approve the Energy Research, Development, Demonstration, and Commercial Application Act of 2006 (H.R. 5656), which authorizes $4.7 billion over six years for the development and promotion of new energy-related technologies. The committee, however, did not approve the creation of a new agency within DOE to accelerate research on targeted energy technologies.

The bill brings together multiple bills, including one introduced by Energy Subcommittee Chairman Judy Biggert (R-IL), to authorize and specify the implementation of the president’s Advanced Energy Initiative (AEI). The AEI provides for a 22% increase in clean-energy research at DOE.

Biggert noted the difficulties involved in altering U.S. energy-use practices and regulations. “To make significant progress down this path requires a steadfast commitment from Congress and the federal government to support the development of advanced energy technologies and alternative fuels that will help end our addiction to oil and gasoline,” she said. “The bill we are considering today includes provisions that do just that, by building on the excellent research and development provisions this committee included in the Energy Policy Act of 2005.”

The legislation funds research in photovoltaic technologies, wind energy, hydrogen storage, and plug-in hybrid electric vehicles. It makes grant money available for the design and construction of energy-efficient buildings, as well as for further educational opportunities related to high-performance buildings for engineers and architects. The bill also gives what Committee Chairman Rep. Sherwood Boehlert (R-NY) called an “amber light” to the Global Nuclear Energy Partnership (GNEP), financing the program but requiring further analysis before large-scale demonstration projects can proceed. Under the GNEP, the United States would work with other countries to develop and deploy advanced reactors and new methods to recycle spent nuclear fuel, which would reduce waste and eliminate many of the nuclear byproducts that could be used to make weapons. Further support goes to FutureGen, a program aimed at developing an emissions-free coal plant with the capacity for carbon capture and sequestration.

The committee decided to seek further input from the National Academies on a proposal to create an Advanced Research Projects Agency for Energy (ARPA-E), patterned after the successful Department of Defense DARPA (Defense Advanced Research Projects Agency) program. The National Academies recommended creating an ARPA-E in its 2005 report Rising Above the Gathering Storm. Biggert questioned whether this “new bureaucracy” would really help, and Boehlert worried that “a lot of unanswered questions” remained about the details of ARPA-E and expressed concerns about its funding. Rep. Bart Gordon (D-TN), who had introduced legislation in December 2005 to create an ARPA-E, offered an amendment to establish the agency within DOE, maintaining that the language was sufficiently clear and that the provision already represented a finished product from the National Academies. His amendment was defeated, and the committee kept language instructing the National Academies to create a panel to further study and make recommendations on the ARPA-E concept. The Senate Energy and Natural Resources Committee supported the concept of ARPA-E when it passed S. 2197 in April 2006.

Energy research was also supported in H.R. 4761, the Deep Ocean Energy Resources Act, passed by the House on June 29. The bill authorizes two new DOE research and education programs at a combined total of $37.5 million a year for each of the next 10 years. The new programs would provide grants to colleges and universities for “research on advanced energy technologies”; specifically, the grants could be used for research on energy efficiency, renewable energy, nuclear energy, and hydrogen. The programs would also provide graduate traineeships at universities and colleges for research in those same areas.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

The Shield and the Cloak

Imagine the 21st century as a three-dimensional chess game. One dimension represents the United States. One dimension represents the world of nation-states. The third dimension—a new one— represents stateless nations.

In the 20th century, national security was mostly two-dimensional. The United States and its democratic allies, the white pieces, faced off against other nation-states (imperialist, fascist, or communist), the black pieces. The democratic nations, the white pieces, prevailed because our pieces together were more powerful and, in most cases, we moved them more cleverly. In the 20th century, security was achieved by the clever positioning of powerful forces according to the rules of the traditional two-dimensional game.

As of 9/11, a new third dimension, nonstate actors, imposed itself on the security chess board. Nonstate actors do not play with the same figures or pieces. No knights in uniform. No rooks sheltering stable national wealth. No kings and queens enthroned in national capitals. They also will not participate on the old two-dimensional chess board. Most of all, they refuse to play by the rules. Thus, security cannot be achieved in this new century by using the same pieces and playing by the old rules.

Security can be won only by creating imaginative new pieces, deploying and maneuvering them much more creatively and swiftly, and consolidating the forces of the traditional two dimensions into a global commons: a figurative arena in which collective security interests are deployed for the common good. The United States must also be willing to welcome new players (for example, by engaging China as it did in containing the North Korean nuclear threat) and to use its collective genius and wisdom to create new security rules for this new multilayered global chess game.

The knights, or military forces, must look different, like the Delta Forces in Afghanistan, and be trained and equipped differently. Wealth must be brought out of its protective national castles and invested more wisely in mastering new sciences and technologies to reduce threats of climate change and pandemics. The kings and queens, political figures out of touch with 21st-century realities, must be replaced by leaders smart enough to fully understand the new dimension and bold enough to define new rules for the new game. It also would not hurt if the bishops, the religious leaders, played a more enlightened and constructive role.

The new security will be national and international, defensive and offensive. It will require a shield and spear, representing new kinds of military forces, as well as a cloak that protects the global commons from nonmilitary threats. The old security required containing the Soviet Union within its borders. The new security requires a shield protecting the homeland from terrorist threats and a spear to pin the terrorists in their caves. The old security required cooperation among Western armies. The new security requires cooperation among intelligence services. The old security required massive weapons in massive numbers. The new security requires special forces of individual warrior teams searching for terrorists in tunnels and caves. The old security required economic dominance. The new security requires economic integration in a world of international markets, trade, and finance. The old security meant prevention of nuclear war. In addition to that goal, the new security is a cloak composed of security of livelihood, security of energy, and security of the environment.

Consider the collection of new developments, almost all of them neutral on any security scale, that together create huge insecurities. Technology itself is at the top of the list. Technology as applied to destruction is producing increasing numbers of nuclear, chemical, and biological weapons capable of mass casualties and mass destruction of property.

Nations have yet to use biology in the form of viral plagues or other maladies against each other, although testing and experimentation with such agents have been known to take place. In an age of suicidal terrorism, it is certainly quite easy to conceive of any number of attackers willingly infecting themselves with a highly toxic, highly contagious virus and fanning out through subway systems, sports events, and shopping malls in the United States to create epidemics. And Hiroshima and Nagasaki tell us all we need to know about nuclear destruction in cities.

Technology is also miniaturizing and privatizing the manufacture of weapons of mass destruction. Until recently the province of nation-states, the production of such weapons by nonstate actors in small laboratories, particularly in the case of biological weapons, is rapidly becoming more feasible.

In some ways, weapons of mass destruction represent dual threats: from their use and from the technology democratizing their production and ownership. This simply means that when the genie of mass destruction escapes the lamp, it cannot be put back in by nation-states negotiating treaties to do so. The political equation based on the state monopoly on violence is being fundamentally and perhaps permanently altered by technology.

Other new realities are, or soon will be, threats to security. Mass migrations from Africa to Europe and from Latin America to the United States are fundamentally changing cultures and societies. Europe’s Muslim population is growing rapidly because of such migrations and its own high birth rates. By 2020, one-quarter of all U.S. citizens will be Hispanic. Neither is, by itself, a bad development, but each will have consequences that must be understood. A society that receives a large and rapid inflow of people may or may not retain its historic values, beliefs, customs, and cultures. Probably not. And as this trend accelerates, the demographic, social, and political changes will create a sense of insecurity among those who find comfort in their traditional cultures.

Although the threat of AIDS seems to have been contained for the moment, albeit at a very high level, in the United States and most of Europe, it continues to decimate the populations of many Asian, African, and Latin American countries. In addition, almost as many people, particularly children, are dying of malaria in these same countries. Though these may seem distant threats to an uncontaminated American, they destabilize nations and whole economies and create almost unbearable mountains of human misery. And they contribute to state failure. Into such voids flow religious fundamentalism, clans led by warlords, mafias seeking control of vital resources, and terrorist organizations offering identity and at least limited security to stateless, rootless, hopeless people.

Not all biological danger is created or spread by humans. Virologists are concerned that highly contagious pathogens are capable of outrunning our efforts to contain them. Almost a century ago, an influenza epidemic killed roughly 50 million people worldwide before burning itself out. In the minds of some, the Asian bird flu represents at least the same potential. Commenting on the mounting threat from the bird flu virus, a World Health Organization official said, “We at WHO believe that the world is now in the gravest possible danger of a pandemic,” a global pandemic that could kill millions. Needless to say, modern transportation will hasten the spread of any disease.

After the end of the Cold War in 1991, and particularly since September 11, 2001, the United States has more often than not taken its sole-superpower status to mean that the world has no choice but to follow it; that it is the U.S. way or the highway. The facts suggest that this attitude is swiftly becoming illusory. The European Union is consolidating its political and economic power and is beginning to discuss a collective defense strategy, with its own rapid deployment capability, separate from that of U.S.-led NATO. Led by China, Japan, and South Korea, East Asia is forming the largest trading bloc in the world, without U.S. participation or even U.S. consultation. And the U.S. domination of space for military and communications purposes is being challenged by Europe in cooperation with China.

The more the United States goes it alone, with the expectation that the rest of the world has no choice but to follow, the more the rest of the world is beginning to prove otherwise. Instead of ignoring the aspirations of other nations and collections of nations, the United States should encourage them. Otherwise, it will soon find itself in the unenviable position of being the world’s cop, troubleshooter, shield, and target, while other nations collectively pursue the cloak of better and more productive lives.

To explore new methods of threat reduction—“drying up the terrorist swamp” is one colorful metaphor—requires international alliances and a U.S. example. The first has a better chance of achievement if accompanied by the second. If globalization is made inclusive and expanded to developing and undeveloped nations, it can be a great opportunity to replace hopelessness with hope. Likewise, if access to information technologies held by advanced societies is shared, it will narrow the gap between advancing nations and the rest of the world and can revolutionize lagging national economies.

In many ways, success in achieving security in the early 21st century will be measured by the imagination shown by the United States and nations of good will in inventing opportunities to convert global revolutions into threat-reduction policies for the commons. Information technologies, such as low-cost wireless communications, can transform even the most rural economies and help markets to develop. In the 1990s, I helped a major U.S. telecommunications company develop telecommunications projects in Eastern European and post-Soviet markets and overcome the political hurdles to pioneering in these regions. The transformative impact of modern communications was demonstrated in Hungary, Czechoslovakia, Poland, and other Eastern European Soviet satellites in the late 1980s and early 1990s, when advanced Western communications systems, quickly installed, revolutionized stagnant economies and created vital urban and rural markets.

There is a direct correlation between a nation’s willingness to open its doors to other nations and the degree to which it is seen as a threat to others. Every nation has secrets, even from its closest allies and friends. There are very large parts of the United States that are inaccessible not only to friendly foreigners but also to U.S. citizens.

The security of the commons in the future will be achieved in direct proportion to humanity’s ingenuity in reducing the causes of insecurity. It is possible to use technology, modern science, the communications revolution, and globalization and trade to improve the lives of billions. It is possible to stabilize fragile states and improve economies, thus reducing the causes of mass migration. It is possible, at least for a few years to come, to reverse dangerous climate change. It is possible to control epidemics and attack new and old diseases. It is possible to bring the vast majority of the global population who are committed to good will closer together and further isolate and suppress radical fundamentalists, suicidal zealots, and forces of destruction and death. It is possible to dramatically reduce the proliferation of destructive technologies. These and many other historic achievements, some not conceivable before, are all now possible.

The hard part is not in knowing what must be done and how to do it: The hard part is generating the political will to do what must be done.

With principles such as civic membership as a foundation, the pieces of a new security structure begin to appear. The U.S. Commission on National Security chose early in its deliberations to define security more broadly—to include cloak with shield—than in the narrow military sense inherited from the Cold War. Our equal numbers of progressive and conservative members understood that a mighty army and a weak government, or better weapons and worse schools, or greater firepower and a rejection of public service made no sense. So our reports urged a major increase in education investment, particularly in the sciences and mathematics, as necessary to a prosperous information-age economy and as the basis for a strong and secure nation.

After the creation of a new national homeland security agency, recapitalizing U.S. strengths in science and education was the next-highest priority. “The scale and nature of the ongoing revolution in science and technology, and what this implies for the quality of human capital in the 21st century, pose critical national security challenges for the United States,” we advised the new Bush administration. “Second only to a weapon of mass destruction detonating in an American city, we can think of nothing more dangerous than a failure to manage properly science, technology, and education for the common good over the next quarter century.”

Strong words, but carefully chosen. We found that the U.S. need for the highest-quality human capital in science, mathematics, and engineering is not being met. And we argued that this is not merely an issue of national pride or international image; it is an issue of fundamental importance to national security. Despite our calls on the president and Congress to double the U.S. government’s investment in science and technology by 2010, five years later we have not even begun. We found that 34% of public-school math teachers and almost 40% of science teachers lack even an academic minor in their primary teaching fields. We proposed an additional 240,000 teachers of science and math in elementary and high schools. It has not been done. We urged more scholarships for science and engineering students. Five years later, we have yet to make a start. We proposed detailed plans for scholarships and low-interest education loans, forgiveness of student debts for those entering military or government service, a national-security teaching program to train very large numbers of new teachers, and the financing of professional development and lifelong learning—all in the national security interest. None of this has been done.

We linked education to economic prosperity, economic prosperity to national security, and national security strength to world leadership. The divergence between stark new realities and the nation’s lack of response to them illustrates the central point of this argument: Either U.S. leaders come to understand the new dimensions of security or the nation is doomed to insecurity and eventual decline. Beyond doubt, those who today refuse to take the steps necessary to guarantee our vitality will see China’s inevitable challenge to U.S. economic, and therefore political, leadership as Chinese aggression rather than U.S. lassitude.

In the 21st century, the engines of economic growth will be science and technology. The United States is not producing enough scientists, mathematicians, physicists, or engineers or those qualified to teach in these fields at the K-12, university, or graduate school levels. Much of the research that created the basis for U.S. security in the Cold War era was produced in the national laboratory system. Since the end of the Cold War, that system has been in decline. Other nations in Europe and Asia, especially the Chinese, are increasing their investments in all fields of science and in scientific and technological research.

In January 2001, the commission recommended doubling the U.S. government’s R&D budget by 2010 and instituting a more competitive environment for the allocation of those funds. It also recommended elevating the responsibilities of the president’s science advisor, resuscitating the national laboratory system, and passing a new national-security science and technology education act to produce a dramatic increase in the number of science and engineering professionals and qualified teachers in science and math. Much of this thinking mirrors the transformation in science and technology brought on by the dramatically increased U.S. investment stimulated by the Soviet launch of the Sputnik satellite in the 1950s. None of these things has been done or even begun.

Rather than invade Middle Eastern countries whose possession of weapons of mass destruction or whose threat to the United States is at best dubious, we should steer our own nation in directions that will strengthen it, stimulating economic growth and rewarding the pursuit of the objectives required to keep it on the cutting edges of science and technology. Despite its great power, the United States neither could nor should prevent other nations, including China and Russia, from succeeding and growing. Indeed, there is no stronger deterrent to war than a ringing cash register. And increased economic interdependence reduces the likelihood of conflict. In the 21st century, neither isolation nor empire is an option for the United States.

Somewhere among these ideas—the republican ideal of civic virtue, the sense of commonwealth and the common good, and a U.S. civic nationalism that is internationalist— rest the secrets to achieving security’s shield and cloak.

When we restore the idea of the commons, the sense that security is both a shared obligation and a shared right, we will emerge from our individual heavily fortified homes and castles into that commons and defy any threat, terrorist or otherwise, to defeat us. Together we will be strong, we will be unbeatable, we will possess security’s cloak and its shield. For this is security’s web.

From Energy Wish Lists to Technological Realities

The aspiration for new technology has been at the heart of every energy policy developed since the first oil embargo in 1973. President Bush’s 2006 State of the Union address continued the quest for new technology by proposing the Advanced Energy Initiative, which once again calls for “greater use of technologies that reduce oil use” and “generating more electricity from clean coal, advanced nuclear power, and renewable resources.” However, history shows that the hard problem for energy policy is not how to craft another technological wish list but how to turn technological aspirations into reality. And the government has been slow to use its own experience to learn how to solve this problem.

The public policy goal for energy technology can be expressed simply: to induce technological innovations in the private sector that serve national energy policy. Stated thus, this goal embodies three fundamental principles about how the innovation process works that are grounded in long experience with federal energy R&D. Because these principles differ in some important respects from conventional wisdom, understanding them is the place to begin a discussion of how to achieve the goal.

First, innovation in energy technology happens almost entirely in the private sector. The process of bringing a new product to market involves the most intimate of relationships between buyer and seller. Both are entering uncharted waters, and the balancing of risks among the parties is often a delicate compromise. The Department of Energy (DOE), or any other government bureaucracy for that matter, is too clumsy a partner to enter into this relationship in a meaningful way.

BECAUSE TECHNOLOGICAL INNOVATION IS PRIMARILY A PRIVATE SECTOR ACTIVITY, INDUCING THE PRIVATE SECTOR TO DO MOST OF THE WORK OF DEVELOPING NEW ENERGY TECHNOLOGY IS AN ALMOST SELF-EVIDENT STRATEGY.

A 2001 report by the National Research Council (NRC) underscores this principle. The report examined the track record of DOE research in the areas of energy efficiency and fossil energy between 1978 and 2000. As part of its work, it asked several experts to name the most important technological innovations that actually entered the energy system during this period, without regard to their source. Then they were asked how important government-funded research was to these technologies. Of the 23 technologies that were listed, in only 3 cases did the government program make a major contribution, and in 7 DOE’s research was moderately helpful. The government’s role was minimal in the other 13.

Even where DOE played a major role, the private sector carried much of the load. In one case, DOE developed diamond bits for drilling into hot dry rock as part of its geothermal energy program. That program did not flourish, but the drill bit technology found a home in the oil and gas industry. In another, DOE invested a modest $3.2 million in the development of electronic ballasts for fluorescent lights, which enabled a small firm to introduce the product in the early 1980s. The entry of this new competitor motivated the two dominant lighting companies to adopt the technology. In the case of efficient refrigerators, DOE leveraged a $1.6 million research effort with a program to impose stricter efficiency standards on refrigerator performance, thus encouraging industry to adopt the technology developed by DOE.

Although it would be a mistake to read too much into these anecdotes, the NRC report nevertheless underscores the importance of the private sector in energy technology innovation and the varied and sometimes unintended paths by which government-sponsored research influences the private sector’s actions.

The second principle is that technological innovation is more than R&D. DOE and many other government agencies typically describe the process of developing new technology as Research, Development, Demonstration, and Deployment (RDD&D). This linear model may reasonably characterize the innovation process for technologies, such as weapons systems or space probes, for which the government is a customer. Innovation in energy technology, however, deals with the more complex problem of getting new products into the hands of private-sector buyers.

Rather than being the linear process characterized as RDD&D, the private-sector innovation process is incremental, cumulative, and assimilative—in a word, messy. It typically proceeds in small steps because an incremental approach helps minimize risk to buyer and seller. However, the accumulation of such increments can ultimately add up to breakthrough technologies. Finally, the innovator often reaches out to diverse sources of knowledge and technology, assimilating them in novel ways for new markets. A Resources for the Future study of technology innovation in natural resource industries summarized the process this way: “Even technologies subsequently recognized as revolutionary went through extended periods of adaptation and adoption. In many cases, additional technological developments were required to enhance applicability of an initial innovation. It has also been the case that one innovation does not achieve its full effectiveness until complementary albeit ostensibly unrelated technologies are developed.”

Finally, the reason for government to intervene in private-sector innovation is to remove obstacles to meeting national energy policy goals. The private sector can serve energy policy without help from government, as shown by the NRC report discussed earlier. But there are cases, often important ones, when national policy requires inducing the private sector to innovate in areas that would otherwise lie fallow. Knowing when and how to intervene is thus a crucial policy judgment.

To drive home this point, consider another conclusion of the NRC report. It calculated the economic, environmental, and security benefits produced by 39 applied research projects in DOE’s fossil energy and energy efficiency programs. Overall, the report estimated that DOE generated some $40 billion in economic benefits for the roughly $13 billion it spent on these programs between 1978 and 2000. (The report also identified environmental and security benefits that are harder to quantify.) But what is most interesting for policy is the highly skewed way in which this generally positive result was achieved. A handful of programs produced most of the benefit, whereas most of the investment resulted in very little:

  • A mere 0.1% of the expenditure accounted for three-quarters of the benefit. Three programs on refrigerator efficiency, electronic ballasts for fluorescent lighting, and low-emissivity windows created $30 billion in economic benefit for a total expenditure of $13 million.
  • Three-quarters of the expenditure—a little over $9 billion—produced no quantifiable economic benefit. Half of this money was applied to synthetic fuel projects that turned out to be at least a couple of decades premature. Developing synfuels technology may have been a reasonable goal at the time, but as will be discussed later, it could have been approached more modestly.
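These proportions follow directly from the figures already cited; the back-of-the-envelope check below is a reader’s sketch, not a calculation taken from the NRC report itself:

\[
\frac{\$13\ \text{million}}{\$13\ \text{billion}} = 0.1\%, \qquad
\frac{\$30\ \text{billion}}{\$40\ \text{billion}} = 75\%, \qquad
\frac{\$30\ \text{billion}}{\$13\ \text{million}} \approx 2300.
\]

In other words, the three standout programs returned on the order of a few thousand dollars of estimated benefit per dollar spent, whereas the rest of the portfolio, taken as a whole, generated the remaining $10 billion or so in quantifiable economic benefit on a much larger expenditure.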

No one who has run an applied research program will be surprised by a few unexpected home runs or inevitable failures. But the DOE experience does suggest that there are lessons to be learned about how the government spends taxpayer money to influence technology innovation.

So how should government go about inducing technology innovations in the private sector that serve national energy policy? Four strategies seem especially important.

Provide private-sector incentives to pursue innovations that advance energy policy goals. Because technological innovation is primarily a private-sector activity, inducing the private sector to do most of the work of developing new energy technology is an almost self-evident strategy. The challenge for government policy is to find incentives that most effectively harness the innovative drive of the private sector.

The most effective incentive is to attach an economic value to the policy goal itself. For example, a carbon tax, or a cap on carbon emissions coupled with an allowance trading system, sets a price on carbon emissions. Similarly, requiring a floor price for oil that reflects its security and environmental risks would impart a value to reduced oil dependence. Responding to these market incentives, private-sector innovators will seek out the least-cost methods for achieving the policy goal. What makes this approach so effective is that it does not limit the innovative imagination. Thus, when a cap-and-trade system was established in the early 1990s for sulfur oxide emissions, the private sector responded by increasing the use of low-sulfur Western coal. This surprised many analysts who assumed that expensive technology to scrub sulfur oxides out of power plant exhaust gases would be the innovation of choice.

A second-best incentive is regulation designed to encourage either the introduction of new technologies or the improvement of existing ones. For example, many states now set renewable portfolio standards, which require electric utilities to generate a minimum amount of power from renewable energy sources. Another approach is to impose technology standards that require more efficient versions of familiar household appliances.

Technology standards have demonstrated their effectiveness in the case of the refrigerator standards described above and at least in the early years of the Corporate Average Fuel Economy (CAFE) standards. Nevertheless, regulation suffers from two disadvantages as compared to more broadly based market incentives. First, it tends to limit the scope of innovative activity because of the focus on specific technologies or applications. Second, regulation rather than energy policy becomes the driver of innovation, and they are not the same thing. The implementation of CAFE standards, which improved the fuel efficiency of passenger cars but promoted the market for light trucks, shows how unintended consequences can result from well-intentioned regulation.

Outright subsidies for the adoption of new technology, such as production tax credits for the adoption of solar and wind power, are the least effective form of incentive, because they address only one aspect of the problem of creating value in the marketplace. To succeed, a new technology must overcome the inertia of the market, which favors existing products, and must then rapidly become cost-competitive as production increases. No matter how good the new technology, the first few units will inevitably be more expensive than existing technology.

The value of a subsidy is that it enables the producer to sell early units at a competitive price. However, after that introductory period, the product must be able to compete on its own. To do so without the market or regulatory incentives discussed above, the cost of the new technology must drop very quickly as production increases, and this is a hard condition to meet. Arguably, the reason why production tax credits for wind power seem to have worked well is that the cost of wind technology has dropped rapidly thanks to relatively straightforward engineering and production improvements. The same cannot yet be said for solar photovoltaic cells, which require more research to become cost-competitive.

Conduct basic research to produce knowledge likely to be assimilated into the innovation process. This policy follows directly from the assimilative nature of innovation. The challenge is to design a basic research program that actually sets the table for innovators in a specific field of technology. Purely curiosity-driven research certainly produces useful knowledge, but it is not optimized for solving energy problems. On the other hand, research in support of applied technology, even if it is fundamental research, lacks the breadth to produce breakthrough ideas from unsuspected sources.

Two guidelines can help strike the right balance. One is to support ideas, even very exotic ones, that would, if successful, overcome fundamental weaknesses of known technology. For example, Marilyn Brown of Oak Ridge National Laboratory has examined how novel technologies might accelerate advances in energy efficiency. She suggests that technologies that manipulate materials at the nanoscale, apply molecular biology to energy problems, and draw on advanced computing capabilities could overcome the thermodynamic performance limits of existing energy systems.

The other guideline is to support principal investigators who are driven to apply their disciplinary knowledge to energy problems. If the innovation process is to benefit from assimilation, energy technology needs to be connected to diverse disciplinary sources of knowledge. This connection has to be a two-way street. Of course, energy technologists need to be looking for new ideas. But equally important is that scientists with new ideas need to be looking for energy applications.

The careers of Richard Smalley and Craig Venter exemplify the kind of bridges that need to be built from fundamental research in a variety of fields to long-term innovation in energy systems. Smalley, a Nobel Prize winner in chemistry, came to understand that his field of nanotechnology could greatly improve the efficiency of catalysts, solar cells, and other devices important for energy production and use. Venter, who led the private-sector group that mapped the human genome, is working to find and modify microbes that could be efficient biological sources of energy.

Target applied research toward removing specific obstacles to private-sector innovation. The fact that innovation is almost entirely a private-sector activity does not mean that it is always successful there. Economists who study innovation have identified several kinds of obstacles. One is the possibility that the innovator cannot capture enough of the benefit of innovation to justify the cost and risk of bringing a new product to market. For example, other firms may be able to copy the innovation so quickly that, despite the presumed protection of patents, the advantage of being first to market is greatly diluted. Or a firm may come up with a new product idea that is so far outside its regular business that it is unable or unwilling to bring it to market. Finally, the risk of innovation may be so great that financial markets are unwilling to risk the capital needed to proceed.

When these obstacles arise, government can often step in to put private-sector innovation back on track. Its reason for doing so is that the innovation would produce a public good, such as reduced carbon emissions, that justifies intervention. Two of the home-run projects identified in the NRC study of DOE research were of this type. The technology for low-emissivity windows and electronic ballasts was important, but equally important was DOE’s ability to help get a product introduced, which in turn motivated the manufacturers of existing technology to adopt the innovations and thus advance energy policy goals.

THE MISSING LINK IN CURRENT ENERGY POLICY IS TO REWARD THE PRIVATE SECTOR DIRECTLY FOR MEETING ENERGY POLICY GOALS; THAT IS, TO PUT A PRICE ON CARBON PRODUCTION AND OIL CONSUMPTION.

Removing such obstacles should be a main goal of DOE’s applied research programs in nuclear, fossil, and renewable energy as well as in energy efficiency. Furthermore, it appears that this applied research is most cost-effective when it is targeted with some precision. The more precisely the private-sector obstacle is defined, the more surgically it can be removed. Doing so depends on developing a close partnership between government and the private sector.

The NRC study identified several examples of close cooperation that resulted in government research projects with solid benefit/cost ratios, even if they fell short of home-run success. For example, DOE (and before it, the Bureau of Mines) sponsored early-stage research to establish the extent of the coalbed methane resource and also pilot-tested some techniques for accessing it. Thereafter, the Gas Research Institute, an industry research organization, took the lead in developing coalbed methane. In another case, DOE organized a consortium of metal-casting companies to develop a more efficient casting technology that no one company could afford to risk on its own. The resulting technology became an industry standard.

Invest with care in technologies to serve markets that do not yet exist. The private sector innovates in the hope of creating value for future markets. Often this process, although driven entirely by private benefit, is enough to produce the innovations that energy policy desires. In some cases, however, development times are so long, the policy imperative is so profound, and the future market is so uncertain that government is justified in rushing innovation ahead of its natural pace. At the time of the first energy crisis, this idea lay behind the synthetic fuels program and DOE’s funding of coal liquefaction, coal gasification, and oil shale technologies. Today, the hydrogen economy, cellulose-based ethanol production, and zero-emission coal-fired electric power plants are the breakthrough technologies that DOE wants to accelerate in anticipation of a national commitment to reduce greenhouse gas emissions and to limit the use of oil in transportation.

Because government is not especially good at predicting when and how new markets such as these will emerge, care is required in investing in opportunities to serve them. As noted earlier, over half of the nonproductive $9 billion identified in the NRC study was spent on synthetic fuels projects. At the time it made sense to hedge against the possibility of skyrocketing oil prices, and buying some insurance with government funding was justified. But for government to aim to develop a process or product that is meant to be ready for commercial adoption, as was the goal of the synthetic fuels program, is to take too ambitious a step before the real market is more clearly in sight. Instead, attention should focus on research that would accelerate a class of technologies without presuming to pick specific winning products for undeveloped markets. A recent NRC study of DOE’s hydrogen program, for example, set forth clear research goals intended to move the hydrogen economy forward in this way.

Not surprisingly, looking at the federal government’s current energy science and technology programs through the prism of the foregoing guidelines reveals both good and bad news. By far the best news is the $500 million increase proposed in the 2007 budget for basic energy sciences and biological and environmental research at DOE. If this level of funding can be sustained or increased over time, there is at least the potential to create the foundation of knowledge from which future innovations in energy technology are most likely to emerge. Ensuring that these resources are targeted on a diversity of ideas and passionate researchers will be essential to this result. Because many such ideas and people are to be found in the nation’s great research universities and technology companies, DOE must reach beyond its own laboratory complex to realize the full value of this new funding. Encouragingly, the national laboratory share of the relevant science budgets seems to have drifted lower over the past five years, from around 65% of the total to about 60%.

Also in the good news column is that DOE’s applied research programs are getting more sophisticated in identifying and removing specific barriers to private-sector innovation. This observation is based on several studies of the DOE program conducted by the NRC. A review of DOE’s Industrial Technologies Program, for example, concluded that the program “has evolved over time into a well-managed and effective program….[It] significantly leverages its resources through a large and growing number of partnerships with industry, industry associations, and academic institutions.” Similarly, DOE’s FutureGen advanced electric power plant program and its research support for building the next generation of nuclear plants are being conducted in close cooperation with private-sector actors who are committed to innovation in the use of coal and nuclear power. Of course, DOE can probably do more to increase the ratio of energy policy benefits to government cost, but its program managers seem to be looking in the right places.

Unfortunately, however, the appropriations process tends to divert funding from the applied research strategies that are most likely to pay dividends. A major reason is that the targeted removal of obstacles to private-sector innovation is fairly dull work. It is tempting to embrace programs that will “solve” the energy problem by creating a hydrogen economy or a zero-emissions power plant or plug-in hybrid vehicles fueled with ethanol made from agricultural wastes. Although government-sponsored research is undoubtedly justified to position the private sector to move as the market prospects improve, it is unwise to invest in programs that have the unrealistic aim of developing products that would serve markets that do not yet exist. The risk, as noted earlier, is that such programs become too ambitious and so crowd out less glamorous but more beneficial research.

Far and away the most serious shortfall, however, is that current policy has its incentive priorities backwards. For example, the Energy Policy Act of 2005 creates $2.7 billion of production tax credits for renewable energy and an equally generous package of tax credits and loan guarantees for new nuclear and clean coal plants. Regulation gets some attention in the legislation, notably additional appliance standards, but the pace and degree of required efficiency improvement have yet to be specified. Similarly, although changes in CAFE standards are in the works, the reported targets are at best modest. And federal regulation seems less ambitious than actions being taken by a number of states.

But the missing link is to reward the private sector directly for meeting energy policy goals; that is, to put a price on carbon production and oil consumption. Although this is not the only policy that should be adopted, it enhances all the rest by focusing innovation on a specific outcome. By creating a market for public goods, this policy highlights the obstacles to innovation that government can help overcome, motivates basic researchers in a variety of disciplines to apply their knowledge to important problems, and greatly mitigates the danger of anticipating markets that may never materialize. As long as this policy tool stays on the shelf, the nation’s longstanding desire to use energy policy to stimulate technological innovation will remain unfulfilled.

Containing the fire

Now that American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer by Kai Bird and Martin J. Sherwin has won the National Book Critics Circle Award and the Pulitzer Prize, it is hardly necessary to say that this is a thoroughly researched, compellingly written, and extraordinarily perceptive book about one of the most gifted, successful, abused, and discussed scientists of the 20th century. And the story is about much more than what happened to Oppenheimer. It wrestles with the fundamental question of what role scientists should play in the making of public policy.

Other good books have been written about Oppenheimer, but this is the definitive story. It benefits from all that has been written previously, as well as from extensive personal interviews and the emergence of information that became available after the publication of earlier books. It touches on Oppenheimer’s early education at the Ethical Culture School in New York City, his psychologically troubled 20s, his involvement in progressive political activism in the 1930s, his brilliant achievement with the Manhattan Project, his evolution into a major policy figure in Washington after the war, and the ruthless attack on his character conducted by the Atomic Energy Commission. Throughout, the authors pay special attention to Oppenheimer’s view of social responsibility and ethical behavior.

Although it is never possible to get to the bottom of what makes a person tick, anyone who has considered Oppenheimer’s life knows that the task is particularly difficult for someone whose brilliance touched on psychology and literature as well as science. Nevertheless, Bird and Sherwin do not avoid the challenge. Rather than ducking behind easy labels such as “enigma” or “man of contradictions,” they make perceptive and probing attempts to understand the processes by which Oppenheimer made the key decisions in his life. Of course, not all of Oppenheimer’s decisions would be considered wise when he made them, and many that appeared wise turned out not to have the desired or expected effect. Bird and Sherwin try to elucidate the logic behind all of Oppenheimer’s major decisions and to distill the lessons from his insights and his errors.

Oppenheimer’s story is a seminal case history for science policy. Before World War II, scientists had virtually no role in national policy debates. But that changed quickly and dramatically with the war. Mathematicians who could break codes became a valuable asset, and radar revolutionized war fighting. However, the pivotal event was the recognition that physicists were acquiring the knowledge necessary to develop a bomb of unimaginable power. The possession of such a weapon could change not only the course of the war but also the dynamics of foreign policy for the foreseeable future. Suddenly, the fate of the nation and the world hinged on the work of a band of lab rats obsessed with the functioning of particles too small to see. Even more remarkably, the military standing of the United States was in the hands of a fragile-looking, soft-voiced left-winger who liked to quote Buddhist scripture.

The stunning success of the Manhattan Project cast a new light on scientists. These were heady days in which many scientists suddenly found themselves transported from the peaceful halls of academe to the bustling corridors of power. Scientists, particularly the physicists, understood that their newfound power and influence brought with them new responsibility. Many saw the bomb as their bomb, and they wanted a voice in how it would be used. But most scientists were outsiders to policymaking, and their inclination was to try to influence government from the outside with petitions and other entreaties.

Oppenheimer had a different view. As director of the Manhattan Project, he had more direct access to the nation’s leaders. He didn’t have to nail his petitions to the door; he had the option of walking through the door and participating in the councils of government. Activist outsiders such as the Hungarian physicist Leo Szilard were skeptical of this strategy. They worried that one sacrificed his integrity when he entered the political world of compromise. Whether because he craved the power that came with being an insider or calculated that he would be more effective on the inside, Oppenheimer decided that he would influence government decisions as a participant rather than a protestor.

Oppenheimer understood that decisions about how to use the bomb would not be his alone, but he sacrificed his freedom to speak his mind publicly in order to participate in the decisionmaking process. Of course, he did not know what the president and his military advisers knew. He accepted their rationale for dropping the bomb on Hiroshima and Nagasaki, but he later came to understand that this was not militarily necessary. Still, he decided to work within the system.

Oppenheimer famously said about the bomb itself that he had blood on his hands. The same could be said about his willingness to be the insider. He agreed to share responsibility for collective political decisions and thus shared the responsibility for decisions with which he did not agree. His best opportunity to shape the future of nuclear politics occurred soon after the end of the war.

Niels Bohr had talked to Oppenheimer in 1944 about the need to institute international control of nuclear technology, and Isidor Rabi reinforced this view in talks with Oppenheimer in late 1945. In January 1946, Oppenheimer learned that the world’s leaders were discussing the creation of a United Nations (UN) Atomic Energy Commission. President Truman appointed a committee chaired by Dean Acheson to draft a proposal for international control of nuclear weapons. Oppenheimer was named to a board of consultants chaired by former Tennessee Valley Authority chairman David Lilienthal. Oppenheimer argued forcefully for international control of military and civilian nuclear technology, and he convinced the committee. The resulting Acheson-Lilienthal report, written in large part by Oppenheimer, recommended the establishment of an Atomic Development Authority with far-reaching power extending from uranium mines to power plants to laboratories. Many of Truman’s advisors were skeptical about the plan because they worried that the Soviet Union could not be trusted. Bernard Baruch, who was given the job of selling the plan to the UN, was not enthusiastic. Nevertheless, he asked Oppenheimer to serve as his scientific advisor. Unhappy with Baruch’s ideas about how the plan should be altered, Oppenheimer did not take the job. Baruch went on to revise the proposal in ways that made it completely unacceptable to the Soviets, and it was rejected. Some of Oppenheimer’s friends criticized his decision not to join Baruch, because they thought Oppenheimer might have been able to change Baruch’s views. We cannot know exactly how this experience affected Oppenheimer, but after this, he worked hard to be included as a participant in government councils. He became chairman of the Atomic Energy Commission’s (AEC’s) General Advisory Committee and continued his campaign, begun while still at Los Alamos, to block the development of the far more powerful fusion bomb and to argue for greater openness and international control of nuclear technology.

OPPENHEIMER DIDN’T HAVE TO NAIL HIS PETITIONS TO THE DOOR; HE HAD THE OPTION OF WALKING THROUGH THE DOOR AND PARTICIPATING IN THE COUNCILS OF GOVERNMENT.

Oppenheimer also learned that there are other risks in playing the political game. Lewis Strauss, his enemy on the AEC, became obsessed with his policy disagreements with Oppenheimer and devoted his considerable energy and cunning to destroying him. Strauss’s ruthless direction of the AEC hearing to review Oppenheimer’s security clearance led to the clearance being revoked and to Oppenheimer’s expulsion from the inner circle.

Some could derive from Oppenheimer’s fate the lesson that it is not wise to play with fire, but it would be a mistake to generalize from this one incident. Oppenheimer never said that he would have been better off remaining an outsider. He enjoyed his access to power, and he was right to recognize that one can have more leverage on the inside. There will always be people on the outside who can speak truth to power. Most of them will never have Oppenheimer’s option of entering the inner circle. But it is important for scientists to be part of that circle. They will not always win the day, and they should not always win the day. Government decisions must be guided by much more than scientific expertise.

When Oppenheimer learned in 1953 that he would have to face a hearing to determine, after 12 years of government service, whether he was a security risk, he had the option of simply resigning as a consultant to the AEC and avoiding the public ordeal of defending himself in what turned out to be a kangaroo court. Indeed, Einstein told him that the attack was so outrageous that he should simply resign. He could not understand why Oppenheimer would not simply turn his back on a country that would insult him so profoundly after all that he had contributed to it. But Oppenheimer was too loyal to his country to do that. Just as he continued to work within the government even though it did not take his advice, he decided that it was his duty to endure the hearing, to respect the processes of government, and to be as forthright as possible about his behavior. Tragically, he was crushed in the process, because his enemies had less respect than he for the rules and values of democratic government. The irony is that the man who was criticized by many of his colleagues for being too much of a pragmatist in working with government structures lost his last political battle because he acted the naïve idealist in a play written by the cynical power brokers.


Kevin Finneran is editor-in-chief of Issues in Science and Technology.

The Myth of Energy Insecurity

The current national debate on energy policy is held together by the proposition that increasing reliance on foreign oil is a national security threat that requires urgent action. Only the character of the needed action is in dispute. Some call for the development of renewable energy sources and conservation, whereas others want increased drilling on public lands and in the Alaskan National Wildlife Refuge. That the contending parties agree on the problem might seem a basis for optimism. Unfortunately, they are united only in being mistaken.

The reality is that increasing oil imports do not pose a threat to long-term U.S. security. The intense concern with oil imports reflects a view of markets that has been rendered obsolete by globalization. Surging prices are driven not by malevolent forces but by largely positive developments in the world economy. Furthermore, oil price increases have the power to succeed where policy and rhetoric have failed, creating powerful incentives for overdue investments that have the long-term potential to increase the productive efficiency of firms, lower costs for consumers, and limit the adverse impacts of global climate change.

Consensus on the security threat rests on the dubious proposition that oil price changes (either sustained increases or sudden shocks) have serious impacts on the overall economy and aggregate welfare. Explanations of how disruptive price movements occur commonly rest on a very basic analysis (taught in introductory college economics courses, or Econ 101) that holds technology constant while growth in demand outpaces supply. Currently, demand is growing because of multiple factors, including the dramatic economic emergence of China and India and their heavy thirst for oil, as well as continued growth in U.S. oil consumption. Under these conditions, a simple application of Econ 101 theory would indicate that oil exporters have the capacity to cripple the economies of import-reliant countries by sharply reducing supply and driving up prices. In the real world, however, there is much more to the story.

One problem with the Econ 101 argument is that it requires oil producers to behave in ways that would be as hostile to their interests as they would be to those of consumers. As a group, oil producers are concerned about prices not only in the present but also in the future. It does a producer little good to sell at a very high price today if the effect is to provide customers with an incentive to develop substitutes for use tomorrow. Producers who seek to maximize long-term revenue will want to hold oil prices steady at the highest level that does not induce substantial investment in substitutes. From the producers’ standpoint, a particularly strong motivation exists to dissuade research investments by customers with an advanced ability to develop alternatives—the United States, for example.

The conjecture that leading oil producers care at least as much about future profits as they do about present ones is borne out in both word and deed. Adel al-Jubeir, foreign policy advisor to Saudi Crown Prince Abdullah, offered this summary to the Wall Street Journal in 2004: “We’ve got almost 30% of the world’s oil. For us, the objective is to assure that oil remains an economically competitive source of energy. Oil prices that are too high reduce demand growth for oil and encourage the development of alternative energy sources.” Saudi actions in response to the recent surge in oil prices lend credence to this claim: According to the U.S. Energy Information Administration, Saudi Arabia’s total oil production increased dramatically from 8.5 million barrels per day in 2002 to 10.9 million barrels per day in the first half of 2005. Explaining why the Saudis have chosen to “help” the United States and other developed economies by boosting production does not require resorting to conspiracy theories. It only requires understanding Saudi self-interest.

It is a serious mischaracterization to portray oil-exporting countries as behaving in ways that are systematically or consistently hostile to the United States. According to the most recent data from the Energy Information Administration, the top 10 oil exporters to the United States are (in order, excluding the Virgin Islands) Canada, Mexico, Venezuela, Saudi Arabia, Nigeria, Angola, Iraq, Algeria, the United Kingdom, and Ecuador. Iraq, Nigeria, and Venezuela are sources of concern to the international community for different reasons, but the basic point is that the 10 countries look more like a random draw from the United Nations than a lineup of either U.S. antagonists or failing states.

Of course, like all commodities, oil is traded on global markets. U.S. import prices are determined by world aggregate output and demand, not simply by the output of countries that supply the United States. It is possible, perhaps even probable, that new leadership in one or more of the major oil-exporting countries at some point might be willing to put ideological objectives ahead of economic ones—acting “irrationally” from the standpoint of a profit-maximizing behavioral model. The primary economic weapon at the disposal of a rogue oil-exporting country acting in such a manner would be to sharply reduce exports in an effort to destabilize prices. But in doing so, the rogue state would drive prices marginally higher, punishing itself with reduced oil revenues overall while benefiting other oil producers and inducing consumers to change their behaviors. In short, the rogue would suffer, and users would substitute.

Petro-alarmism focused on the Middle East often takes a different angle, emphasizing the concentration of oil reserves and spare production capacity in Saudi Arabia in particular. Because shifts in prices are determined on a global scale, some assert that the market power of countries in the Middle East is greater than their total output numbers would suggest, given that they control as much as 90% of the world’s spare capacity. Saudi Arabia alone is believed to possess as much as 30% of the world’s proven oil reserves.

There are two problems with this argument. First, reserve numbers are themselves a function of price levels. At current prices, Canada’s tar sands make our northern neighbor a strong second in oil reserves. Second, and more to the point, reserves are only useful as a strategic weapon in pushing prices down, because they offer the potential of increased output. As just noted, only by withholding output can producers push prices higher. In that respect, Saudi Arabia is no different from other producers in its ability to affect prices unilaterally by restricting production and in so doing reducing its own revenues to the benefit of other producers. As a strategic instrument of aggression, spare production capacity and high levels of oil reserves are thoroughly underwhelming.

For those not persuaded by theory, a look at the historical record is instructive. Between 1981 and 1999, an Islamic fundamentalist regime consolidated power in Iran, terrorists killed 241 U.S. service members in the Beirut barracks bombing, al Qaeda staged multiple successful attacks on U.S. interests, including the first attack on the World Trade Center, and a Palestinian Intifada raged in the West Bank and Gaza. Yet oil prices (adjusted for inflation) trended downward throughout the period. The price fluctuations that did occur were by no reasonable measure greater in magnitude than the fluctuations in other commodities during that interval. Indeed, if anything, the numbers suggest that the price of oil is less volatile than that of other globally traded commodities.

Consensus on the security threat rests on the dubious proposition that oil price changes have serious impacts on the overall economy and aggregate welfare.

The lengthy period of decline in real oil prices had precisely the expected impact on the magnitude of U.S. energy R&D: It declined sharply. For example, R&D spending at the Department of Energy declined in real terms from a peak of $6 billion (in 2000 dollars) in 1978 to $1.9 billion in 2005. Indeed, investment in energy as a fraction of total R&D in the United States was by 2005 well below what it had been before President Carter proclaimed in 1977 a “moral equivalent of war” against energy dependency. In the particularly important area of automotive innovation, the substantial technological advances that occurred during these two decades were increasingly directed not at improving fuel efficiency, but rather at increasing the size and performance of vehicles, while keeping fuel efficiency roughly constant. With gasoline at $1.20 a gallon and total gasoline spending comprising less than 2% of a typical household’s budget during the past two decades, most U.S. consumers showed precious little concern about automotive energy efficiency when buying vehicles. Early-1980s visions of the average U.S. automobile ultimately achieving 100 miles per gallon yielded to a late-1990s reality in which sales of low-mileage sport utility vehicles surged, efficiency standards were stagnant, and fuel economy did not improve. Consumption of oil per dollar of gross domestic product grew to be 40% higher in the United States than in Germany and France, where, admittedly, prices are not only much higher (largely because of higher taxes) but travel distances are generally shorter.

The one prominent example of coordinated action by producers to control prices to the detriment of consumers was the 1973 oil embargo. Yet a host of facts undermine the claims that the oil price shock of 1973 caused the recession of 1973–1974. Oil as the single variable explanation is inconsistent with the fact that the U.S. economy rebounded in 1975, even as oil prices continued to rise. Policy decisions—in particular, monetary policy and Nixon-initiated price controls that spanned the period from 1971 to 1979—contributed significantly to the onset of the 1973–1974 recession. During this period, the economies of Europe and Japan, which were also hit hard by the embargo-induced price increases but had no price controls, did much better than the United States.

Although it is possible to construct a macroeconomic model in which oil shocks do cause recessions of the magnitude observed in 1973–1974, most models in the literature predict a substantially smaller effect. Generally accepted models suggest that a 100% increase in oil prices should lead to a 1% drop in aggregate output. Even this far-from-cataclysmic impact is likely an overestimate. In the 24 months before the spring of 2006, the U.S. economy was subjected to an unprecedented surge in the price of oil. Still, gasoline prices remained lower in real terms than they had been in 1981. Tight world oil supplies were further strained by the war in Iraq and by Hurricane Katrina in 2005. Yet U.S. economic growth has continued on its upward trend, almost unaffected. Indeed, the increase in the price of oil may have rescued the economy from possible deflation, which was a concern three years ago. In sum, the evidence that the macroeconomy is vulnerable to oil shocks is not nearly solid enough to support the designation of “energy insecurity” as a national security concern.
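To make the scale of that rule of thumb concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 100%-to-1% relationship comes from the models cited above; the starting and ending oil prices are purely illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope sketch of the rule of thumb cited above: a 100% increase
# in oil prices maps to roughly a 1% drop in aggregate output. The price path
# below is invented for illustration; only the sensitivity comes from the text.

OUTPUT_SENSITIVITY = -0.01  # fractional output change per 100% oil price increase

def implied_output_change(old_price: float, new_price: float) -> float:
    """Linear approximation of the output effect of an oil price move."""
    price_change = (new_price - old_price) / old_price  # 1.0 means a 100% increase
    return OUTPUT_SENSITIVITY * price_change

# Hypothetical example: oil rising from $30 to $70 per barrel (about +133%).
change = implied_output_change(30.0, 70.0)
print(f"Implied change in aggregate output: {change:.2%}")  # roughly -1.3%
```

Even under this simple linear approximation, a price surge of the size seen in the mid-2000s implies an output loss on the order of one percentage point, which is the sense in which the impact is "far from cataclysmic."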

The effects of globalization

As a general rule, prices are neither weapons of retribution nor harbingers of doom. They are signals that should convey information and guide choice, both in the market and in the policy system. When a price changes sharply, it is natural for a consumer to try to determine what signal is being sent and to adapt behavior accordingly. Uncertainty can lead to consumer anxiety, particularly when political leaders magnify the significance of events.

Oil price increases have the power to succeed where policy and rhetoric have failed, creating powerful incentives for overdue investments that have the long-term potential to increase the productive efficiency of firms, lower costs for consumers, and limit the adverse impacts of global climate change.

Current increases in the price of oil mostly reflect broader changes in the world economy that are driving sustained growth in oil demand. Years of low prices dulled incentives to use energy efficiently and develop new energy sources. Consequently, the capacity to produce, transport, and refine oil is now strained on a global scale. After decades on the sidelines, the world’s two most populous countries and a number of other developing countries are surging economically. Hundreds of millions of new entrants to the global middle class are seeking automobiles and other energy-consuming amenities. Although the manufacturing intensity of the U.S. economy has declined significantly during the past 20 years, manufacturing in China has grown dramatically. China accounted for 40% of the growth in world oil demand during the past four years, recently surpassing Japan as the world’s number-two oil consumer.

From the perspective of global human welfare, more people’s lives have improved more quickly in the past quarter-century than at any time in human history. Fortunately, that trend is not likely to be reversed any time soon. As a consequence, although demand growth may slow, the current level of demand for oil and other natural resources will decline only as the efficiency of energy use increases.

Despite the relentless media attention on rising gasoline prices and the political fallout, the impact of rising prices on consumers has been minimal. From 1980 to 2005, the share of consumer spending on energy actually dropped from 8% to 6%; the 2006 numbers will be higher, but certainly not high enough to signal a consumer calamity. The most recently published data from the Bureau of Labor Statistics indicate that the average household spends $1,333 on gasoline, or 2.6% of income, which is about 1/10 the amount spent on housing. Assuming comparable demand elasticities, an increase of 1% in average housing costs thus has the same impact on household disposable income as a 10% increase in spending on gasoline. If the primary policy concern is that consumers with stagnant incomes will be hurt by inflation, a Dutch businessman buying an apartment in New York poses a far greater threat than does a Beijing resident buying gasoline to fuel his first automobile. Yet there are few calls for a national real estate policy.
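The budget-share arithmetic behind that housing-versus-gasoline comparison can be spelled out directly. The sketch below is a minimal illustration: the $1,333 gasoline figure is taken from the text, while the housing figure is simply inferred from the stated 10-to-1 ratio rather than drawn from the Bureau of Labor Statistics data.

```python
# Sketch of the budget-share comparison in the text. Because housing absorbs
# roughly ten times as much household spending as gasoline, a 1% increase in
# housing costs and a 10% increase in gasoline costs take a similar dollar bite.
# The housing number is inferred from the stated 10:1 ratio, not reported data.

gasoline_spending = 1_333                   # annual household gasoline spending (from the text)
housing_spending = 10 * gasoline_spending   # implied by "about 1/10 the amount spent on housing"

housing_hit = 0.01 * housing_spending       # dollar cost of a 1% rise in housing costs
gasoline_hit = 0.10 * gasoline_spending     # dollar cost of a 10% rise in gasoline spending

print(f"1% housing increase:   ${housing_hit:,.0f} per year")
print(f"10% gasoline increase: ${gasoline_hit:,.0f} per year")
# Both come to roughly $133, which is why the two changes hit household budgets similarly.
```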

Specific subpopulations are, of course, particularly vulnerable. A 15-year-old but still relevant study by Massachusetts Institute of Technology economist James Poterba suggests that about 10% of U.S. households spend more than 10% of income on gasoline. These vulnerable households tend to be low-income residents of rural areas. On the other side of the equation, stockholders and executives in U.S. energy companies gain substantially when oil prices go up. It is evident that such widely divergent distributional impacts constitute a significant short-term political and policy challenge. However, a short-term political challenge is not the same as a long-term security threat. Although there is ample reason to expect political leaders to be concerned about rising gas prices, from an analytical standpoint, it remains the case that energy insecurity is a myth.

If the dangers posed by increases in imports are mythical, why are they so widely believed to be real? In part, this is because no organized interest in the policy debate has an incentive to challenge the myth of energy insecurity. For the domestic energy industry (including ethanol producers), alleged threats posed by dependence on foreign oil lend support for an assortment of wealth transfers in the name of the “national interest.” For environmentalists, the myth provides a rationale for championing investment in renewables. Military hawks are drawn to the notion of energy insecurity because it offers a rationale for additional investments in weapons and personnel. Jihadists and anti-U.S. globalization protesters embrace the energy insecurity myth because it offers a clear and persuasive explanation for their belief in U.S. economic imperialism.

This is not to say that arguments made to support claims of energy insecurity are entirely without merit. There is no doubt, for example, that the market for oil today is far more responsive to market fundamentals than it was in the early 1980s, the last time gasoline prices were at $3 per gallon (in real terms). Twenty years ago, the spare production capacity of the members of the Organization of Petroleum Exporting Countries (OPEC) was 15 million barrels per day, or about a quarter of global demand, reflecting the organization’s success in curbing members’ output to boost prices. Today, spare capacity is down to less than 2 million barrels per day, or about 2% of global demand. This means that the power of OPEC to reduce prices has diminished; these alleged foes are less able than they have been in the past to keep oil prices low. Of course, at the same time, the strategic oil reserves of the countries of the Organization for Economic Cooperation and Development have grown to more than 1 billion barrels. Oil-consuming countries are in a better position today than in the past to manage the impacts of serious supply disruptions without OPEC assistance. Additionally, technological advance has undoubtedly increased the adaptive capacity of the economy, as evidenced by the lack of an observed macroeconomic impact attributable to the current price upsurge.

A better rationale for being concerned about increasing oil prices is the mounting evidence that resource wealth—and, by implication, the increase of that wealth through higher resource prices—undermines the political development of resource-rich countries. Casual observers have long noted the apparent irony that places rich in natural resources are frequently poor in everything else. It is now apparent that this irony is actually a consequence of predictable distortions of microeconomic incentives that systematically undermine political and economic development. The so-called “curse” of oil is by now a well-established empirical regularity. Of the few countries that appear to have at least partially escaped the curse, most have done so by virtue of small populations; the elites that control the oil wealth make up a large enough share of total population that some degree of equity and stability appears to be achieved. Elsewhere, particularly in populous countries such as Indonesia, Iraq, Iran, Nigeria, and Russia, the corrosive effect of resource wealth on political development is evident. Because behavioral distortions increase when the relative price of the resource increases, it follows that oil prices and democratic change will tend to move in opposite directions.

That the curse of oil is real is not in debate. What is debatable is its security implications. Surely a responsible policy-maker would not want to fill an adversary’s treasury with petrodollars if it was possible to acquire resources from another supplier. This approach was taken in the faceoff with Saddam Hussein after the first Gulf War. Sanctions probably undermined the ability of the Iraqi regime to maintain the pace of development of some weapons programs, including weapons of mass destruction. Yet by almost any other measure, sanctions were a catastrophic failure. Amending the sanctions, via the United Nations’ oil-for-food program, to reduce their dramatic humanitarian costs resulted in corruption and mismanagement on a grand scale. The lesson learned was that coalitions of ostensibly well-intentioned countries seeking to enforce sanctions are little more effective than coalitions of apparently menacing countries seeking to enforce embargoes. In either case, enormous incentives to cheat undermine coordination and fuel corruption.

Another option for dealing with the curse of oil problem is to challenge the long-term economic viability of oil as a commodity by increasing investment in oil substitutes, preferably derived from an abundant and widely dispersed natural resource that is not itself subject to a future curse. This is a laudable goal, but the question is whether it can be accomplished, given the multiple obstacles that exist, including the fact that powerful economic and political interests in the United States benefit from high oil prices. By this path, we arrive at the capping irony that underlies the myth of energy insecurity: Rather than signaling doom, higher oil prices actually signal hope.

The panacea of high oil prices

Although increasing oil prices do not constitute a legitimate national security concern for the United States, they do create severe distributional inequities at home and undermine the development of democracy abroad. Yet distributional and political downsides notwithstanding, it is almost certainly the case that the benefits of higher prices actually outweigh the costs. In the long run, low oil prices pose a greater threat to national security than high prices.

Oil is a nonrenewable resource, so a shift to other energy sources must occur sometime. The question is when. Low oil prices encourage the deferral of needed investment. When oil prices collapsed in the mid-1980s, so did the market incentives and political will needed to invest in increasing energy efficiency. For a generation, thoughtful commentators recommended that the federal government substantially increase gas taxes, progressively raise vehicle mileage standards, and increase investment in energy efficiency. Under multiple administrations and different configurations of political leadership, the will to effect those changes was absent. Consequently, billions of dollars that could have gone to the U.S. Treasury as a means of changing the structure of energy consumption in the United States are now going to U.S. oil suppliers.

Deferred investments in energy efficiency pose a threat to U.S. national security for one paramount reason: potentially catastrophic climate change. The very real benefit of today’s increase in oil prices is that it may compel investments reducing the probability that tomorrow (that is, in 50 years) Nebraska will be parched, Manhattan will be under water, and coastal areas of the South will be depopulated because of increasingly intense and frequent hurricanes. Of course, the impacts of climate change are complex, and in this area just as in others, adaptations will occur and change will create winners as well as losers. Assigning probabilities to various scenarios is difficult. However, the worst-case scenarios of adverse impacts from climate change are much more severe, and not substantially less likely, than the worst-case scenarios from high energy prices. This fact alone should lead us to at least consider welcoming expensive oil as an antidote to perennial short-sightedness.

The bottom line is that in an open society with a market economy, only prices have the brute power to effect change on the scale required to address real and significant challenges to economic well-being. Today, prices are approaching the levels that 25 years ago induced serious investments in energy efficiency. Long-overdue behavior changes may now occur. And none too soon.

Archives – Summer 2006

Transport II

Transport II shows a theoretical simulation of the flow pattern for electrons traveling over a nanoscale landscape. The total area seen here corresponds in size to a typical bacterium and represents the tracks of about 200,000 individual electrons. Each electron, treated as a classical point particle, was launched from the center and given a unique starting angle. The angles were evenly distributed over 360 degrees. Each track added grayscale density to every pixel it passed over; thus, the darkest areas depict domains where many electrons traveled.

Eric Johnson Heller lives in Cambridge, Massachusetts. He is a member of the physics and chemistry faculties of Harvard University.

Power Play: A More Reliable U.S. Electric System

The United States ranks toward the bottom among developed nations in terms of the reliability of its electricity service. Catastrophic events, such as the August 14, 2003, blackout that put 50 million people in the dark, are well known, but they are only the most visible evidence of a problem that is pervasive in the U.S. electric system. Frequent small outages are endemic throughout the country. Although these might seem to be relatively minor inconveniences to homeowners, they can create serious problems for businesses. Other countries show that much greater reliability is achievable, and the U.S. nuclear power industry has demonstrated over the past three decades how vast improvements can be made in the United States.

The average U.S. customer loses power for 214 minutes per year. That compares to 70 in the United Kingdom, 53 in France, 29 in the Netherlands, 6 in Japan, and 2 minutes per year in Singapore. These outage durations tell only part of the story. In Japan, the average customer loses power once every 20 years. In the United States, it is once every 9 months, excluding hurricanes and other strong storms.
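The two figures quoted here—average minutes of lost power per customer per year and how often the average customer loses power—correspond to the industry’s standard interruption duration and frequency indices, and both can be computed directly from a utility’s outage log. The sketch below uses an invented outage list and customer count purely for illustration.

```python
# Sketch of how the two reliability figures cited above are computed from outage
# records: average interruption minutes per customer per year (duration) and
# average interruptions per customer per year (frequency). The outage records
# and customer count below are invented for illustration.

outages = [
    # (customers_affected, minutes_without_power)
    (120_000, 95),
    (8_500, 40),
    (300_000, 180),
]
total_customers = 1_000_000  # customers served by the hypothetical utility

customer_minutes = sum(n * minutes for n, minutes in outages)
customer_interruptions = sum(n for n, _ in outages)

avg_minutes = customer_minutes / total_customers               # duration index
avg_interruptions = customer_interruptions / total_customers   # frequency index

print(f"Average outage duration:  {avg_minutes:.1f} minutes per customer per year")
print(f"Average outage frequency: {avg_interruptions:.2f} interruptions per customer per year")
```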

Despite decades of sober technical reports written by investigation teams in the aftermath of blackouts, the frequency of electric power outages in the United States is no less today than it was a quarter-century ago. Whether measured in terms of city-sized blackouts or smaller events, the statistics show that reliability has not improved. Indeed, if the data show any trend in the past few years, it is toward lower reliability.

The causes of outages in the United States show there is considerable room for improvement. If outages from major storms are excluded, the causes of each hour of outage include equipment failure (24 minutes), as in the 1965 Northeast blackout; untrimmed trees near power lines (6 minutes); and mistakes by power company personnel (4 minutes), as in the 1977 New York blackout and the 2005 Los Angeles outage. This history of blackouts creates ample public demand to increase reliability, opening a window of opportunity for the industry.

The frequency of electric power outages in the United States is no less today than it was a quarter-century ago.

Congress made an effort to boost reliability with a provision in the Energy Policy Act of 2005 that calls for the creation of an Electric Reliability Organization (ERO), but the details of the plan make it unlikely that the new ERO will be capable of doing all that is needed. It is more likely that it will merely lock the status quo in place. The United States does not have to look far to find a better model for enhancing reliability. U.S. nuclear power producers have developed an extremely effective mechanism for improving the performance of their entire industry, and at least some of the lessons from that effort can be applied to the power sector as a whole.

Effects of power outages

In a 2004 study, the Lawrence Berkeley National Laboratory (LBNL) estimated that the annual costs of U.S. power outages are at least $22 billion and may be as high as $135 billion. Most of the losses (72%) are borne by commercial customers, whereas industrial customers shoulder 26% of the loss and residential users only 2%. In the LBNL study, customer losses were estimated by consolidating a large number of independent utility outage-cost surveys. Interestingly, this work found that short interruptions lasting five minutes or less caused two-thirds of the economic losses. These short interruptions are probably the ones that can be most easily reduced by attention to reliability.

The rolling blackouts during 2001 in California provide concrete examples of what parts of our society are adversely affected by the loss of power. Only 40% of petroleum refineries in the state had any onsite power generation capability, and those affected by a blackout can require one to two weeks to resume operation. Spot shortages of gasoline occurred. At a large Internet retailer, one 20-minute outage deleted roughly 20,000 product orders and $500,000 in revenue when a backup power system failed during a rolling blackout. Some smaller companies have no backup power; the Wall Street Journal reported that Integrated Device Technology, a semiconductor manufacturer, estimated $50,000 in lost production and damaged products from a 2-hour blackout. Forbes wrote that SDL, a fiber-optic component maker, lost $3 million in product before it could purchase a backup system. Traffic lights were turned off in parts of San Francisco, causing fender-benders and gridlock for two hours. Apple and Hewlett-Packard engineers worked without desktop computers and crowded into offices with windows. Intel froze its California hiring temporarily, and Miller Brewing laid off 260 workers in Southern California for the duration of the rolling blackouts.

Where we’ve been

In 1962, as the scattered power systems in the eastern United States were about to be interconnected, 10 voluntary regional reliability councils were established to coordinate the planning and operation of generation and transmission facilities owned by their members. After the 1965 blackout, the U.S. Federal Power Commission recommended that a national reliability coordinating council be created, and in 1968 the North American Electric Reliability Council (NERC) was formed to coordinate the regional councils. One of NERC’s primary functions has been to develop voluntary reliability standards for the regional generation and transmission of power.

In January 1997, recognizing that the familiar landscape of rate-of-return regulation was about to be replaced by a competitive market for electricity, a NERC panel proposed federal legislation that would establish an electric reliability organization with power to establish and enforce mandatory standards. The U.S. Department of Energy endorsed that recommendation in 1998.

Seven years later, the Energy Policy Act of 2005 followed that recommendation, creating a new section of the Federal Power Act that gives the Federal Energy Regulatory Commission (FERC) responsibility for reliability and the authority to certify an ERO. On March 30, 2006, FERC issued its final rule establishing the criteria that an entity must satisfy to qualify to be the ERO, including the ability to develop and enforce reliability standards. The Commission intends to certify one such ERO, which may (upon FERC approval) delegate its enforcement responsibilities to regional entities.

If all this sounds a bit like NERC and the regional reliability councils (of which there are now eight), that is not a coincidence. Four days after the final rule was published, NERC filed an application seeking Commission certification as the ERO. NERC hopes to be certified in time to implement mandatory reliability standards early in 2007.

NERC is the only organization proposing to become the ERO. It has requested that FERC approve the existing NERC voluntary standards as the first mandatory reliability standards adopted under the new legislation.

The current program is administered through the eight regional councils. As an example, the regional council responsible for reliability in Florida directs confidential annual self-audits and performs or directs confidential triennial audits, spot audits, random checks, and investigations (the latter in response to a complaint or notice of a suspected violation). Monthly reports must be made on such items as transmission protection system misoperations. The entire ERO compliance audit program will itself be evaluated every three years by an outside group.

NERC proposes to continue its program of triennial reliability readiness audits (begun after the 2003 blackout). NERC wants these to “ensure that operators of the bulk electric system have the facilities, tools, processes, and procedures in place to operate reliably under future conditions.” The readiness reports, stripped of business-sensitive data, are to be made public. NERC also plans to compile and publish examples of excellent reliability practices noted during these audits.

The proposal states, “NERC’s budgeting and business plan development processes will be open and will extensively consider industry views. NERC’s independence in this area will be maintained by virtue of the board being the ultimate body to vote on and approve NERC’s and the regional entities’ budgets and business plans, prior to submission to the Commission for approval.”

This sounds promising, but the fact remains that independence is precarious for a body funded by the industry it is supposed to regulate, even with the caveat that FERC must approve the ERO’s submitted budget. The degree to which the ERO will be able to act to increase reliability depends on the seriousness with which reliability is taken by FERC and the ERO’s members, and on the balance they strike between profit and reliability expenditures in an often cutthroat competitive environment.

Nuclear success

On March 28, 1979, Reactor 2 at the Three Mile Island nuclear power plant suffered the meltdown of approximately half its core. “TMI shook the industry to its foundation, ending an age of innocence,” according to the chairman of the board of the Institute of Nuclear Power Operations (INPO), which was formed within the year.

INPO’s mission is “to promote the highest levels of safety and reliability—to promote excellence—in the operation of nuclear electric generating plants.”

The nuclear electric power industry has achieved major improvements in its reliability. At the time of the TMI accident, U.S. nuclear plants were online 58% of the time. By 2004, they were producing electricity 91% of the time.

INPO is a big part of the reason for the improvement in reliability. When the nuclear industry was rocked by TMI and seven years later by Chernobyl, U.S. nuclear industry executives feared that their plants would be closed. They agreed on a major effort to avoid another mishap. They were given a not-too-gentle push when the Nuclear Regulatory Commission (NRC) shut down reactors operated by the Tennessee Valley Authority, Philadelphia Electric, and other companies until operations and equipment were improved.

INPO’s board of directors is made up of 12 CEOs and presidents of power companies. As the institute states, “The industry’s recognition that all nuclear utilities are affected by the action of any one utility motivated its commitment to and support for INPO.” The commitment has also made the plants much more reliable—and profitable.

INPO’s regular evaluations of nuclear electric generators center on comparing plant performance to metrics that emphasize safety and reliability. These metrics include the percentage of time online, unplanned automatic interruptions, safety system performance, chemistry and fuel defects, industrial safety, and plant emissions. The metrics are developed jointly with the World Association of Nuclear Operators (WANO), and goals are set for each type of plant.

Not only do these metrics provide targets that can be incorporated into a plant manager’s compensation, they also allow the identification of early signs of performance decline in time to avoid service interruptions or mishaps. In exit meetings at the conclusion of plant audits, the sustainability of plant performance on the metrics is addressed explicitly. Members are provided with comparisons of their plants’ performance with metrics for the industry as a whole. The insurance industry has linked premiums to scores on INPO performance metrics.

INPO makes a distinction between regulations promulgated by bodies such as the NRC and performance objectives measured by metrics. The institute has found that reliability excellence can be achieved by a combination of the two. Industrywide performance objectives are difficult to meet every year, but provide goals and measurable outcomes; the NRC regulations provide a minimum floor for operations.

Despite competitive pressures in the industry, easy access to plant and equipment performance and operating experience is available on INPO’s secure Web site for members. As one of the conditions for institute membership, organizations agree to “share information, practices and experiences to assist each other in maintaining high levels of operational safety and reliability.” They agree to assist each other in benchmarking best industry practices.

INPO has recognized that the electric power–generating industry may not have all the answers to safe and reliable operations. One of its stated principles is to “use expertise and experience from outside the U.S. nuclear industry.” INPO has formed an advisory council with experts on aviation, insurance, finance, human performance, and organizational effectiveness drawn from the commercial world and universities. The council reviews institute activities and advises the board on objectives and on methods to meet them.

INPO involves equipment manufacturers and plant designers, who make up a supplier participant advisory committee. Through WANO, INPO brings experience from many countries to bear when performing its plant audits.

To ensure that recent industry experience is embedded within the institute, plant operators loan personnel to INPO and use INPO personnel on reverse loans.

INPO does not rely on its audit program alone. Special assistance is given to any member who requests it or whose metrics are trending in a poor direction. These between-evaluation programs help to reverse undesirable trends and are prioritized to devote more resources to plants whose metrics show that help is needed. Assistance teams include peers from other utilities who have handled similar problems well.

We see the key to INPO’s success as the agreement among all nuclear plant operators that one poorly performing plant presents a threat to the continuing operation of all nuclear operators. As the U.S. system becomes more interdependent, electric power producers using all fuels, not just uranium, are no longer masters of their own destiny. A shortage of generation in Akron can plunge New York into darkness.

The best large coal plants (1000 megawatts and above) operate 92% of the time (the same as the average nuclear plant), whereas the least reliable large coal plants operate less than 30% of the time. The average coal plant operates 60% of the time. Surely there is room for improvement. Although the future performance of the system is often dependent on the weakest link, the failure of a fossil fuel plant has nothing like the impact on public opinion that would result from a nuclear accident, so it is more difficult to command high levels of attention and concern from others across the industry. However, generator unavailability can still dramatically affect the grid. In the rolling blackouts that hit Texas in April 2006, roughly 20% of the generators in the state were unavailable because maintenance was being performed.

In its notice of proposed rulemaking for the ERO, FERC sought comment on which aspects of INPO’s programs would serve as useful models for the ERO and what lessons can be drawn from INPO’s complementary role with the NRC.

A third of the respondents felt strongly that FERC had no business even discussing the idea. One went so far as to state that FERC was exceeding the scope of its authority by suggesting the establishment of an organization that deals with safety (the respondent ignored the 2005 congressional mandate for reliability).

A majority of those who commented had positive things to say about the INPO model. One group of large users of electricity pointedly advised FERC, “The Commission needs to overcome the tendency of economic regulation to tolerate mediocre behavior.”

A common theme among supporters of the INPO model is that enforcement of compliance with reliability standards should be separated from the collaborative functions that an INPO-like organization would undertake. Several felt that such a separation would be feasible within the ERO: Audits for compliance have a very different purpose than audits for excellence.

The ERO will fail to improve reliability significantly unless generators, transmission and distribution owners, and equipment makers are convinced that they face large penalties for substandard performance.

The periodic site-visit assessment of performance was thought to be a key to the success of such an organization, along with the sharing of equipment failure and operational error and event data. However, most felt that performance ratings and reports should be kept confidential. Several organizations noted that the imposition of sanctions by the ERO would have a chilling effect on information sharing within the ERO. The rotation of personnel and senior management involvement were both felt to be important in a best-practices organization.

To summarize, the industry responses to FERC’s question about possible lessons for the future ERO from the INPO experience included some who felt that the status quo in reliability is fine, some who felt that the regional reliability councils should (in some undefined manner) act as best-practices organizations, and some who felt that national best-practices groups for various segments of the electric power industry are necessary, but that compliance with minimum reliability standards and the achievement of excellence in reliability are two very different functions that must be kept separate.

Enshrining the status quo

NERC and the regional reliability councils began life 40 years ago in an environment of public outrage after the 1965 blackout. Outrage returned after the 1977 New York blackout, particularly when it was revealed that the root cause was a utility practice that left a single critical operator without the tools and training to stop a fairly normal occurrence from snowballing. After the outcry following the 2003 blackout, NERC adopted some of the techniques pioneered by INPO, taking steps in the direction of becoming a best-practices organization. For example, it has performed “readiness audits” (planned to be triennial) of generation plants, transmission operators, and independent system operators. These audits have led to publicly posted examples of excellent practice, such as “The Salt River Project provides highly redundant and independent systems and power supplies at its control center that result in an extremely reliable and secure set of tools for its operators.”

The average coal plant operates 60% of the time. Surely there is room for improvement.

However, in proposing to become the ERO, NERC is morphing into a standards-setting and compliance organization, a role that is filled in other industries by various arms of the government. The ERO and the NERC regional councils will thus be funded by the companies that will have to meet the ERO standards. We can therefore expect the ERO standards to be set by industry consensus, since a two-thirds vote of NERC members is required for adoption. The standards will vary by region in response both to regional technical differences and to the different characters of the companies operating in each region.

Some facets of NERC’s proposal are admirable. The record of the past quarter-century has shown that NERC and the regional councils have helped to slow the slide in reliability. By making NERC’s reliability standards mandatory, the ERO should be more effective. However, we worry that because the NERC standards were regional industry-consensus standards, their stringency has been limited by the influence of members with substandard performance and that such influence could continue in the future.

The TMI and Chernobyl incidents convinced nuclear plant owners that they were in immediate danger of having their plants closed and losing billions of dollars unless they could convince a skeptical public and Congress that they could operate safely. The INPO experience showed them that tough standards and cooperative efforts could make their assets more profitable and valuable.

The owners of coal- and natural gas–fired generators and of transmission and distribution lines have no reason to fear that a mishap would shut their plants. They might be tempted by the notion that tough standards and cooperative efforts would make their assets more valuable. The outage statistics for fossil-fuel plants and for transmission and distribution indicate that significant improvements can be made and that utilities may get important insights from pooling data. But the most significant reason for optimism is that the grid is getting more tightly integrated every year, so that a problem at a distant generator can cause a cascading failure that blacks out millions a thousand miles away.

The proposed ERO triennial compliance audits are a good and necessary function, although they might be performed more frequently. The procedure for outside evaluation of the compliance audit process is an excellent idea. However, NERC’s proposed penalties ($1,000 to $200,000 for violations of its reliability standards) are low. The U.S. Environmental Protection Agency has levied fines of $25,000 per day for infractions, and total penalties have been as high as $30 million. The 2003 blackout’s cost was estimated at $6 billion, which is 30,000 times the largest ERO penalty. Although provisions are made in NERC’s ERO proposal for fine multipliers in egregious cases, the typical fine will probably be too low to change company behavior.

Having recognized that human and organizational performance is often the root cause of incidents, INPO evaluates the performance of personnel with exercises in high-fidelity simulators during biennial plant evaluations. The checklist for evaluation includes organizational effectiveness and performance improvement. Corporate support of operating plants is evaluated explicitly during plant audits. Members agree to certain organizational expectations, such as making the senior nuclear executive in the line organization accountable in an unambiguous way for safe and reliable plant operation.

In contrast, the proposed ERO blackout and disturbance response procedures state that during investigations, the focus will be on technical aspects. The guidance given to investigation writers is that the conclusions and recommendations section should address “from a technical perspective, what are the root causes of this blackout? What additional technical factors contributed to making the blackout possible?” No mention is made of human or organizational factors.

The ERO should modify its investigation guidelines to stress human factors and corporate support of operational personnel. The disastrous consequences of a wrongly set relay in 1965, an overloaded and underinformed human operator in 1977, a sequence of operator errors and inaction due to lack of data in 2003, and a wrongly cut wire in 2005 should tell us that reliability improvements do not rest on engineering alone but also on social and organizational science.

The cost of the unreliable U.S. electric system is demonstrated through buying decisions: One out of every six dollars spent on electric power generation and delivery equipment goes for emergency backups. Significant savings could be achieved by making the primary system more reliable. Improving the reliability of the U.S. electric power system to the levels achieved in Europe and Japan requires a more stringent approach than compliance with consensus standards.

The ERO as proposed is a necessary, but not sufficient, condition for improvement. After the restructuring of the electric power industry, it is difficult to convince a company to invest in reliability. The ERO will raise the bar modestly by requiring compliance with existing voluntary standards. But converting weak standards from voluntary to mandatory is not likely to lead to the reliability improvement that is needed to raise the United States to parity with its competitors abroad. To get started quickly, we agree with making the current standards mandatory. However, these initial standards should be reviewed critically by FERC to ensure that they significantly improve reliability. FERC should also require the ERO to create a mechanism that would review all the standards over the next three years and propose modifications. These changes should then be approved in a single vote to prevent underperforming companies from weakening selective provisions. FERC should also require the ERO to revisit standards on a three-year schedule to avoid freezing standards at today’s level.

An alternative (and foolhardy) course of action is to wait for the next large blackout to stimulate emergency congressional action that is likely to be hasty, ill-conceived, crude, and ultimately ineffective or counterproductive. This is what happened when industry failed to take responsible action to control vehicle emissions in the 1960s. Congress finally acted with the 1970 Clean Air Act, but in a way that was needlessly costly to the industry.

We recommend that FERC provide leadership by acting on the knowledge that in a global economy, lack of reliable power puts the United States at a competitive disadvantage. Likewise, states should recognize that reliable power may put them at an advantage as compared to their neighbors. Reliable power is a public good, no less than excellent highways.

The road to reliability

Americans need better information on reliability. Power generators in New Zealand provide such statistics on the Internet. In the United States, a Freedom of Information Act request is required in many states to acquire these vital data. FERC (and the states for intrastate companies) should mandate that reliability data be available on the Internet for everyone.

Providing reliability data is an example of a transparent and easily understood metric. INPO has found such metrics to be critical to leading the nuclear industry out of the swamp of mediocre reliability. NERC has displayed only a sporadic commitment to making data on failures available to the public or to industry. For example, the failure database is out of date by more than three years as this is being written. A timely public database of all major disturbances is essential, as are data shared among the industry on equipment and operational failures.

A number of the industry’s own comments indicate that the roles of a standards-compliance organization and a best-practices organization are incompatible. We agree. FERC should require the formation of nationwide function-specific best-practices organizations that are not a part of the ERO’s standards and compliance organization. These could be created within the ERO (in a function separate from compliance, just as INPO is separate from the NRC), or they could be part of an entirely separate organization. We favor the latter.

The best-practices organization should be responsible for continuing the readiness audits begun by NERC, because these are a collaborative function rather than a regulatory function. It should ensure that these site visits have as their purpose benchmarking each facility against the best metrics found in the industry.

The best-practices organization should follow INPO’s example of seeking advice from reliability experts in other industries as well as from its own staff experts. In addition, representatives from the technical staff of state public utility commissions and from the best non-U.S. utilities should be invited to participate, as should representatives from equipment manufacturers, ranging from those who make relays and transformers to those who design software.

In addition to sharing experience about effective maintenance and operational practices, the organization should also share experience and insights about new technologies, such as underground equipment, automated failure recovery systems, and instruments to monitor, display, and control the flow of power.

The ERO will fail to improve reliability significantly unless generators, transmission and distribution owners, and equipment makers are convinced that they face large penalties for substandard performance. In the current deregulated environment, generators battle for even a slight cost advantage over their competitors and are reluctant to contribute to best-practice lists. Thus, any best-practice activity will need a firm regulatory incentive to compel all parties to cooperate.

FERC was right in September 2005 to ask what lessons INPO can teach the power industry. The commission should not accept the view that “nuclear is different” and should not be content with the easy course of simply designating NERC as the ERO. As presently constituted, such an ERO can do only half the job. The nation needs two organizations: one to enforce standards and the other to promote best practices.

A Healthy Mind for a Healthy Population

Each year, more than 33 million U.S. residents receive health care for mental problems and/or for conditions resulting from the use of alcohol, illegal drugs, or prescription medications. The total comprises approximately 20% of working-age adults, a nearly identical portion of adolescents, and 6% of children. Millions more people need care but for various reasons do not receive treatment. For example, although more than 3 million people aged 12 or older received treatment in 2003 for alcohol or drug use, more than six times that number—9% of this age group—reported abusing or being physiologically dependent on alcohol, illicit drugs, prescription drugs, or a combination of these.

Mental problems and substance-use conditions (M/SU conditions, for the sake of convenience) frequently occur together. They also accompany a wide variety of general medical conditions, such as heart disease, cancer, and diabetes, and thereby increase risk of death. For example, approximately one in five patients hospitalized for a heart attack suffers from major depression, and such patients are roughly three times more likely to die from their heart problems than are patients without depression.

Even among people who receive treatment for their M/SU conditions, many often get care that is contrary to what science has shown to be appropriate. Clinicians’ departures from evidence-based practice guidelines have been documented for conditions as varied as attention-deficit hyperactivity disorder, anxiety disorders, conduct disorders in children, depression in adults and children, opioid dependence, use of illicit drugs, comorbid mental and substance-use illnesses, and schizophrenia. These deviations from standards of care can result in significant harm to patients.

Collectively, M/SU conditions rank as the nation’s leading cause of combined disability and death among women and the second highest among men. The conditions also impose great costs on the economy through increased workplace absenteeism, “presenteeism”(attending work with symptoms that impair performance), days of disability, and significant work failures and accidents. Among children, the conditions adversely affect educational achievement. In sum, M/SU conditions make large and costly demands on the nation.

Clearly, the United States has failed to recognize the magnitude of M/SU conditions and to deliver adequate health care to people in need. To help remedy matters, the Institute of Medicine in early 2006 issued a report detailing the distinctive features of mental and substance-use health care and offering a comprehensive agenda for improving such care.

The report builds on a previous pioneering IOM report, Crossing the Quality Chasm: A New Health System for the 21st Century, issued in 2001, that reviewed the nation’s general health care system, chronicled its shortcomings, and called for its fundamental redesign. The new report, Improving the Quality of Health Care for Mental and Substance-Use Conditions, finds that its predecessor’s conclusions are equally true for M/SU health care. Overall, the system often is ineffective, untimely, inefficient, inequitable, and not patient-centered. At times, it is even unsafe. As is true of general health care, M/SU health care requires fundamental redesign.

In redesigning the system, the guiding principle must be that mental illnesses, substance-use illnesses, and general illnesses are highly interrelated, especially with respect to chronic illness and injury. Improving care delivery and health outcomes for any one of the three depends on improving care delivery and outcomes for the others. A corollary principle is that health care for general, mental, and substance-use problems and illnesses must be delivered with an understanding of the inherent interactions between the mind/brain and the rest of the body.

Redesigning the health care system that tends to people with M/SU conditions will require concerted actions by a host of parties. Among the parties to be involved and the actions needed are the following:

Individual clinicians. Clinicians treating patients with M/SU conditions should support their decisionmaking abilities, as well as their preferences for treatment and recovery. In today’s climate, misinformation and stigma lead many clinicians to underestimate their patients’ abilities to make decisions, to help in planning their treatment, and to carry out a plan for recovery. Such underestimation effectively undermines a patient’s “self-efficacy”: the belief that he or she is capable of carrying out a course of action to reach a desired goal. Promoting self-efficacy is key for many patients, as these beliefs are excellent predictors of how well an individual will perform the day-to-day actions necessary to successfully manage and live with M/SU conditions such as depression, bipolar illness, and alcohol dependence, as well as illnesses such as diabetes, asthma, and HIV. Self-management activities include, for example, monitoring illness symptoms; using medications appropriately; practicing behaviors conducive to good health in such areas as nutrition, sleep, and exercise; employing stress reduction practices; communicating effectively with health care providers; and practicing health-related problem solving and decisionmaking.

Clinicians can support their patients’ decisionmaking abilities and preferences in a number of ways. These approaches include incorporating informed patient-centered decisionmaking throughout their practices (including active patient participation in the design and revision of treatment and recovery plans), and supporting informed family decisionmaking when children are being treated. Clinicians also can establish and maintain formal linkages with community resources to support patient illness self-management and recovery—an important step, because an increasing amount of M/SU health care is taking place within community settings.

Clinicians should avoid coercing patients into treatment whenever possible. Such restraint has taken added meaning as new mechanisms for pressuring or compelling individuals to undergo treatment have evolved, including coercion from the criminal justice and welfare systems, schools, and workplaces. When coercion is necessary and legally authorized, clinicians should make sure that the care they provide is patient-centered. In such cases, clinicians should ensure that patients and their caregivers understand the policies and practices used for determining dangerousness and decisionmaking capacity; use the best available comparative information on safety, effectiveness, and availability of care and providers to guide treatment decisions; and maximize patient decisionmaking and involvement in the selection of treatments and providers.

HEALTH PLANS AND GROUP PURCHASERS OF TREATMENT SERVICES CAN CREATE POWERFUL INCENTIVES FOR IMPROVING QUALITY.

Among other actions, clinicians should conduct age-appropriate screening of their patients for comorbid mental, substance-use, and general medical problems. They should increase their use of valid and reliable patient questionnaires or other patient-assessment instruments to systematically assess progress and outcomes of treatment, and then use these measures to continuously improve the quality of the care provided. They should routinely share (with the patient’s knowledge and consent) information on patients’ problems and pharmacologic and nonpharmacologic treatments with other providers treating the patients. They should establish clinically effective linkages with other providers of mental health and substance-use treatment for care coordination, and they should coordinate their services with those of other human-services and education agencies.

On a broader level, clinicians should become involved in committees and initiatives working to promote and develop the National Health Information Infrastructure. This is a public-private effort now under way to improve health care providers’ ability to obtain information quickly on a patient’s health and health care and to share this information in a timely manner with other providers caring for the patient. The system will encompass electronic health record systems with decision support for clinicians, a secure platform for exchanging patient information across health care settings, and data standards that will make shared information understandable to all users.

Health care delivery organizations. In general, organizations should carry out the same practices as recommended for individual clinicians. This includes supporting their patients’ decisionmaking abilities and self-efficacy beliefs, and developing formal policies that will foster the involvement of patients and their families in the design, administration, and delivery of treatment and recovery services. Such direct contact with individuals with M/SU diagnoses in a collegial, equal-status setting is one of the most powerful tools for reducing stigma and discrimination. Organizations also need to develop formal policies that will ensure patient protection in cases where care has been coerced.

Among other actions that mirror recommendations for individual clinicians, organizations should screen their patients for comorbid mental, substance-use, and general medical problems. This is especially important for providers of services to high-risk populations; such providers include child welfare agencies, criminal and juvenile justice agencies, and long-term care facilities for older adults. Organizations also should establish formal linkages internally and with other providers of M/SU treatment, in order to ensure that patients have ready access to a seamless web of care. It will not be sufficient merely to make referrals to other providers or to establish ad hoc informal arrangements.

CLINICIANS TREATING PATIENTS WITH MENTAL PROBLEMS AND SUBSTANCE-USE CONDITIONS SHOULD SUPPORT THEIR DECISIONMAKING ABILITIES, AS WELL AS THEIR PREFERENCES FOR TREATMENT AND RECOVERY.

In addition, organizations should increase their use of patient questionnaires or other reliable patient-assessment instruments to assess the progress and outcomes of the treatment they provide. Patients are increasingly recognized as valid judges of the quality of their health care. Not only can they provide direct feedback on the effectiveness of treatment; they also can report on their experiences with care delivery processes, such as the extent to which they were able to participate in decisions about their own care and to gain skill in the self-management of their illness. Physicians and organizations in general health care already are using patient questionnaires to measure treatment outcomes. For example, the VF-14 questionnaire on eyesight asks patients about the amount of difficulty they experience in pursuing usual daily activities, such as driving and reading fine print. Many insurers require that the results of the VF-14 be used and reported as part of claims payment. Such consumer surveys may be an even more appropriate and valuable source of data on the outcomes of M/SU health care. Although laboratory tests or other physical measures, such as blood glucose levels or blood pressure, can measure outcomes of general health care accurately and easily, fewer laboratory or other physical examination findings can measure whether mental illness or drug dependence is remitting. Thus, patients are likely to be the best source of information on the extent to which their symptoms are abating and their functioning is improving.

Organizational leaders also should get involved in efforts to develop health care data and information technology standards as part of the National Health Information Infrastructure, and they should encourage their staff members to get involved as well.

Health plans and purchasers. Health plans and group purchasers of treatment services, which help shape the environment in which M/SU health care is delivered, can create powerful incentives for improving quality. They can underpin efforts by clinicians and organizations to support patient self-efficacy, illness self-management, and patient recovery by paying for programs that meet evidence-based standards. Plans and purchasers should use M/SU health care quality measures in their procurement and accountability processes; provide consumers with comparative information on the quality of care provided by practitioners and organizations; and adjust their copayments, service exclusions, benefit limits, and other coverage policies in order to remove any barriers to or restrictions on effective and appropriate treatments.

Plans and purchasers also should participate in consortiums that promulgate quality measures for providers, organizations, and systems of care, and that advocate for a common, continuously improving set of M/SU health care quality measures. These measures should be understandable by multiple audiences, including consumers, group purchasers of health care, and quality-oversight organizations. Plans and purchasers should continually review the measures’ effectiveness in improving M/SU care.

In addition, purchasers should encourage the widespread adoption of information technology for M/SU care. They can do this in a number of ways. They can offer financial incentives to individual clinicians and organizations for investments in technology needed to participate fully in the emerging National Health Information Infrastructure. They can provide capital and other incentives for the development of virtual networks to give individual clinicians and small-group providers standard access to software, clinical and population data and health records, and billing and clinical decision-support systems. They can provide financial support for continuing technical assistance, training, and information technology maintenance. And as part of their purchasing decisions, they can include an assessment of how extensively clinicians and health care organizations use information technology for clinical decision support, electronic health records, and other quality-improvement applications.

Purchasers that offer a choice of health plans should find ways to reduce health plans’ incentives to limit the coverage or quality of M/SU care in order to avoid enrolling individuals with costly illnesses. Similarly, state governments should revise their procurement processes to give the greatest weight to quality of care. One promising way of doing this is to assign relatively low weight to a bid’s price-related dimensions and relatively high weight to features that address quality of care. A second approach is to adopt a rate-finding process that sets a price for bids and then focuses the competition on the quality and service dimensions of performance. State and local governments also should reduce their emphasis on the grant-based systems of financing that currently dominate public M/SU treatment systems, while increasing the use of funding mechanisms that link some funds to measures of quality.

National associations of purchasers should decrease the burden of variable reporting and billing requirements by standardizing requirements at the national, state, and local levels.

State policymakers. State policymakers can facilitate improvements to quality by attending to the laws, regulations, and administrative practices that pertain to the confidentiality of patient information and to coerced treatment. They can do this by coordinating policy across governmental units responsible for general medical care and M/SU health care, as well as across the human services agencies with which these units interact. As one particular step, state governments should revise policies that create inappropriate barriers to the communication of information among M/SU health care providers and between these providers and general health care providers.

In their roles as purchasers, state governments should encourage the widespread adoption of electronic health records, computer-based clinical decision-support systems, computerized provider order entry, and other forms of information technology for M/SU care by taking the actions listed above for health plans and purchasers. State legislatures also should improve coverage for M/SU treatment by enacting a form of benefit standardization known as parity, which equalizes the benefits coverage of mental and substance use illnesses with the benefits provided for general medical illnesses.

Federal policymakers. Building the necessary infrastructure for quality improvement requires federal leadership. The Department of Health and Human Services (DHHS) must strengthen and coordinate the synthesis and dissemination of evidence on effective treatments and services that now takes place through multiple uncoordinated initiatives. This effort will lead to better use of scarce resources and help to alleviate some of the current confusion about what constitutes “evidence-based” care.

Toward this aim, DHHS should charge one or more entities with an interrelated set of tasks. One task would be to define, describe, and categorize current screening, preventive, diagnostic, and therapeutic M/SU interventions and develop electronic “codes” for each so they can be captured in routinely used data sets approved under the Health Insurance Portability and Accountability Act. A second task would be to rate the strength of the evidence on the efficacy and effectiveness of these interventions, categorize them accordingly, and recommend or endorse guidelines for their use. Armed with this information, the ultimate task would be to expand and strengthen efforts to disseminate proven evidence-based practices.

In these activities, the designated group or groups should work with the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality, and other opinion leaders and sources of expertise looked to in general health care. Involving general health care opinion leaders is particularly important, because the majority of consumers initially turn to their primary care providers for mental health services. Primary care physicians and physician specialists other than psychiatrists also prescribe the majority of psychotropic medications.

Given current federal fiscal constraints, DHHS likely will need to call on public- and private-sector structures and processes already in place to carry out these activities. However, the department will need to provide them with formal support and resources to enable and sustain their activities.

Among other jobs, the government should reexamine laws, regulations, and administrative practices that create barriers to sharing substance-use treatment information with mental and general health care providers also treating the patient. As a purchaser, the government should require all health care organizations with which it contracts to ensure appropriate sharing of clinical information essential for coordination of care with other providers treating their patients.

The government should encourage the adoption of electronic health records, computer-based clinical decision-support systems, computerized provider order entry, and other forms of information technology for M/SU care. This can be done by pursuing the same actions prescribed for all purchasers. In a complementary effort, DHHS should create and support a continuing mechanism to engage health care stakeholders in the public and private sectors in developing consensus-based recommendations to address unique aspects of information management related to M/SU health care. The department should then provide the recommendations to the standards-setting groups working with the Office of the National Coordinator of Health Information Technology.

Such actions are sorely needed, because the M/SU health care system lags far behind the general health care system in the use of information technology. For example, interviews conducted in 2003 with the directors of 175 substance-use treatment programs nationwide revealed that approximately 20% of the programs had no information services, e-mail, or even voice mail for their phone systems. Fifty percent had some form of computerized administrative information system for billing or administrative record-keeping, but these were typically available only to administrative staff. Thirty percent of the programs—mostly those that were part of a larger hospital or health system—had seemingly well-developed information systems. But only three programs had an integrated clinical information system for use by the majority of their treatment staff. Psychiatrists as a group also are known to have lower rates of information technology support for patient care than physicians overall.

The government must act as well to ensure that the emerging National Health Information Infrastructure has adequate resources to address M/SU health care. Here again, the statistics are grim. In 2004, the government awarded $139 million in grants and contracts to promote the use of health information technology. But of the 103 grants awarded, only one specifically targeted M/SU health care. Thus, the government must take steps to direct more grants and contracts for the development of components of the National Health Information Infrastructure that relate to M/SU health care.

The government also has a key role to play in improving the quality of M/SU health care. The DHHS must provide leadership, strategic development support, and additional funding for research and demonstrations to establish the efficacy of various treatment methods as they become available. This initiative should coordinate the existing quality-improvement research efforts of the National Institute of Mental Health, National Institute on Drug Abuse, National Institute on Alcohol Abuse and Alcoholism, Department of Veterans Affairs, Substance Abuse and Mental Health Services Administration, Agency for Healthcare Research and Quality, and Centers for Medicare and Medicaid Services. It also should develop and fund cross-agency efforts in necessary new research. To that end, the initiative should address the full range of research needed to reduce gaps in knowledge at the clinical, services, systems, and policy levels and should establish links to and encourage expanded efforts by foundations, states, and other nonfederal organizations.

In addition, the government must act to help beef up the M/SU workforce. Although the diagnosis and treatment of general health conditions are typically limited to physicians, advanced practice nurses, and physician assistants, M/SU health care clinicians include psychologists, psychiatrists, other specialty or primary care physicians, social workers, psychiatric nurses, marriage and family therapists, addiction therapists, psychosocial rehabilitation therapists, sociologists, and a variety of counselors, including school counselors, pastoral counselors, guidance counselors, and drug and alcohol counselors. Congress should authorize and appropriate funds to create and maintain a Council on the Mental and Substance-Use Health Care Workforce. As a public-private partnership modeled on the Council for Graduate Medical Education and the National Advisory Council for Nurse Education and Practice, the council would develop and implement a comprehensive plan for strengthening the quality and capacity of the workforce to improve the quality of M/SU services. The government also should support the development of M/SU faculty leaders in health professions schools, such as schools of nursing and medicine, and in schools and programs that educate M/SU professionals, such as psychologists and social workers.

Accreditors of health care delivery organizations. By their very definition, accreditation groups can create incentives for M/SU health care organizations to make needed improvements. In particular, accreditors should adopt standards that mesh with the above recommended policies and practices. For example, accreditors should require that organizations have in place policies that encourage informed, patient-centered participation and decisionmaking throughout their care, including treatment, illness self-management, and recovery plans. Moreover, accreditors should incorporate into their standards any competencies or requirements established by the proposed Council on the Mental and Substance-Use Health Care Workforce.

Institutions of higher learning. To better prepare the workforce to function in a work environment that more aggressively pursues quality improvement, institutions of higher education should place much greater emphasis on interdisciplinary learning and should bring together faculty and trainees from their various education programs. They also should facilitate and assist the work of the Council on the Mental and Substance-Use Health Care Workforce.

Funders of research. Public and private sponsors of research on M/SU and general health care should focus on several priority areas. For example, they should support the development of reliable screening, diagnostic, and monitoring instruments that can validly assess response to treatment. These instruments should include a set of M/SU “vital signs” comprising a brief set of age- and culturally appropriate indicators for monitoring patient symptoms and functional status. The indicators must be suitable for use in screening and early identification of problems and illnesses and for repeated administration during and after treatment. Funders also should support the development of strategies to reduce the administrative burden of implementing quality-monitoring systems, as well as the development and refinement of methods for providing information to the public on the effectiveness of a range of interventions.

In addition, funders should devise health services research strategies and innovative approaches that address treatment effectiveness and quality improvement in usual settings of care delivery. To that end, they should develop new research and demonstration models that encourage local innovation and create a critical mass of partnerships involving researchers and stakeholders. Stakeholders should include patients, parents or guardians of children, clinicians, organization managers, purchasers, and policymakers.

Finally, DHHS, in collaboration with other government agencies, states, philanthropic organizations, and professional associations, should create or charge one or more entities as national or regional quality-improvement resources. They would test quality-improvement practices, disseminate knowledge about the practices, and provide technical assistance and leadership across public- and private-sector M/SU health care settings.

Across every sector of society, evidence of the effects of mental and substance-use problems and illnesses on each other and on general health continues to accumulate. But there is a way forward that promises to reduce the toll.

To gain the fullest measure of success, participants in every sector of the health care system must commit to action. That is the hope. Still, success need not be an all-or-nothing proposition. Individual clinicians and organizations acting alone can bring about significant improvements in the quality of care they provide to the people they serve. It would be a start—and it might point the way to even greater rewards as their counterparts gradually join in.

Forum – Summer 2006

Nuclear amnesia

Jack Mendelsohn is certainly right about “nuclear amnesia” (“Delegitimizing Nuclear Weapons,” Issues, Spring 2006). Even in official administration documents, there is a rather casual attitude about the possibility of using nuclear weapons in limited conflicts. I hope his article will be widely read.

The threat of nuclear-armed terrorism is real, and it must be addressed by means other than classical Cold War deterrence theory. Mendelsohn’s appeal for “delegitimizing” nuclear weapons is designed to scale back their attractiveness as instruments of power or prestige. More nuclear weapons in the hands of more nations will lead to weapons in the hands of terrorists at some point. The ideas Mendelsohn presents would raise barriers against that process.

I particularly liked his discussion of nuclear weapons testing. A moratorium on testing is a flimsy device to prevent future testing, but it is better than nothing. Pushing ahead with Reliable Replacement Warheads would jeopardize even that flimsy barrier.

I think that Mendelsohn also has made a very convincing case for why “a nuclear war cannot be won and must never be fought,” to cite Ronald Reagan’s judgment about nuclear weapons. Paul Nitze once told me that nuclear weapons should not be used “even in retaliation—and especially in retaliation.” This opinion was not derived from a pacifist outlook but from a considered judgment about the effect of a U.S.-Soviet nuclear exchange.

Mendelsohn says that the United States should “remove nuclear weapons from the quiver of threat responses and war-fighting scenarios.” But he evidently supports their potential use as “weapons of last resort.” If there is a use for them in retaliation or in a case where the survival of the nation is at risk, then some planning will have to be directed toward their use in “threat responses.”

This leads me to a point that has troubled me about declarations of use policy; Mendelsohn’s quotation ascribed to Linton Brooks sums it up: “We can change our declaratory policy in a day.” My concern is that a policy that can be changed in a day is no substitute for physical changes that make nuclear use less likely. Radical reductions in warhead inventories have a long-term effect on how nuclear weapons are viewed in war planning. Placing less reliance on prompt launch procedures also has this effect. I share Mendelsohn’s basic outlook, but I have always believed that the principal objective of U.S. policy should be to work toward progressively lower levels of nuclear weapons and fewer weapons on high alert. I suspect that Mendelsohn agrees with this but has little hope that it can be achieved. He may be right, but I hope not.

JAMES GOODBY

Washington, DC

Ambassador (Ret.)


Jack Mendelsohn is right to emphasize the desirability of the United States delegitimizing the use of nuclear weapons and abandoning its nuclear first-use policy. His arguments also apply to the other four established nuclear-weapon powers (China, France, Russia, and the United Kingdom), all of which are modernizing their nuclear forces. (Only China has declared a no-first-use nuclear policy.) Making qualitative improvements in nuclear weapons is usually much more destabilizing than increasing the number of nuclear weapons in the arsenals.

The UK government has said on several occasions that it will need to make a decision on the future of the British strategic nuclear deterrent during this Parliament—that is, before 2009–2010. The United Kingdom operates Continuous-at-Sea Deterrence, which requires four Trident strategic nuclear submarines to ensure that there is one at sea at any given time. The current UK government seems intent on replacing the Trident system without putting forward any convincing arguments for keeping a nuclear force. Moreover, Britain has threatened, like the United States, to use nuclear weapons preemptively in some circumstances.

The UK nuclear policy is described in the government’s 1998 Strategic Defence Review. As Commodore Tim Hare, the former Director of Nuclear Policy in the British Ministry of Defence, says (Royal United Services Institute Journal, April 2005, page 30): “The policy makes it clear that the role of nuclear weapons is fundamentally political and that therefore any rationale for their retention is political. The UK does not possess nuclear weapons as part of the military inventory, they have no function as war-fighting weapons or to achieve lesser military objectives.”

This statement from an extremely well-informed and authoritative source brings home the fact that there is no military reason for the United Kingdom to retain its nuclear weapons and to replace its Trident submarines. Why then is it virtually certain that Britain will replace them rather than doing what Mendelsohn recommends: completely delegitimizing its nuclear weapons by doing away with them?

Many suspect that the main, if not the only, reason is that the UK government believes that the possession of nuclear weapons is necessary to keep Britain’s permanent seat on the UN Security Council: each of the five permanent members is a nuclear-weapon power. The other argument often made is that the United Kingdom already has nuclear weapons and it would be unwise to give them up in a world in which the future is uncertain. Those who make this argument usually admit that if the United Kingdom did not actually have nuclear weapons, it would not now acquire them.

Britain faces no significant external military threat and has a very close friendly relationship with the United States, the world’s only superpower. If Britain cannot give up its nuclear force today, it is unlikely ever to do so. Nor, so far as I can see, will any other nuclear-weapon power. Much though I wish it were otherwise, I am afraid that Mendelsohn, eminently sensible and desirable though his ideas are, is, for the foreseeable future, whistling in the wind.

FRANK BARNABY

Oxford, England


Respect for China

In “Don’t ‘Dis’ Chinese Science” (Issues, Spring 2006), Alexander P. De Angelis characterizes the U.S. government’s apparatus for focusing on China’s science and technology (S&T) policy as “woefully inadequate and scattered.” It is not the apparatus so much as the blindness of the administration to the power of science to build bridges, even in times of great hostility abroad to U.S. policies. This blindness is accompanied, as De Angelis points out, by the administration’s abysmal failure to press Congress for adequate funding for our science diplomacy and for the cooperative research programs that should be supporting it. As for apparatus, George Atkinson, science advisor to the Secretary of State, does yeoman work in the cause of our S&T relationships around the world. In early April 2006, he spent several weeks visiting a broad range of Chinese research establishments. He sees the great opportunities for U.S. science to collaborate with nations like China and India, whose scientific achievements are growing very fast, as well as with our traditional friends in Europe and Japan.

Ronald N. Kostoff at the Office of Naval Research and his coauthors have made detailed studies of the quantity, scope, and quality of research in both China and India. Americans are relatively familiar with the high achievements of science in India, probably because we share a common language. Kostoff notes that from 1980 to 2005, India’s output of research articles (Science Citation Index and Social Sciences Citation Index references) grew from 10,000 to 25,000. During that same period, he reports, Chinese research output (measured the same way) grew by a factor of 100!

The power of science to open doors and smooth out hostile feelings about other countries is illustrated by some very modest efforts in the past that had very large rewards. Yet sometimes successful private initiatives, assisted by government funding, lose that support when the agencies assume that relations are “normal” and the initiatives will take care of themselves. De Angelis cites the Committee on Scholarly Communication with the People’s Republic of China (CSCPRC), which I chaired, following Eleanor Sheldon in the 1970s. De Angelis notes that once diplomatic relations were established between the United States and China, government agencies started cutting back their support of CSCPRC, just when it could be most effective. A similar problem faces us today, when the fear of terrorism and all things foreign has us looking inward, not out toward the opportunities to build new relationships that can make us safer and more secure. Surely the maturing of China and India must shake us out of our complacency and chauvinism, which are too often masked as patriotism.

LEWIS M. BRANSCOMB

School of International Relations and Pacific Studies

University of California at San Diego


Alexander P. De Angelis certainly got it right in his commentary about the sluggishness of the U.S. government in responding in a concerted fashion to the emerging technological prowess of China. Almost three decades have elapsed since the signing of the original Sino-U.S. bilateral agreement for cooperation in science and technology (S&T). It seems that while Washington has tended to view the S&T cooperative accords as the “icing on the cake” in terms of America’s relations with China, the Chinese have viewed S&T cooperation with the United States as the cake itself! Since the visit of Deng Xiaoping to the United States in 1979, the Chinese leadership has seen the country’s growing international S&T relations in very strategic terms; since the full launch of the “open policy” in the early 1980s, S&T ties with countries such as the United States and Japan have been treated as an essential ingredient in China’s efforts to close the prevailing technological gap between itself and the industrialized nations as represented in the Organization for Economic Cooperation and Development.

In contrast to the impression left by some current observers of China (both inside and outside the U.S. government), however, there has not been nor is there now a Chinese conspiracy or hidden agenda lurking beneath the surface of China’s stated foreign policy initiatives or S&T policies. The Beijing government has made it clear to the outside world since the announcement of the so-called “four modernizations” in the late 1970s that advances in S&T were the key to the modernization of agriculture, industry, and national defense in China. In addition, the leadership has made no secret of its interest in importing foreign scientific and technical knowledge to upgrade national R&D capabilities in universities, the Chinese Academy of Sciences, and industrial enterprises. The fact that few, if any, took the Chinese stated intentions seriously until the past 3 or 4 years is an error of omission that may yet come back to haunt the United States as the Chinese march ahead in their quest to join the world’s leading nations at the frontiers of scientific discovery and technological innovation.

Perhaps the best example of missed opportunities was the withdrawal of direct government support for the U.S. China Management Training Center in Dalian in the mid-1980s. With the U.S. Department of Commerce as the lead agency, the United States had been a leader in working with the Chinese to inaugurate one of the first bilateral programs focused specifically on management education for China’s emerging new enterprise and government leadership. However, it seems that neither the Congress nor the Commerce Department was about to come up with the $80,000 to $100,000 needed to continue the program, and thus after several successful years it almost collapsed. Were it not for the vision of the management school at the State University of New York at Buffalo, which took over the running and financing of the program at the point of its near demise, the entire effort might have simply disappeared. Today, whereas the European Union (EU) has spearheaded the founding and supported the operation of the China Europe International Business School (CEIBS) and has helped to make it into one of the finest management schools in the Asia-Pacific region, the United States is conspicuous in lacking a similarly supported institution in China.

Of course, there are a large number of scientific exchanges and cooperative R&D projects taking place through universities and other private auspices in the context of the overall expanding ties between the United States and China over the past several decades. The benefits of these ties in terms of building trust and confidence between and among members of the Chinese and American scientific and engineering communities should not be ignored. But, as De Angelis suggests, the U.S. government does not seem to recognize the enormous opportunities that exist for win-win outcomes by building stronger and deeper S&T collaboration with China. As a result, the United States no longer seems to be at the top of the Chinese list of preferred bilateral partners. On a recent trip to Beijing this year, I was told quite explicitly that “the United States is now number 4, below the EU, Russia, and other international S&T organizations in terms of S&T cooperation.” Moreover, a young Chinese official speaking quite candidly confided in me that she had aggressively sought to work in the government department concerned with U.S. S&T cooperation, because in the past it had been a fast-track path to career advancement. To her great chagrin, however, that no longer seemed to be the case because of the unmet expectations from the limited nature of bilateral S&T cooperation with the United States.

Now, some may cheer that this is all to the good because the United States seems to have protected its “crown jewels” in terms of scientific and technological assets. However, the reality is that China’s S&T system continues to move ahead with its plans to strengthen indigenous innovation, build world-class universities, and revitalize organizations such as the Chinese Academy of Sciences through important initiatives such as the Knowledge Innovation Program (KIP). Don’t get me wrong—many problems persist across the S&T landscape in China, such as the country’s still-immature regime for protecting intellectual property and the shortage of experienced managerial talent to operate China’s growing number of R&D and engineering centers. Nonetheless, in a world of globalization, where national systems of innovation are giving way to a more global system of new knowledge creation and commercialization, collaboration and cooperation have become the new hallmarks of success. The fact that there are now over 750 foreign corporate R&D centers in China is testimony that the private sector in the United States and abroad understands this and is not shy about pursuing access to China’s brainpower as a way to enhance its own innovation potential. It is time for the U.S. government to move its attention from the icing to the cake, and to find ways to strengthen the potential for broader and more sustainable engagement in the S&T field with China. This means cultivating a cohort of specialists on S&T policy and programs in China as well as offering a more coherent vision, stronger leadership, and greater funding from the White House and Congress as we seek to figure out the precise parameters of the Sino-U.S. relationship in the years ahead.

DENIS FRED SIMON

Provost and Vice-President for Academic Affairs

Levin Graduate Institute

The State University of New York

New York, New York


Iran’s nuclear threat

“Controlling Iran’s Nuclear Program” by Joseph Cirincione (Issues, Spring 2006) makes little mention of the two issues that today motivate all Iranian factions to want nuclear weapons. The first is the fear of attack by the United States, given the U.S. aggression against another member of the “Axis of Evil”: Iraq. The second, mentioned in the last paragraph, is the presence of a potent nuclear power in the Middle East—Israel—a country that has been in defiance of the United Nations for more than 30 years.

Although it is to be hoped, as emphasized by Cirincione, that the present crisis can be settled by diplomacy, in the long run the only solution is a regime change in the United States to a government dedicated to the rule of law. As stated in Cirincione’s last paragraph, the goal should be a nuclear weapons–free zone (NWFZ) in the Middle East. Although he states that the United States has long supported such a policy, the fact is that the United States has never done anything to discourage the Israeli nuclear weapons program. The United States has limited tools to pressure Iran, but, given the dependence of Israel on U.S. support, there is the possibility of inducing Israel to gradually eliminate its nuclear arsenal as part of a NWFZ agreement.

LINCOLN WOLFENSTEIN

University Professor of Physics Emeritus

Carnegie Mellon University

Pittsburgh, Pennsylvania


Nowadays, it is extremely difficult to find a balanced article about Iran’s nuclear program. Joseph Cirincione has written one such article, for which he should be thanked. There are, however, a few points in the article that I would like to comment on.

First, Cirincione states that, “The danger [of Iran’s nuclear program] is that a nuclear-armed Iran would lead other states in the Gulf and Middle East, including possibly Saudi Arabia, Egypt, Syria, and even Turkey, to reexamine their nuclear options.”

This is, of course, a subject of many ongoing debates, for which no definitive conclusion has been reached. For the record, however, I point out that, as a member, Turkey is protected by the NATO alliance. Syria has been Iran’s strategic ally for 25 years and, thus, why would it be worried about a nuclear-armed Iran, when it is still in a formal state of war with Israel, which still occupies a part of its territory? Egypt, far from Iran and never threatened by it, should be worried about nuclear-armed Pakistan, the main source of Islamic radicalism. Iran’s relations with Saudi Arabia are even better than when the Shah was in power. Saudi Arabia is protected by the United States, anyway.

Perhaps Cirincione should address the following key issue: Why does he believe that these Arab nations might seek nuclear arms if Iran develops a nuclear arsenal, when they never tried to do so while they were (in Egypt’s case), or still are (in the case of Saudi Arabia and Syria), in a formal state of war with nuclear-armed Israel? More important, would the United States allow, for example, Egypt to do so, when it relies so heavily on annual U.S. economic aid just to barely survive?

Second, Cirincione argues that it is not economical for Iran to have a uranium enrichment program. Iran’s estimated uranium ore deposits are at least twice what Cirincione quotes, but it is also not prudent to consider the economics of enrichment in isolation, as it is only one component of the complete fuel cycle and a small part of nuclear reactor–generated electricity. It was also claimed for years that it is not economical for Iran to have a nuclear energy program, but as my article in the Winter 2005 Harvard International Review demonstrated, it is indeed economical. Moreover, (a) Japan and the European members of the URENCO consortium have no natural uranium deposits, but have vast uranium enrichment programs. (b) In the 1970s, the Shah signed an agreement with the Soviet Union whereby, in return for receiving natural gas, the Soviets built Iran’s first steel plant. The Shah was told that it would be much cheaper for Iran to import steel. Today, Iran produces 75% of its steel. (c) Cirincione laments that Iran is not willing to rely on Russia for nuclear fuel. There is historically deep mistrust of Russia in Iran. Russia took over by force large parts of Iran’s territory in 1813 and 1827 and never relinquished them. It helped the counterrevolutionaries during Iran’s Constitutional Revolution of 1906–1908. The Soviet Union refused to evacuate parts of Iran at the end of World War II, until it was pressured by the West. Iranians also see how easily Russia shuts off its natural gas pipelines to Ukraine, Western Europe, and Georgia.

Third, it is often said, as Cirincione also mentions, that the world does not trust Iran because it hid its Natanz and Arak facilities for 18 years. But Iran’s only obligation was to inform the International Atomic Energy Agency 180 days before it introduced any nuclear materials into those facilities, which it did. It is also erroneously said that Iran has violated the provisions of the Nuclear Non-Proliferation Treaty (NPT). The only way to do so is either by using nuclear facilities to weaponize or by secretly helping another nation to do so; Iran has done neither. It has only been found in breach of its Safeguards agreement, a far cry from violating the NPT itself. South Korea, Taiwan, and Brazil have committed far more serious violations than Iran.

Fourth, people across Iran’s political spectrum believe that the United States never recognized the legitimacy of the Iranian revolution of 1979 and has been trying ever since to overthrow the regime. They also worry about Talibanization of Pakistan. If President Pervez Musharraf is assassinated and the Taliban’s sympathizers in the military take over, they will pose a grave danger to Iran’s national security. So long as such legitimate concerns are not addressed, no Iranian government, regardless of its political leanings, would dare to abandon the nuclear program. But Cirincione makes only a passing reference to these all-important issues at the very end of his fine article.

The crux of the issue is not, as Cirincione states, whether “other nations trust that Iran’s program is, as they claim, peaceful.” In the absence of any credible security guarantees, Iranians perceive the capability to enrich uranium as vital for Iran’s national security (as does North Korea). But such a capability would make Iran, a large country with rich resources located in the most strategic area of the world, unattackable, a prospect not acceptable to the United States. Now, that is the real crux of the issue.

MUHAMMAD SAHIMI

NIOC Professor of Petroleum Engineering

Professor of Chemical Engineering

University of Southern California

Los Angeles, California


Aquaculture and the environment

As a team leader working “on the ground” of a major interdisciplinary, multi-institutional effort to demonstrate the technological feasibility of offshore aquaculture in the southeastern United States and Caribbean regions, I would like to provide a few comments in rebuttal of some of the criticisms expressed by Rosamond L. Naylor in “Environmental Safeguards for Open-Ocean Aquaculture” (Issues, Spring 2006) and in support of the National Offshore Aquaculture Act of 2005.

Naylor’s claim that the ecological effects of marine aquaculture have been well documented is correct, but the two references she cites are biased, offering a one-sided, negative view of the activity. There have been numerous reports and publications showing that the ecological impacts caused by net-pen aquaculture are low when compared to the yields produced.

Naylor claims that there are no environmental safeguards in place for obtaining permits for open-ocean aquaculture in the United States only because she never applied for one. During our permit application for the development of one such project in Puerto Rico in 2001, we had to fulfill the requirements of 13 agencies [including the U.S. Environmental Protection Agency, National Oceanic and Atmospheric Administration (NOAA), and Fish and Wildlife Service], each one competently justifying its existence. When applying for the permits for the expansion of the project in 2005, the list of agencies involved increased to over 20. The permitting process is complex, lengthy, and expensive, requiring a great deal of scientific and legal expertise. The Offshore Aquaculture Act of 2005 proposes to organize the permitting process, with NOAA as the leading agency centralizing applications. This is clearly the right and sensible path to follow.

THE OFFSHORE AREAS OF THE UNITED STATES AND ITS ISLANDS AND TERRITORIES HAVE EXTRAORDINARY POTENTIAL FOR THE DEVELOPMENT OF AN ENVIRONMENTALLY SUSTAINABLE OFFSHORE AQUACULTURE INDUSTRY.

The offshore aquaculture demonstration projects currently being conducted in Puerto Rico, Hawaii, New Hampshire, and the Bahamas have completely submerged cages stocked with hatchery-reared fingerlings of endemic native species such as cobia, snapper, amberjack, moi, and cod: species whose fisheries are mostly depleted. After four years, we have enough data to show that the nutrients and suspended solids generated by the cage systems do not dramatically affect the oligotrophic offshore environment, because of its carrying capacity. There were no significant differences in any of the water-quality parameters measured in the area surrounding and beneath the cages. The reports are available, and resulting manuscripts are being submitted for publication and/or have been published (references are available on request).

Naylor is a respected environmentalist, and most of the suggestions offered in her article are admittedly sound. However, none of the suggestions is new to aquaculture scientists or the industry. Most have been or are already being implemented by NOAA and other agencies involved in the process.

The offshore areas of the United States and its islands and territories have extraordinary potential for the development of an environmentally sustainable offshore aquaculture industry. We are ahead of the world in technology for open-ocean aquaculture and cannot afford to lose the edge as we are already doing in other fields. U.S. entrepreneurs and venture capitalists are interested in investing in the industry, but in light of the negative perception that Naylor and many other environmentalists are selling to the public, they are already beginning to look abroad. We must simplify the process and move ahead with this legislation so as to keep the industry within our control, and the National Offshore Aquaculture Act is the first step toward U.S. autonomy in seafood supply.

Naylor and other environmentalists are also quick to criticize what they hear and read about what a handful of U.S. entrepreneurs are doing to develop a new, environmentally sustainable, and economically viable industry that will help alleviate our dependency on seafood imports and reduce an escalating trade deficit that currently is almost $10 billion per year. We have created the opportunity in the United States and must capitalize on it. Moving the industry offshore is the right path to the development of a low-impact, high-yield industry that will produce much-needed seafood while creating jobs and other socioeconomic benefits. Beyond economics, the importance of developing the offshore aquaculture industry in the U.S. Exclusive Economic Zone may soon become a matter of national food security. We cannot afford not to do it.

DANIEL D. BENETTI

Chairman, Division of Marine Affairs and Policy

Associate Professor and Director of Aquaculture

Rosenstiel School of Marine and Atmospheric Science

University of Miami

Miami, Florida


Rosamond L. Naylor issues a timely call for high standards for open-ocean aquaculture. Aquaculture is being pushed hard by industry and the federal government as a solution to the U.S. seafood deficit and the crises facing our oceans from depleted fisheries, pollution, and habitat destruction. However, two recent national ocean reports and numerous studies by Naylor and others document the significant risks posed by ocean fish farming from pollution; the use of chemicals, drugs, and fishmeal; the genetic alteration of wild fish stocks; the spread of disease and parasites; and others listed in the article. If not properly and sustainably managed, with precautionary safeguards, aquaculture may become yet another cause of ocean degradation.

It is therefore extremely disappointing that the Bush administration and Sens. Ted Stevens and Daniel Inouye have introduced federal legislation (S. 1195) that utterly fails to provide such safeguards.

At a recent Senate hearing on S. 1195, the Bush administration continued to insist that environmental concerns be addressed in regulations issued after the bill is enacted. This completely ignores the precautionary approach touted by the National Oceanic and Atmospheric Administration (NOAA), the same federal agency that wrote S. 1195. Nor are other federal agencies stepping up to the plate. The Environmental Protection Agency recently issued effluent guidelines for ocean fish farms without any numeric limits on pollutants commonly discharged by those farms (such as fecal coliform bacteria, pesticides, drugs, nitrates, phosphates, metals, and suspended solids).

Congress should propose specific standards to guide the development of a sustainable fish farming industry, such as those currently being considered in California. In 2003, California banned the raising of salmon, non-native species, and genetically modified organisms in state ocean waters (which extend three miles offshore) to protect native fish stocks. This year, California is going one step further and enacting legislation (SB 201), sponsored by The Ocean Conservancy and State Senator Joe Simitian, to provide comprehensive standards that address specific risks posed by farming native species. These standards were developed through extensive negotiations with nongovernmental organizations, the state legislature, the Schwarzenegger administration, and industry to establish a transparent process to minimize pollution and the use of drugs and chemicals; select appropriate lease sites; prevent disease and the escape of farmed fish; avoid conflicts with fishing and other public trust uses; use sustainable feeds; monitor and assess impacts; protect wildlife; and repair damage to the marine environment.

The California legislation is supported by more than 30 business, conservation, and fishing organizations, most of which oppose the weak standards in S. 1195 developed with little stakeholder buy-in. Fish farming operations in federal waters (3 to 200 miles offshore) will affect state ocean resources. Congress and the Bush administration should follow the lead of California and states like Alaska, and go back to the drawing board to develop strong and responsible standards for a sustainable fish farming industry to protect our declining ocean health.

TIM EICHENBERG

Director

Pacific Regional Office

The Ocean Conservancy

San Francisco, California


The future of aquaculture most certainly lies in the offshore environment, and Rosamond L. Naylor has rightly identified the potential risks of this new technology. Offshore aquaculture won’t have a bleak future if the lessons learned from coastal net-pen farming of salmon are broadly applied and if sufficient precaution is used to direct the development of this nascent industry. It is far easier to establish and apply rigorous environmental safeguards now, before environmental degradation is evident, than it is to attempt to modify the industry once it is fully capitalized.

I propose that we set our expectations high for offshore aquaculture. All the major environmental issues must be adequately addressed, including the use of wild fish for feed, escapes, disease amplification and transmission, and nutrient impacts. Government policy should promote the use of trimmings from sustainable food-grade fisheries (and other innovative measures) to reduce the reliance of aquaculture on distant pelagic ecosystems for feed inputs. Sound policy should also ban the use of genetically modified organisms and require the farming of native species, while maintaining strict genetic parity between farmed fish and wild stocks, for escapes will inevitably occur. Past experience suggests that fish will likely need to be grown at densities below optimal commercial densities to reduce the probability of disease amplification in these inherently open systems. Vaccines against disease must be developed quickly to eliminate the use of antibiotics and other therapeutic treatments. Comprehensive ecosystem studies must be commissioned by the federal government and funded by industry to quantify and adequately limit the cumulative impact of offshore farms as the industry expands.

As Naylor rightly states, the United States has the opportunity to be a global model of sustainable fish production. There is a growing constituency of consumers and businesses eager to purchase sustainably caught and farmed seafood. Over the past six years, Seafood Watch at the Monterey Bay Aquarium (www.seafoodwatch.org) has distributed over 7 million pocket guides that identify these sustainable choices. These science-based tools are raising awareness and changing consumers’ purchasing habits. Seafood businesses (including the nation’s largest retailer, Wal-Mart) have heard the message, and they too are reforming their purchasing policies toward more sustainable seafood. Should the National Oceanic and Atmospheric Administration and the federal government develop and enforce strong environmental standards for offshore aquaculture, a large and growing body of consumers will be willing to reward these businesses in the global marketplace. Should these standards be weak and should environmental problems ensue, consumer support for this new industry will rapidly dissolve.

GEORGE H. LEONARD

Science Manager

Seafood Watch–Monterey Bay Aquarium

Monterey, California


Genetic testing

The subject of “Federal Neglect: Regulation of Genetic Testing,” by Gail H. Javitt and Kathy Hudson (Issues, Spring 2006), is clearly an important one as advances in genetics and genomics contribute to the development of new genetic tests and services, driven by physician and patient demand for these diagnostic services. The number of genetic tests available is rising as the technology develops, as is the significant contribution that these tests make to individualized personal health care. Genetic tests can detect disease early, before the onset of symptoms, when treatment can be more effective, and can help target therapy for existing disease so that it is more effective for the individual patient. It must be the shared objective of all of us to ensure that patients obtain the full benefit of this revolutionary technology, to continue to encourage innovation, and to regulate in ways that are appropriate and do not create barriers to either patient access or innovation.

I must take exception to the article’s alarmist premise that federal “neglect” in the oversight of genetic tests “represents a real threat to public health.” In fact, the regulatory gap referred to by the authors is illusory. The clinical laboratory industry is one of the most highly regulated health care–delivery sectors. All clinical lab services are regulated under the Clinical Laboratory Improvement Amendments of 1988 (CLIA). CLIA regulations are documented by hundreds of pages of specific and general requirements for laboratory quality, such as obligations to have appropriately trained personnel, establish quality-control programs, and engage in proficiency testing, which all apply to laboratories performing genetic testing. More important, CLIA regulations require that before a laboratory introduces a new method or test that does not use a commercially available test kit, it must establish and document the performance specifications for accuracy, precision, analytical sensitivity, analytical specificity, and quality of results for patient care: all essential parameters of quality and performance. These requirements are enforced by onsite inspections every two years. The penalties for noncompliance are severe and can lead to the revocation of the lab’s CLIA certificate, without which a lab is unable to provide services.

The record indicates that laboratory tests are accurate and reliable and provide information relevant to the patient and his or her doctor. It is important that any proposed regulatory changes focus on risks specifically related to genetic tests and be targeted, realistic, and effective in meeting any legitimate issues or risks. It is incumbent on all of us to work together to identify specific concerns and regulatory responses that will address those concerns without creating undue burdens that will stifle innovation or restrict patient access to new technology.

DAVID A. MONGILLO

Vice President for Policy and Medical Affairs

American Clinical Laboratory Association

Washington, DC


Taxes and highways

As the U.S. Chamber of Commerce Foundation concluded in its November 2005 report, the only real problem facing the gas tax at present is that it has been 13 years since it was last adjusted. Since 1993, it has lost 30% of its purchasing power. Recent spikes in the price of steel, concrete, asphalt, and petroleum compound this problem. What may force Congress to take action is the fact, confirmed by the president’s 2007 budget, that the spending authorized by the federal transportation bill will exhaust Highway Account reserves and confront the Highway Trust Fund with insolvency in just three years. Industry observers have concluded that this crisis will either force Congress to adjust fuel taxes come 2009 or cause funding for the federal highway and transit programs as we have known it to collapse.

Meanwhile, the Highway, Bridges and Transit Conditions and Performance Report released by U.S. Department of Transportation (DOT) in March 2006 shows that annual highway capital spending needs to ramp up from the $68.2 billion invested by federal, state, and local governments in 2004 to $118.9 billion to meet national needs. AASHTO puts this figure at $125.6 billion, but the overall conclusion is the same: The United States needs to increase highway investment by 70 to 80% to keep us competitive abroad and meet mobility needs at home.

That sounds like an enormous challenge, but recent history shows that achieving it is possible. Between 1981 and 2004, highway spending for capital, maintenance, administration, and debt service increased from $42.5 billion to $145 billion. Over that period, federal funding increased from $12 billion to $31 billion, while state and local investment increased from $30.5 billion to $114 billion. Leaders at all three levels demonstrated the political will to produce the resources needed.

In “For What the Tolls Pay: Fair and Efficient Highway Charges” (Issues, Spring 2006), Rudolph G. Penner accurately describes the difficulty ahead in convincing Congress “to return to its historical practice of occasionally raising the fuel tax when the federal highway program is reauthorized,” as it did in 1982, 1990, and 1993. What he does not mention is that the political difficulty of increasing revenues at the state and local levels will be just as great as that at the federal level. Medical costs funded by states have been growing at 11% annually and are expected to continue to grow. This may crowd out what states can put into transportation from any but dedicated transportation revenues. The good news is that in the 2004 elections, 76% of transportation measures on the ballot across the nation passed. In 2005, a $2.6 billion bond measure for transportation passed in New York, and voters in Washington state, with 53% of the vote, rejected the repeal of a 9.5-cent gas tax hike approved by the legislature earlier that year.

The long and short of it is that if we are to ramp up annual highway capital investment to the level of over $118 billion that the U.S. DOT in March 2006 stated is necessary, all levels of government—federal, state, and local—will have to continue to fund their share of the increase. Today’s federal share of 45% will have to be sustained. The federal government can help state and local governments to fund their 55% share by removing obstacles to bond financing and tolling and by encouraging public/private ventures. A total of 90% of federal Highway Trust Fund revenues come from fuel taxes and the balance from truck fees. To sustain the Trust Fund’s vital contribution to highways and transit in 2009 and beyond, Congress may have to make some necessary and courageous decisions.

JOHN HORSLEY

Executive Director

American Association of State Highway and Transportation Officials

Washington, DC


Some in the transportation community have criticized the Transportation Research Board (TRB) committee headed by Rudolph G. Penner for not calling for a large increase in highway investment. Though committee members were of mixed views on whether doing so would be wise, we decided early on that the level of highway investment was not what the committee had been asked to address.

However, our recommendations are certainly consistent with better targeting of highway investments (which I believe will lead to a net increase, for reasons explained below). As Penner points out, despite the merits of the user-pay principle still embedded in highway finance, the politics of centralized administration and the political distribution of highway funds lead to wasteful spending—not merely “bridges to nowhere” and other earmarks, but numerous projects done simply to make sure that every congressional district gets its share. It’s not surprising that aggregate figures show declining rates of return on highway investment.

One unanticipated but tragic consequence of this system of resource allocation is that large “lumpy” projects, such as adding high-productivity toll truck lanes to intercity highways and adding value-priced networks of congestion-relief lanes to urban freeways, seldom get funded. And this at a time when trucks carry 90% (by value) of all goods and major interstates are both wearing out and short of lanes, and when motorists waste $63 billion a year stuck in urban traffic congestion.

If public policy facilitates the use of toll finance (to address the funding problem) and public/private partnerships (to foster innovation), as the report recommends, many such projects are likely to be implemented over the next 20 to 25 years. And that will provide a wealth of experience for all transportation stakeholders in the use of per-mile charging for highway finance.

Thus, by the time the nation has to begin a serious transition from fuel taxes to per-mile charging, the users, providers, and policymakers will be dealing with familiar concepts rather than having to debate a leap into the unknown.

ROBERT W. POOLE JR.

Director of Transportation Studies

Reason Foundation

Los Angeles, CA

Robert W. Poole Jr. was a member of the TRB committee that produced the report on the long-term viability of fuel taxes for highway funding.


Rudolph G. Penner did a great job of making sense of the several challenges confronting our surface transportation system and the many solutions that have been proposed to address them. I can’t disagree with any of his key points, and commend him for avoiding the alarmist rhetoric common to many in this business, specifically as it relates to the adequacy of the revenues likely to be generated by the fuel taxes that now fund a big chunk of federal and state road and transit projects. Nonetheless, he does make the case that more revenues will be needed in the future to accommodate expected growth in population and economic activity, and that we should begin thinking about revenue alternatives (or supplements) to the state and federal fuel taxes.

Anyone familiar with surface transportation trends is also familiar with the key problems confronting the system. Chief among them is the absence of any meaningful increase in capacity over the past few decades, while usage and the number of users rise at a rapid clip. Rep. Don Young said it best when he observed a few years ago that while the numbers of licensed drivers (up 71%), registered vehicles (up 99%), and miles driven (up 148%) have all soared since 1970, “new road miles have increased by only 6 percent.” That’s it? Six percent? This is an astounding indictment of a federal program that has spent (in inflation-adjusted dollars) more than $700 billion in taxpayer money since 1970 on top of an even larger amount spent by the 50 state transportation departments over the same period.

Chief among the reasons for this pathetic state of affairs is that the federal highway program morphed into an all-purpose spending program once the interstate system was completed in the early 1980s. The recently enacted transportation act—one of the worst pieces of legislation ever passed by any Congress—will apply only about 60% of motorist fuel tax revenues to general-purpose roads. As a result, adding more revenues to the system will only serve to inflame Congress’s increasingly bizarre obsession with wasting transportation money at the expense of any improvement in mobility.

Take the federal transit program: Although as much as 25% of federal trust fund dollars go to transit, nationwide only 2% of surface passengers use it, a figure that falls to just 1% when the New York metropolitan area is excluded. Indeed, 75% of transit ridership occurs in just seven metropolitan areas, thereby forcing motorists throughout the nation to support a tiny fraction of commuters in a nationwide application of trickle-up economics. The new transportation act continues this trend toward more and more diversions by creating new federal responsibilities covering sidewalks, truck parking lots, and the dissemination of information about bicycles. Until the federal surface transportation program is terminated, as Sen. Jim DeMint’s new legislation would allow, taxpaying motorists should “just say no” to any more revenue-raising schemes.

RON UTT

Senior Research Fellow

Heritage Foundation

Washington, DC


I read with great interest Rudolph G. Penner’s essay on funding future highway improvements in the United States.

Although Congress and our state legislatures indeed have a multitude of scenarios, considerations, options, and issues to ponder, one abiding, overwhelming constant remains: a pressing need. In my own state of Georgia, our 20-year statewide transportation plan for federal and state highways will require $55 billion just to maintain current levels of service, yet we project only $36 billion in revenues from existing funding mechanisms. Nationally, effectively maintaining our collective transportation infrastructure requires $91 billion a year; yet only $74 billion is appropriated.
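
Expressed as simple funding gaps, and assuming no change in existing revenue mechanisms, those figures imply

\[
55 - 36 = 19 \ \text{(billion dollars, Georgia, over 20 years)}, \qquad 91 - 74 = 17 \ \text{(billion dollars per year, nationally)} .
\]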

These disparities underscore the need for elected leaders and transportation planners to carefully and expeditiously examine transportation funding: not simply the efficacy of federal and state motor fuel taxes, but also concepts such as managed lanes, public/private partnerships, and, frankly, any other option we can think of that might help close this gap.

HAROLD LINNENKOHL

Commissioner, Georgia Department of Transportation

President, American Association of State Highway and Transportation Officials

Atlanta, GA


Ethanol futures

The centerpiece of a stronger national commitment to expanding domestic ethanol production must be sustained and adequate federal funding for research, development, and demonstration (RD&D) of biorefineries and related technologies. Federal RD&D partnerships with both the forest-products and agriculture industries are vital to accelerating development of the U.S. domestic lignocellulosic ethanol industry.

Adequate federal funding for RD&D could substantially influence the adoption of new technologies in the forest-products industry, which is poised to become a major contributor to the emerging ethanol industry. By deploying key fermentation and thermochemical technologies, our existing pulp and wood products mills could be converted into integrated biorefineries to produce ethanol and other biofuels in conjunction with our traditional product lines. We have much of the infrastructure and expertise—feedstock harvesting, transportation and storage; manufacturing and conversion infrastructure; waste handling and recovery— needed to rapidly implement biorefineries at a commercial scale. By and large, our mills are located in rural communities where important synergies with agricultural feedstocks, including energy crops, can be realized. Since the biorefineries would be installed at existing mills, many of the transportation and infrastructure costs that currently exist for ethanol production would be reduced or eliminated. Finally, the dispersed geographic location of our mills would enable ethanol production throughout the country, contributing to a more diversified and secure energy supply.

With on-again/off-again federal RD&D support, our industry finds it difficult to make progress in addressing technical barriers in the core technologies needed to implement these biorefineries. Thus, our industry is tentatively encouraged by the president’s State of the Union announcement of additional funding to foster technology breakthroughs in cellulosic ethanol. The test will be whether this evolves into a truly sustained funding priority and thereby helps to mitigate the risks faced by our early adopters.

A national policy that promotes ethanol imports, as suggested by Lester B. Lave and W. Michael Griffin (“Import Ethanol, Not Oil,” Issues, Spring 2006), could have a place as a transitional strategy to develop U.S. ethanol markets. We share their assessment that the total ethanol demand is such that both domestic production and imports could be accommodated. However, unless a policy encouraging imports is accompanied by a strong and realistic long-term commitment to federal RD&D investment in technology for domestic production, we may simply wind up replacing one foreign fuel dependency with another.

LORI A. PERINE

Executive Director

Agenda 2020 Technology Alliance

American Forest & Paper Association

Washington, D.C.


Protecting the West

From my perspective, “Protecting the Best of the West” by Wendy Vanasselt and Christian Layke (Issues, Spring 2006) appears to touch on many of the issues that beset the Bureau of Land Management (BLM). The BLM is an agency charged with multiple responsibilities but lacks both the funding and staff to complete its myriad missions. However, noting that more funding and staff are needed doesn’t make it happen. More focus is needed on possible solutions, and I offer the following observations.

(1) Local people need to be actively lobbying for their public-lands agency. Organizations such as the Park Service on both the local and national levels have effective lobbying support, whereas the constituencies for the BLM tend to be diverse and often focused on fighting with each other rather than helping the agency. People need to be visiting with their local newspapers and local congressional representatives. They need to be represented in Washington, DC, attend funding hearings, and carry the word as to what the BLM needs. For example, Land and Water Conservation Funds, which have for years been a keystone for ongoing BLM acquisition projects, have been cut by over 90% during the Bush administration, but I see little public outrage.

(2) The BLM needs to allow decisions to be made in a timely manner. Often, because of the various political and paperwork obstacles, nothing is done.

(3) We need to be able to reward the hardworking, competent, and committed members of the BLM staff. Equally, we need to be able to dump the deadwood. People who are barely competent often appear to do as well in the agency as those who are truly dedicated and effective.

(4) Partnerships, partnerships, partnerships, partnerships. A key question needs to be asked consistently: Who can partner with the BLM? In our area, there are partnerships with the U.S. Forest Service to co-manage lands and co-mingle staff, partnerships with local cities to provide funding for law enforcement needs, partnerships with trail-user groups, and partnerships with conservation organizations to lead in BLM land acquisitions. Partnerships are possible, but they have to be a central focus for the agency.

(5) Find ways to allow staff to remain beyond the normal rotation of duty. Often people who have finally established effective partnerships with the locals find themselves sent off to another part of the United States in order to continue to advance their careers and increase their meager salaries.

The BLM has always been the stepchild of the resource agencies, allowed to protect “what’s left of the west” that wasn’t wanted by other agencies. The BLM needs to have our constant attention and care, and we need to view it as an organization that we welcome with pride in our local communities.

BUFORD A. CRITES

Member, City Council

Palm Desert, California

From the Hill – Summer 2006

Regulatory regime for greenhouse gases discussed

The Senate Energy and Natural Resources Committee held an all-day conference on April 4 on the issues involved in creating a program to regulate the greenhouse gases that cause climate change. Among other issues, the committee examined whether limits should be imposed on particular industries or on all industry and how, as part of a trading regime, emission allowances would be allocated for companies that exceed mandated targets.

Although the conference was originally expected to lead to legislation, Sens. Pete Domenici (R-NM) and Jeff Bingaman (D-NM), chair and ranking member of the committee, respectively, have said that they do not expect to introduce a bill during the current session. “Designing and implementing a mandatory system will be very difficult, both politically and economically,” Domenici said at the outset of the conference. “But we need to start somewhere. This conference is our starting point.”

During the meeting, a consensus emerged among most participants, including many from the electric industry and other industrial sectors, that if a mandatory economy-wide system is to be imposed, it should be done soon.

“Customers and shareholders need greater certainty,” said Ruth Shaw, president of Duke Energy’s nuclear subsidiary. Duke is currently evaluating how it will spend billions of dollars to provide power to its growing customer base during the next 50 years, Shaw said, and wants to know what future carbon limits will be.

All participants agreed on the pivotal role of new technology. Most witnesses argued that a mandatory system would create incentives for faster adoption of technology than would a purely voluntary approach. Those opposed to a mandatory system, including Southern Company, testified that they would adopt new low-carbon technologies even if no mandatory regulation was imposed.

Many companies advocated that legislation—whether it involves a carbon tax or a cap-and-trade system—be phased in, becoming more stringent over time. They argued that in a cap-and-trade system, the initial allocation of carbon emission allowances should essentially be free, with the auctioning of allowances permitted as technology develops. Participants also discussed who would receive permits and how to use permits as a way to distribute the costs associated with reducing greenhouse gas emissions.

The point at which emissions would be regulated was a subject of debate, although some said that this would be primarily an administrative decision that would not affect the effectiveness of the program. A majority of witnesses agreed that regulating emissions upstream at the level of fuel producers would be most effective, because it would send price signals throughout the economy. Regulating downstream, at the point of carbon emitters, would require regulating millions of small companies and individuals. A hybrid approach, which would regulate small sources upstream and larger sources at the plant level, could achieve additional reductions.

Businesses tended to agree on the need for a safety valve, such as a ceiling on emissions credit prices. They also thought it was important to receive credits for carbon offsets: projects that negate the impact of a company’s emissions by avoiding an equal amount of pollution, usually at another site, or by sequestering an equal amount of carbon.

Billy Pizer of Resources for the Future in Washington, DC, explained the importance of working within the constraints facing business, noting that “No mitigation benefits will arise if a policy cannot be enacted.”

Effectiveness of Project Bioshield examined

A hearing in early April of the House Energy and Commerce Committee’s Subcommittee on Health highlighted continuing congressional concerns that the country’s program to combat the possible use of biological, chemical, and radiological weapons still lacks a strategic plan, focuses too narrowly on possible acts of terrorism, and does not provide sufficient incentives for industry cooperation.

Congress created Project Bioshield two years ago in the wake of the release of anthrax on Capitol Hill and amid concerns that terrorists would seek to use such weapons in the future. The interagency program, funded at $5.6 billion over 10 years, was designed to accelerate R&D and the procurement of countermeasures against chemical, biological, radiological, and nuclear agents.

In testimony before the committee, Alex M. Azar II, a deputy director at the Department of Health and Human Services (HHS), stated that approximately $35.6 million had been awarded in research grants and contracts to date, and another $1.08 billion had been obligated for the procurement of vaccines to be stockpiled.

Azar said that in order for a countermeasure to qualify for Project Bioshield, solid clinical experience and/or research data must indicate that it could eventually qualify for Food and Drug Administration (FDA) approval within eight years. He noted that the 2004 act that created the program also states that “no payment shall be made until delivery has been made of a portion, acceptable to the secretary, of the total number of units contracted for.” He emphasized that “it is estimated that the cost of developing and bringing to market a new drug is between $800 million and $1.7 billion.” Thus, the significant investments that must be made before a countermeasure becomes eligible for Project Bioshield funding pose a substantial risk to industry, especially small businesses.

In addition, Azar argued, liability protection for industry remains a major source of concern, especially in an emergency when the government must procure a vaccine that has not received FDA approval. Under questioning from subcommittee members, Azar acknowledged that liability issues had not yet impeded the agency from obtaining countermeasures. Rep. Michael Burgess (R-TX), a medical doctor, argued that the government’s focus on industry liability while ignoring the health risk to the general public was shortsighted. “The public doesn’t understand why they don’t have [legal] protection against an [untested] vaccine,” he stated.

Subcommittee chairman Rep. Nathan Deal (R-GA) and Rep. Barbara Cubin (R-WY) expressed concern that Project Bioshield was restricted to intentional acts of terror and not naturally occurring infectious diseases, such as the growing threat of an avian flu pandemic. Cubin argued that a human pandemic poses an equal threat to national security. Azar stated that the government’s legal counsel was analyzing whether the H5N1 virus would be eligible for funds.

Reps. John Shimkus (R-IL) and Anna Eshoo (D-CA) meanwhile sharply questioned Azar on what they perceived to be inertia on the part of the agency. “I think what’s lacking in all this is a real sense of urgency,” Rep. Eshoo said. Azar conceded that no strategic plan exists, which has impeded the private sector’s ability to anticipate government needs.

Deal and Shimkus argued that a centralized government agency, rather than the current interagency model, is required to make the program effective.

Sen. Richard Burr (R-NC) recently reintroduced legislation that would revise Project Bioshield by strengthening coordination and providing added incentives for private-sector investment. The Burr bill (S. 2564) would establish a Biomedical Advanced Research and Development Agency (BARDA) within HHS to coordinate and oversee activities that support and accelerate advanced R&D of “qualified” countermeasures. BARDA would be the single coordinating organization for implementing Project Bioshield. The bill would also establish a National Biodefense Advisory Board to provide advice and guidance to the secretary of HHS on the potential threats and opportunities presented by advances in the biological and life sciences.

An earlier version of Burr’s bill, the Biodefense and Pandemic Vaccine and Drug Development Act of 2005 (S. 1873), was heavily criticized because it proposed to bypass Freedom of Information Act (FOIA) requirements. It would have allowed either the secretary of HHS or the director of BARDA to conduct meetings and consultations in a closed setting. It also would have provided full exemption to BARDA from having to comply with FOIA. Critics complained that the exemption was too broad and noted that even intelligence agencies lacked such an expansive exemption.

Senate bill would raise H-1B visa quotas

Shortly before departing for the Memorial Day recess, the Senate passed a comprehensive immigration bill (S. 2611) that would raise the existing H-1B non-immigrant visa quota from the current 65,000 to 115,000 annually and increase that number by 20% if the cap is reached.

The H-1B visa is a guest worker program targeted primarily at high-tech professionals. Increasing the number of visas issued each year has been a goal of information technology and engineering companies that rely on foreign nationals to meet existing shortages.

Although interest in H-1B visas has waxed and waned on Capitol Hill over the years, the subject was given an added boost by the current wave of concern over our nation’s ability to compete in a global market. Reports such as the National Academies’ Rising Above the Gathering Storm include recommendations to reform immigration and visa processing.

Several proposed bills aimed at increasing U.S. economic competitiveness address the H-1B visa, although in slightly different ways. For example, the Protecting America’s Competitive Edge through Education and Research Act of 2006 (PACE-Education Act, S. 2198), written in response to the Academies’ report, includes language to increase the number of H-1B visas, but by a mere 10,000. It further restricts that allocation to “applicants with doctorate degrees in science or engineering from a United States university.”

The National Innovation Act (S. 2109), meanwhile, simply includes a sense-of-the-Senate resolution that the United States should seek to retain science and technology (S&T) talent through the H-1B visa or other programs. It too recommends that preference be given to those who have received an advanced degree from a U.S. university.

Immigration reform, however, is a complex subject that elicits a spectrum of opinions from all sectors, and the H-1B visa is no exception. Many foreign nationals who initially came to the United States on an H-1B visa have used their time here to obtain a green card and ultimately to become citizens. Although the private sector eagerly seeks these professionals, not all of the S&T community is as eager to have the number of visas increased. For example, the electrical engineering community has fought against any efforts to increase the existing H-1B caps, arguing that sufficient talent already exists in the United States and pointing to the unemployment rates of computer programmers to support its position.

The current immigration controversy is spawning creative visa reform programs to expand the U.S. technical talent pool. The Comprehensive Immigration Reform Act of 2006 introduced by Senate Majority Leader Bill Frist (R-TN) would create a new F-4 visa category for foreign nationals pursuing advanced degrees in science, technology, engineering, mathematics, and related fields at U.S. colleges and universities. Furthermore, it would automatically extend for one year a student visa for foreign nationals “who receive doctorates or the equivalent in science, technology, engineering, mathematics, or other fields of national need at qualified United States institutions” in order to facilitate their ability to seek employment in the United States.

The PACE-Education Act, which currently has more than 60 cosponsors, outlines similar measures to ease the ability of highly skilled foreign students graduating in the United States to move into jobs and smooth the path toward legal permanent-resident status. The rationale behind the provision is that advanced-degree students who first must leave the country after graduating may be less tempted to return for employment opportunities if they must first reapply for another set of visas.

Another recommendation included in both the PACE-Education and immigration reform legislation is to exempt “aliens who have earned an advanced degree in science, technology, engineering, or math and have been working in a related field” from annual numerical limitations that are placed on employment-based immigrants. The exemption would also extend to the individual’s spouse and children.

The Senate immigration bill appears to be the vehicle for visa reform for the S&T community. Unfortunately, it must now be reconciled with the much more narrowly focused House bill, the Border Security and Immigration Reform Act (H.R. 4437), which deals almost entirely with enforcement. It will be a challenge to achieve a compromise between the two dissimilar visions, given the limited number of days remaining in the congressional session.

Congress attempts to rein in earmarks

In the wake of recent lobbying scandals, leaders in both chambers of Congress are attempting to control the number of earmarks inserted into bills in conference. The habit of inserting earmarks late in the legislative game has increased during the past few years, as individual representatives add provisions for pet projects for their districts to must-pass legislation. The R&D budgets of a number of agencies have not been immune from this practice.

On March 29, the Senate passed (by a vote of 90 to 8) the Legislative Transparency and Accountability Act of 2006 (S. 2349), a lobbying reform bill that would place restrictions on earmarks. The bill, introduced by Sen. Trent Lott (R-MS), requires that any earmark attached to a bill in a conference report after the legislation has already passed the chamber be subject to a point of order. This means that the earmark is subject to floor debate unless the sponsor is able to obtain 60 votes to counter the point of order.

The House also weighed in on the subject with H.R. 4975, the Lobbying Accountability and Transparency Act of 2006. The bill, which has been reported out of the House Judiciary and Government Reform Committees, would require that earmarks in appropriations bills include the sponsor’s name. It would also make earmarks introduced in conference reports out of order.

Although some R&D agencies, notably the National Institutes of Health and the National Science Foundation, have stayed free of earmarks, other agencies are finding that earmarks are becoming increasingly prevalent.

In fiscal year (FY) 2006, R&D earmarks set a new record, climbing to $2.4 billion, a 13% increase over the previous year. Five government organizations—the U.S. Department of Agriculture (USDA), National Aeronautics and Space Administration, Department of Energy (DOE), National Oceanic and Atmospheric Administration (NOAA), and Department of Defense—receive 94% of the R&D earmarks. For DOE and NOAA R&D, as well as extramural agricultural research at USDA, earmarks make up more than one out of every five dollars.

The increases in earmarking among federal R&D agencies have coincided with declining R&D budgets. In FY 2006, the overall federal investment in R&D grew just 1.7%, compared with the 13% surge in earmarked dollars, forcing agencies to cut into their core competitive programs to accommodate the appropriations. Thus, if earmarks are curtailed, the spending power of some agencies could increase even with a constant budget.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

The Pentagon’s Defense Review: Not Ready for Prime Time

“The trouble with our times is that the future is not what it used to be.” This witty observation by the French poet Paul Valéry captures precisely the ever-changing nature of today’s global security environment. Consider the events and changes since 2001: the terrorist attacks on New York and Washington, the wars in Afghanistan and Iraq, the U.S. commitment to a long-term campaign against radical Islamism, the threat of increased nuclear proliferation, and the continued growth of Chinese military capabilities along disturbing lines. This was the environment confronted by Defense Secretary Donald Rumsfeld and defense planners in preparing the Pentagon’s Quadrennial Defense Review (QDR), which is required by law every four years and which was recently submitted to Congress.

The QDR has four key tasks. The first is to determine what major challenges the United States may have to confront during the next 20 years. The second is to present a strategy for meeting these challenges. Then the QDR must assess whether the force structure and defense program proposed by the Department of Defense (DOD) are consistent with the diagnosis of the threats and the strategy proposed for addressing them. Finally, the QDR has to estimate the level of resources necessary to implement the strategy. In brief, the QDR assesses not only what needs to be done to ensure the security of the United States today but also what must be done to prepare for threats that lie along the misty horizon of America’s future.

The verdict on how well the Pentagon succeeded in these tasks is unfortunately mixed. Although the new QDR provides an accurate diagnosis of the military challenges facing the United States and attempts to find strategies to address them, it fails to follow through on two essential actions: realigning the military to defend the United States from new, nontraditional threats and reordering funding priorities to meet those threats. These shortcomings must be remedied soon, for as Francis Bacon observed, “He who will not adopt new remedies must expect new evils.”

Three enduring challenges

The report is successful in fulfilling the first objective. It pinpoints three major enduring security challenges to the United States: radical Islamist insurgencies, nuclear proliferation, and China’s growing global stature and ambitions. Radical Islamists are attempting to advance their aims through terrorism, insurgency, economic and political disruption, and propaganda. Unstable governments with anti-American sentiments are investing heavily in nuclear arsenals to improve their international standing and give them leverage against the conventional military capabilities of the United States and its allies. China is diligently developing its conventional military capabilities, including ballistic missiles, information warfare, antisatellite weaponry, submarines, and high-speed cruise missiles, which could allow it to counter U.S. military strengths in the air, at sea, and in outer space as well as intimidate U.S. allies and friends in Japan, South Korea, and Taiwan.

As diverse as these challenges may seem, they share an important characteristic: Rather than confronting the United States in a conventional military manner, they pose nontraditional asymmetric threats. This response is hardly surprising. Given the undisputed military superiority of the United States, no current or prospective enemy would be so foolhardy as to take on the U.S. military directly.

Radical Islamists are exploiting the asymmetric advantage of terrorism primarily because it is the only form of warfare currently available to them. The mission of their transnational theologically based movement is to overthrow what they consider to be illegitimate and often pro-U.S. regimes and to eliminate U.S. influence in the Muslim world. The leaders of this insurgency are exploiting advanced technologies and modern trends, including globalization, expanded financial networks, the Internet, and increasingly porous borders, to extend their global reach and influence.

Radical Islamists feel no obligation to abide by the rules of traditional warfare and the dictates of international conventions or to spare the lives of innocents. Their willingness to employ weapons of mass destruction (WMD) and disruption makes them especially threatening. Their decentralized organizational structures and theologically based messages provide unique strengths and obscure centers of gravity, leading to Rumsfeld’s conclusion that the war against radical Islam will be “a long, hard slog.”

Nuclear proliferation in Asia is a second major enduring challenge to U.S. security. Since 1998, India and Pakistan have tested nuclear weapons and created nuclear arsenals. North Korea apparently has nuclear weapons and is producing the fissile material necessary to fabricate more of them. Iran, undoubtedly aware of the very different treatment accorded to a nuclear North Korea relative to Saddam Hussein’s nonnuclear Iraq, is vigorously pressing forward with its nuclear weapons program. It is conceivable that before the decade is out, a solid front of nuclear states may stretch from the Persian Gulf to the Sea of Japan, through a part of the world that is increasingly important to U.S. security and economic well-being.

The consequences of the rise of this atomic arc of instability will be profound. The most important implication of the proliferation of nuclear-armed states is the increase in the likelihood that these weapons will be used. It is unclear whether Iran, North Korea, and Pakistan, whose cultures and political systems are profoundly different from our own, will share the U.S. view that nuclear weapons are weapons of last resort.

Another major challenge that nuclear proliferation poses is the dramatic change in the global balance of power that will undoubtedly follow in its wake. The United States will not be able to influence nuclear-armed adversaries in the same way as it engages nonnuclear states. The array of political and diplomatic instruments of power, as well as military options available to the United States vis-à-vis rogue states armed with nuclear weapons, will be starkly reduced. This seems to be a principal motive for North Korea and Iran in their quests for nuclear weapons.

Proliferation begets proliferation. It is conceivable that nuclear weapons could fall into the hands of nonstate entities as a consequence of corruption or state failure. Nor can one discount the possibility that a state such as North Korea, which sells ballistic missile technology, or Pakistan, whose top nuclear scientist ran a nuclear-weapons production materials bazaar, would provide, for a price, nuclear weapons or fissile material to other states and nonstate groups.

The diffusion of nuclear materials marks the beginning of a second nuclear era. The first era, which began with the attacks on Hiroshima and Nagasaki in 1945 and ended in the 1990s, was characterized by a few established powers possessing nuclear weapons and observing a tradition of nonuse of these weapons. Now the former characteristic no longer holds, and the latter is open to debate.

China’s rise to great regional power status and, over time, to global status is the third principal and enduring challenge to U.S. security. To date, many discussions of China’s disposition paint it in stark terms: either as a threat that must be addressed along the lines of the Soviet Union or as a state that simply wants to be acknowledged as a great power and fully incorporated into the global economy and international community.

The truth probably lies somewhere in between these gloomy and rosy poles. China does not represent the type of threat posed by the Soviet Union. Unlike Soviet Russia, China is not wedded to an aggressive expansionist ideology. Whereas the United States had no significant commercial relationship with the Soviet Union, it has enormous economic ties with China. Moreover, both the United States and China may have important common security interests in limiting the proliferation of WMD and combating radical Islamists.

However, China could emerge as a major threat to U.S. security in the manner of Germany against Britain a century ago. Like Germany in the late 19th and early 20th centuries, China is a rapidly rising power. The regime in Beijing is confronted by challenges to its political legitimacy; growing ecological problems; an economy that has enjoyed remarkable growth but may be entering a more mature period characterized by slower growth; serious demographic problems that could induce societal instability; a rapidly growing dependence on foreign energy supplies; and outstanding security issues over Taiwan, the Spratly Islands, Tibet, and perhaps portions of the Russian Far East. These strains could lead to friction between Washington and Beijing.

There is some evidence that China seeks to displace the United States as the principal military power in East Asia and to establish itself as the region’s hegemonic power. If this were to occur naturally, stemming from the evolution of Chinese economic power and a corresponding increase in influence, the United States would probably accept such an outcome. However, if Chinese preeminence were achieved through coercion or aggression, this would serve neither U.S. interests in the region nor the stability of the international system and the rule of law.

The ways in which China may challenge U.S. forces will likely be quite different from those of other U.S. adversaries in the post–Cold War world. The scale of military effort that China can generate far exceeds that of any rogue state. China’s ability to deny military access to its territory is far more advanced than that of any existing or likely potential U.S. rival. China’s enormous landmass provides it with great strategic depth, a problem that U.S. defense planners have not had to address since the Cold War.

The challenge for the U.S. military today is to adapt its forces to confront the more novel forms of Chinese military power. The United States needs to create and maintain a military balance in East Asia that is favorable to itself and its allies and guards against contingencies that might tempt the Chinese to act coercively or aggressively. The United States should also encourage China to cooperate in areas where the two countries have common security interests and convince Beijing that outstanding geopolitical issues should be resolved according to accepted international legal norms.

Lack of vision

The QDR offers a reasonably clear vision of how DOD intends to prosecute the war in which it is now engaged: the war against radical Islamists. On China, which is euphemistically described as a country at a “strategic crossroads,” the QDR’s strategy is much less clear. And the QDR is even less clear on how the United States will address the problem of nuclear rogue states or the failure of nuclear-armed states.

In the case of radical Islam, the approach is generally active and aggressive, reflecting a belief that the defense of the U.S. homeland is best assured by engaging the enemy as far from U.S. shores as possible and by keeping up the pressure on radical Islamist organizations, thus leaving them little time to organize and plan future attacks, let alone carry them out. The military strategy envisions U.S. forces, in combination with those of friends and allies, working to break down radical Islamist terrorist cells within friendly states. It also calls for surveillance of failed and ungovernable states and taking quick and decisive action once terrorist cells are identified.

Hence the QDR emphasizes highly distributed special operations forces, either working in tandem with similar indigenous or allied forces to defeat terrorist groups or preparing to act quickly on their own if such help is not available. It also emphasizes developing partnerships and leveraging that capacity as a means of expanding the capability needed to defeat radical Islamists, especially those waging insurgencies in Afghanistan and Iraq.

The report acknowledges that China is developing a worrisome set of military capabilities and is likely to continue making large investments in high-end asymmetric military capabilities, or what some military writers refer to as the “assassin’s mace” set of capabilities. These include electronic and cyber warfare; counter-space operations; ballistic and cruise missiles; advanced integrated air defense systems; next-generation torpedoes; advanced submarines; strategic nuclear strike capability from modern, sophisticated land- and sea-based systems; and theater unmanned aerial vehicles.

The QDR asserts that the Pentagon will pursue investments that “preserve U.S. freedom of action” and “provide future Presidents with an expanded set of options” for addressing the potential Chinese threat. It does not, however, explain how China might use its capabilities to threaten U.S. security interests and freedom of action. Nor does it present a plan for how U.S. investments would enable the military to dissuade, deter, or defend against such efforts.

It seems likely that DOD’s decision to accelerate the development of a new long-range strike aircraft is intended to convince the Chinese that they cannot use their country’s strategic depth as a sanctuary for key military capabilities such as ballistic missiles, land-based antisatellite systems, and command and control centers. But this is mere speculation. Understanding how the interaction of Chinese and U.S. capabilities will preserve stability in the Far East would help Congress immensely in its attempt to make informed decisions related to DOD’s force posture and investment priorities. Unfortunately, the QDR is all but silent on this matter.

In the area of nuclear proliferation, the QDR notes that “The United States must be prepared to deter attacks; locate, tag and track WMD materials; act in cases where a state that possesses WMD loses control of its weapons, especially nuclear devices; detect WMD across all domains . . . and eliminate WMD materials in peacetime, during combat, and after conflicts. The United States must be prepared to respond . . . [and] employ force if necessary, . . . [to include] WMD elimination operations that locate, characterize, secure, disable and/or destroy a state or non-state actor’s WMD capabilities and programs in a hostile or uncertain environment.” It is unclear, however, how the U.S. military will accomplish these missions, which are not hypothetical problems that may arise at some point in the distant future. They are today’s challenges.

Further on, the QDR candidly concedes that detecting fissile materials and neutralizing WMD devices are “particularly difficult operational and technical challenges.” Even collecting reliable intelligence on WMD programs and activities is judged “extremely difficult.” But the review offers little insight as to how the United States will address the WMD problem if these challenges cannot (as seems likely) be overcome. Nor does the QDR recommend investing much in the way of resources to address this problem.

Indeed, at present there appears to be little confidence that the United States can conduct preventive attacks to disarm North Korea or Iran of their nuclear materials production facilities or that it can quickly identify and secure Pakistan’s weapons in the event of a nuclear state failure there. Given the difficulties associated with taking preventive action against a country developing nuclear weapons, or of detecting, tracking, and intercepting those weapons in transit, the U.S. military may have to default to attempting to deter enemies from using WMD. However, this may be risky, because the United States has little understanding of the cost/benefit calculus of states such as Iran and North Korea, let alone nonstate entities such as al Qaeda, which seek to acquire such weapons. In the end, the QDR fails to provide a sense of how DOD will address this admittedly difficult challenge.

The QDR turns bipolar in its assessment of the resources necessary to implement the proposed strategy. It calls for a large-scale modernization effort in the coming years, the first in more than two decades. Yet it also proposes to reduce defense spending toward the end of this decade, in part by holding down personnel expenses. Given the current situation, in which even recent increases in benefits have failed to stem the decline in the quality of recruits entering the Army, such cuts are probably unrealistic. Although the QDR considers some nominal cuts in programs and personnel costs, the difficult budget choices that could mean real savings are passed on to future planners. Given the forecast by some experts that long-term funding for the defense program may be short by $50 billion a year and the Bush administration’s goal of cutting the federal budget deficit in half by 2009, it is highly unlikely that even the existing program could be executed, let alone the initiatives that address the new and emerging challenges to U.S. security.

The problem with legacy weapons

As suggested above, the proposed defense program does not address the existing and emerging threats to national security as well as it could or should. The saying “show me your budget priorities and I’ll show you your strategy” may be somewhat hyperbolic, but it contains a strong element of truth. Given the magnitude of the changes witnessed during the past four years, and with the prospect of more to come, it would make sense to expect major changes in U.S. military forces and equipment. Yet the list of projects identified as top priorities shows that the QDR leaves U.S. forces equipped primarily for traditional warfare.

Among the top-priority projects is the Army’s Future Combat System, estimated to cost nearly $150 billion. It was conceived to exploit information technologies to defeat enemy tank forces at a distance; however, none of our existing or prospective enemies are building a new version of Saddam Hussein’s Republican Guard armored force.

The Marine Corps’ V-22 aircraft, designed to hover like a helicopter and fly like a plane, has become so expensive that large-scale production is unlikely. Meanwhile, the Corps’ aging helicopter fleet that the V-22 is designed to replace is wearing out at an alarming rate, because of the high pace of operations in Iraq.

The Navy’s DD(X) destroyer, at roughly $4 billion a copy, is a firepower platform. Yet the naval challenge from China, if it comes, will be centered on its submarine force, a threat against which the DD(X) is irrelevant.

The Pentagon’s F-35 fighter program is by far the most expensive program in the defense budget, at more than $250 billion. The fighters are designed to sweep enemy aircraft from the skies and strike targets on the ground. But al Qaeda has no air force, and the most worrisome strike systems being fielded by China, North Korea, and Iran are ballistic missiles, not fighter aircraft.

Postponing important decisions on big-ticket programs diminishes the chance that cuts in those programs will be made. With the passing of each appropriations cycle, the legacy programs that the Pentagon is unwilling to scale back, or in some cases prudently terminate, increase their momentum, as they develop special interests and constituencies in the military, Congress, and the defense industry.

At the same time, other promising QDR initiatives that are actually strategy-relevant and would help our military meet new threats are at risk of being starved of funding before they hatch. Examples include a proposal to increase the number of Special Forces battalions, our most heavily deployed units in the war against radical Islamists; a new long-range strike aircraft designed to loiter for protracted periods over the battlefield, searching, for example, for terrorist activity in remote areas or missile launchers deep inside Iran or China; programs and forces to cope with the problem of detecting, tracking, and disabling WMD, especially nuclear weapons that enemies might attempt to smuggle into the United States; medical bioterror-threat countermeasures; replacing the aging air-tanker refueling fleet with new aircraft able to refuel reconnaissance and strike aircraft in flight; and increasing submarine production to send a clear signal to China that it cannot expect to threaten U.S. freedom of action in an area of vital interest or coerce U.S. friends and allies in East Asia.

How are we planning to conduct persistent extended searches for North Korean nuclear-tipped missiles emerging from their caves to launch an attack, or to deflect the efforts of China’s submarines, 10 years hence, to threaten our Navy’s ability to defend Taiwan from coercion or aggression? Which set of capabilities best addresses the principal challenges facing the United States, as identified by the QDR? Which systems would be most useful in tracking terrorists in remote areas of Africa and Central Asia, dealing with a destabilized Pakistan or Saudi Arabia (al Qaeda’s two principal targets), or thwarting radical Islamist attempts to smuggle a nuclear weapon into the United States?

Without question, priority should be given to the nascent QDR initiatives that are currently underfunded or that have no funding mandate at all. Notably, most of the mission-relevant programs cost just a fraction of the more established programs whose principal focus is on traditional forms of warfare and that are therefore, as the QDR rightly notes, of progressively less relevance to our security.

Reorienting the military services

By identifying security challenges that are very different from the planning metrics that shaped much of the U.S. defense program since the Cold War’s end, the QDR implies that first-order adjustments must be made to main elements of our defense posture. For example, military operations during the past 15 years have demonstrated that when enemies challenge the United States in traditional warfare, as in the two Gulf Wars and the 1999 Balkans conflict, air power can play an important and perhaps dominant role. Although all four military services should maintain a significant residual capability for traditional warfare, the Army and Marine Corps should be able to shift more of their capabilities away from traditional warfare and toward other challenge areas than either the Air Force or the Navy.

In particular, the Army and Marine Corps should be reoriented to face the irregular challenges to U.S. security, emphasizing capabilities associated with foreign military assistance, including building up the capacities of our partners, special operations, counterinsurgency, counterterror manhunting, and human intelligence. The Air Force and Navy must focus more on addressing traditional and prospective disruptive challenges, placing primary emphasis on countering emerging anti-access and area-denial capabilities and threats to the global commons.

It seems likely that each of the four services has an important role to play in addressing direct catastrophic threats to the U.S. homeland. These include defense against ballistic and cruise missile attack; border control; defense against delivery of WMD through nontraditional means (for example, capabilities for identifying, tagging, and tracking these weapons); and consequence management.

In addition to rebalancing service forces and the capabilities needed to address irregular, catastrophic, and disruptive challenges to U.S. security, the military should undertake key institutional changes. The professional military education system needs to be refocused to emphasize the study of Asia, the Third World in general, and radical Islam and China in particular. DOD must also transform the training infrastructure to focus more on irregular, catastrophic, and disruptive challenges to U.S. security.

The military’s foreign area officer program needs to be expanded and enhanced. Intelligence operations should place much greater emphasis on human intelligence than in the recent past. Finally, just as officers had to become “physics-literate” after the advent of nuclear weapons, today they need to become “biosciences-literate.”

A new defense industrial base strategy should be developed to foster innovation and address the possibility of significant equipment attrition. The Pentagon should develop more effective interagency relationships and relevant capabilities for dealing with irregular and catastrophic challenges to U.S. security.

Finally, with the rise of national security threats that are greater in scale and broader in scope than those confronted in the first decade after the Cold War, the United States needs capable allies and partners but for different types of missions and in different parts of the world. The administration should review the U.S. alliance portfolio, enhance selected old partnerships, and forge new ones, especially with large democratic Muslim states such as Indonesia and Turkey.

New Nukes

For the first time in decades, nuclear power is back on this country’s list of possible energy sources. New nuclear power plants are on the drawing board. Public opinion is shifting in favor of nuclear energy. Even some veteran antinuclear campaigners have begun talking up its environmental benefits. The Bush administration has been actively promoting the nuclear industry. But its latest policy initiative threatens to set back the nuclear revival.

Several trends have helped refocus national attention on the role of nuclear power in meeting the nation’s energy needs: intensifying concerns over global climate change, increasing natural gas prices, serious instabilities in the oil- and gas-rich regions of the world, and vigorous growth in domestic electricity demand. But despite generous new subsidies for nuclear construction, prospective investors in the new projects remain wary of the financial risks.

A nagging problem is the continuing uncertainty at the “back end” of the nuclear fuel cycle. Nuclear power generates electricity, but it also generates radioactive waste. This waste will remain hazardous for thousands of years. How can it be disposed of safely, with minimal risk to the lives and health of citizens today and in the future?

A public opinion survey, carried out as part of a recent Massachusetts Institute of Technology study on the future of nuclear power, revealed that nearly two out of three respondents believe that nuclear waste cannot be stored safely. That contrasts sharply with the broad consensus in the scientific and engineering community that disposing of high-level radioactive waste in mined geologic repositories can effectively isolate the waste from the biosphere for as long as it poses significant risks.

But the federal government’s decades-long track record on nuclear waste—not least its failure to meet contractual obligations to remove spent fuel from existing utility nuclear plant sites—does not inspire confidence. The drive to build a high-level waste repository at Yucca Mountain in Nevada has dominated federal fuel cycle policy for nearly two decades, to the exclusion of all other disposal options. Yet the much-delayed project still faces many obstacles.

The government remains strongly committed to the Yucca Mountain project. Growing doubts about current policy, however, suggest that a major rethink may be needed. Work at Yucca Mountain should continue. But if the project fails, an alternative will be required. And even if it eventually goes forward, it may not suffice if there is a major new commitment to nuclear power in the United States.

Is it possible to imagine a different policy, one that would engender confidence that nuclear waste will be disposed of safely and cost-effectively?

Requirements for a back-end policy go beyond successful waste disposal, important as that is. An effective policy must also contribute to the goals of controlling the proliferation of nuclear weapons; fighting terrorism; and minimizing health, safety, and environmental risks during the lengthy interval between the generation of waste and its final isolation. It must do all this while keeping nuclear power economically competitive with other ways of making electricity. Moreover, the policy must be “scalable”; that is, it must be capable of accommodating significant expansion in the number of nuclear power plants.

In this year’s State of the Union address, President Bush announced a new nuclear power initiative, the Global Nuclear Energy Partnership (GNEP). If adopted, GNEP would constitute the biggest shift in U.S. nuclear fuel cycle policy in decades. According to the president, GNEP is intended to help nuclear power expand safely and economically both at home and in other nations, including developing nations, while minimizing the risk of nuclear weapons proliferation. The centerpiece of GNEP is a scheme to accelerate the introduction of new technologies for reprocessing and recycling spent nuclear power reactor fuel. The Bush administration claims that this scheme could eliminate the need for repositories other than Yucca Mountain, cut the duration of the waste disposal problem from hundreds of thousands of years to something much shorter, and use almost all the energy in uranium fuel. This is an appealing vision, but the reality is that GNEP is unlikely to achieve these goals and will also make nuclear power less competitive economically. The good news is that there is an alternative pathway that can lead to success.

The reprocessing solution

The president’s GNEP proposal also has several other elements. One is a plan to create an international consortium of leading nuclear nations that would guarantee fuel supplies and spent-fuel management services to nations that agree not to develop their own enrichment and reprocessing plants. In the GNEP vision, the new reprocessing and recycling technologies would be deployed only in the United States and other advanced nuclear nations. The United States abandoned reprocessing in the 1970s out of fear that it would increase the risk of nuclear weapons proliferation. Although some countries followed the same path, others, including France, the United Kingdom, and Japan, proceeded with reprocessing.

The big reprocessing plants at La Hague in France, Sellafield in the United Kingdom, and Rokkasho-Mura in Japan employ the plutonium uranium extraction (PUREX) process to extract most plutonium and uranium from spent fuel. Some recovered plutonium has been fabricated into mixed-oxide (MOX) fuel and recycled to commercial light-water reactors in Europe and Japan, but an estimated 100 tons of separated plutonium remains in storage (most of it at Sellafield and La Hague). The waste stream from PUREX reprocessing contains most of the radioactive fission products in the spent fuel, along with small quantities of unextracted uranium and plutonium. The waste also contains radioactive isotopes of the transuranic elements neptunium, americium, and curium. These materials, formed during irradiation of fuel in the reactors, are collectively known as the minor actinides, to distinguish them from the larger quantities of uranium and plutonium also present in the fuel. The highly radioactive liquid waste is encapsulated in glass “logs,” which are destined eventually for high-level waste repositories. The French, British, and Japanese have been moving even more slowly than the Americans to dispose of the waste, and none has yet formally designated a repository site.

The Bush administration’s new reprocessing proposal is more ambitious than current practice. Most important, it seeks to extract and recycle all actinides in spent fuel: all uranium and plutonium, and all minor actinides. These materials account for much of the risk posed by the waste after the first several hundred years, by which time most of the fission product inventory will have decayed to harmless levels. Removing all actinides, major and minor, would facilitate waste disposal. In the GNEP scheme, the transuranic actinides would then be destroyed by recycling them to power reactors, where they would be fissioned and thus converted to shorter-lived products. Complete destruction of actinides is not feasible in conventional light-water power reactors, so specialized fast-neutron burner reactors (possibly a great many of them) must be built for this purpose. According to one estimate, it would take one 300-megawatt (MW) burner reactor to mop up the actinides discharged by three or four 1,000-MW light-water reactors. If so, the United States would need 25 to 30 of these reactors just to deal with the actinides from existing U.S. reactors. And since not even advanced burner reactors are capable of destroying all actinides in a single pass, spent burner reactor fuel would have to be reprocessed and recycled several times before the actinides could be reduced significantly.
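
As a rough back-of-envelope check of that estimate, consider the following sketch (the roughly 100-unit size of the existing U.S. light-water reactor fleet is an assumption added here for illustration, not a figure from the estimate itself):

    # Sketch: burner reactors implied by the one-per-three-or-four ratio above.
    # The ~100-reactor size of the current U.S. fleet is an assumed figure.
    existing_lwrs = 100
    for lwrs_per_burner in (4.0, 3.5, 3.0):
        burners = round(existing_lwrs / lwrs_per_burner)
        print(f"{lwrs_per_burner} LWRs per burner -> about {burners} burner reactors")
    # Prints roughly 25, 29, and 33, bracketing the 25-to-30 estimate above.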

The list of development needs for the GNEP plan is long. It includes a modification to the PUREX reprocessing process known as UREX+. PUREX currently separates plutonium in pure form. In the UREX+ modification, plutonium extracted from light-water reactor fuel would never be fully separated from its more radioactive actinide cousins. The idea is to preserve a barrier of radiation that would complicate unauthorized attempts to recover weapons-usable material. GNEP will also require the development of advanced burner reactor systems, new fuels and actinide targets for those reactors, fabrication methods for such materials, and new reprocessing technologies for reactor fuels.

This is a formidably expensive and long-term development program, and the administration has proposed to carry it out in cooperation with other advanced nuclear states. The GNEP initiative has been welcomed overseas, with the French, among others, declaring their satisfaction at having the United States finally back in the reprocessing fold. But neither the organizational nor the financial details of GNEP are yet clear. It remains to be seen how much of the development costs would actually be borne by others.

Why it won’t work

Even if the government can find the funds, GNEP is unlikely to succeed in meeting its goals. First, it will have little or no impact on the group of “first mover” nuclear power plants now in the planning stage. The full actinide recycle scheme envisaged by GNEP could not be deployed for decades— too far into the future to mitigate the uncertainty over spent fuel confronting prospective investors during the next few years, when decisions on whether to proceed with these projects will be made.

Would GNEP affect prospects for the Yucca Mountain project? Once again, the answer is probably not. Predicting the containment performance of the repository over hundreds of thousands of years, as the regulations require, will be enormously challenging. So, in principle, eliminating long-lived actinide isotopes from the waste could lighten the regulatory burden substantially. But much work remains to figure out whether developments envisaged by GNEP are feasible, work that will take far more time than the estimated 10 years it will take to license Yucca Mountain. During the licensing period, the possibility of achieving major reductions in the actinide inventory will remain just that, a possibility.

Even if these timing problems could be overcome, a significant technical problem would remain. The most optimistic advocates of actinide extraction and transmutation suggest that it will cut the time required for waste disposal to only a few hundred years. But no extraction scheme is perfect. Small quantities of long-lived actinides will inevitably find their way into the waste. Moreover, significant quantities of actinides are present in waste that has already been generated, much of it from defense programs, which is also scheduled for disposal at Yucca Mountain.

And actinides are not the only long-lived constituents of nuclear waste. A small number of fission products also have very long half-lives, notably technetium-99 (212,000 years) and iodine-129 (16 million years). Some repository risk studies suggest that these isotopes would contribute more than most actinides to the radiation dose that could be received by the repository’s neighbors in the far future. Why go to the trouble of removing actinides from the waste if these fission product isotopes are still there? No credible scheme for separating and transmuting them has yet been proposed.

Thus, although GNEP promises significant reductions in long-lived isotopes in the repository, some will remain. The biggest regulatory challenge at Yucca Mountain—demonstrating compliance with radiation protection standards for up to a million years—will not be much different with or without GNEP.

Still, there is a second benefit of removing actinides from spent fuel. After about 70 years, many fission products will have decayed away. From then on, it is the actinides (specifically, isotopes of plutonium, americium, and curium) that will contribute most to radioactive decay heat. The amounts of heat are relatively small. After 100 years, a typical pressurized water reactor fuel assembly weighing about half a ton will generate only about 200 watts of heat, and after 1,000 years, about 50 watts. Nevertheless, in a repository containing, say, a hundred thousand such fuel assemblies, the total amount of heat is significant. Making sure that it dissipates without causing overheating of the waste canisters or surrounding rock is an important design consideration.
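
To see the repository-scale implication of those per-assembly numbers, here is a minimal arithmetic sketch using only the figures in the paragraph above:

    # Sketch: total decay heat from a repository of 100,000 fuel assemblies.
    assemblies = 100_000
    watts_per_assembly = {"100 years": 200, "1,000 years": 50}
    for age, watts in watts_per_assembly.items():
        print(f"after {age}: about {assemblies * watts / 1e6:.0f} MW of decay heat")
    # Prints ~20 MW after 100 years and ~5 MW after 1,000 years: modest per
    # canister, but enough that heat dissipation shapes the repository design.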

Removing actinides means that waste canisters in the repository could be packed closer together without violating thermal limits. This would increase storage capacity, which in turn could reduce the number of repositories. Energy Secretary Sam Bodman recently suggested that GNEP has the potential to postpone a second U.S. waste repository indefinitely, even if nuclear power growth does resume. This appeals to politicians, who are almost desperately eager to avoid another politically painful repository-siting process. But offsetting this promise is the requirement, also implicit in GNEP, to find sites for new reprocessing plants, fuel and target fabrication facilities, and fast-spectrum burner reactors. Each of these may be easier to site than a second waste repository, though perhaps only marginally so. But GNEP is likely to increase the number of required nuclear sites, possibly by a large margin.

Moreover, as far as prospects for the Yucca Mountain repository itself are concerned, the space-conserving attributes of GNEP are not obviously positive. If there are any people left in Nevada still favorably disposed to the project, the idea of a blank check enabling the nuclear power industry to dispose of all future waste at Yucca Mountain may be enough to tip them over the edge.

In sum, the GNEP initiative is at best a policy for the long term. It will have little bearing on the near-term outlook for nuclear power. It will not help the nuclear plants currently at the planning stage to move ahead. It will have little or no beneficial effect on the Yucca Mountain repository’s application for a license. And it is not a substitute for the Yucca Mountain project. This is not an indictment of GNEP, but it is important to be clear about what problems GNEP will not solve as well as those that it might.

A near-term option

Is there a back-end policy that would have a near-term impact? One possibility would be for the federal government to establish a centralized storage system for spent fuel. In such a system, the fuel, after cooling for several years in water-filled pools adjacent to the reactors, would not be reprocessed but would simply be shipped offsite and stored for several decades either at a few regional facilities or at a single national facility. The fuel would be contained in dry casks: sealed metal cylinders enclosed within thick concrete outer shells. Passive air cooling of the surfaces of these concrete casks is sufficient to remove the spent-fuel decay heat. The thick concrete shells provide protection against floods, tornados, and projectiles, as well as shielding against the radiation emitted by the fuel. Offsite storage in concrete casks would also present fewer risks of terrorist attack than leaving the fuel where it is, in reactor storage pools.

If the federal government were to launch a new centralized spent fuel interim-storage initiative during the next year or two, it could then credibly guarantee to take ownership of the spent fuel discharged by the currently planned new power plants and move it offsite within, say, 10 years of discharge. Such a guarantee, which would be consistent with existing federal obligations under the Nuclear Waste Policy Act, would be the single most effective action the government could take to reduce waste management uncertainties confronting investors and would surely help these projects move forward. In this it is superior to GNEP.

In the past, some in industry and government have opposed a federal spent fuel interim storage facility because they feared it might deflect attention from developing a final repository. Such fears now seem beside the point. The Yucca Mountain project is now sufficiently far advanced that whether it eventually succeeds or fails will hinge on other issues. As a matter of fact, storing the spent fuel offsite for an extended period could even improve Yucca Mountain’s prospects. It might lessen the pressure to freeze the repository design, allowing more time for the emergence of other technical approaches that would enhance the project. But these are speculations. The prudent assumption is that there would be no positive impact. In this, however, interim storage would be no different from GNEP.

A long-term comparison

Storing spent fuel in dry casks is feasible for several decades and perhaps for much longer. But it is not a permanent solution. At some point, the fuel must either be disposed of once and for all or be reprocessed and the resulting waste disposed of. Will the country be better off with an interim strategy of centralized cask storage followed by direct disposal or with a GNEP strategy based on reprocessing? The comparison must address each of the issues relevant to a back-end policy: waste disposal, nuclear proliferation, safety, and economic competitiveness. It must also consider financial and political feasibility.

Waste disposal. The opportunity to delay or even eliminate the need for a second or subsequent repositories is a major motivation for GNEP. Removing heat-emitting actinides and increasing the canister packing density would mean that more waste could be stored at Yucca Mountain. To exploit this advantage, Yucca Mountain’s statutory capacity limit of 70,000 metric tons (MT) equivalent of spent fuel would have to be lifted. Congress is likely to take this step, given its strong desire to avoid another repository-siting imbroglio. But relaxing the limit would allow substantially more waste to be stored at Yucca Mountain even without extracting the actinides. The 70,000 MT limit was imposed originally for political rather than technical reasons. The actual physical capacity of Yucca Mountain is believed to be considerably greater; according to one recent estimate, from four to nine times greater. Moreover, the approach of simply delaying disposal by allowing the spent fuel to cool in concrete casks for several decades would also increase the effective storage capacity of the repository. In the end, the GNEP strategy might enable more waste to be squeezed into the Yucca Mountain facility. But whether this would translate into a practical advantage over interim storage is not clear, since in neither case would it be necessary to open a second repository for at least several decades.
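
A quick sketch of what that estimate implies, using only the statutory limit and the four-to-nine-times multiplier quoted above:

    # Sketch: physical capacity implied by the "four to nine times" estimate.
    statutory_limit_mt = 70_000
    for multiple in (4, 9):
        print(f"{multiple}x the statutory cap: about {statutory_limit_mt * multiple:,} MT")
    # Prints roughly 280,000 to 630,000 MT, versus the 70,000 MT legal limit.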

Nuclear proliferation. Nuclear power creates potential security risks. Unless the risk of misusing the commercial nuclear fuel cycle to gain access to technology or materials as a precursor to acquiring nuclear weapons is kept low, nuclear power will not fulfill its potential as a global energy source. According to the Bush administration, a key goal of GNEP is to field fuel cycle technologies that are not only less waste-intensive but also more proliferation-resistant. In fact, the most important element of the GNEP initiative from a nonproliferation perspective is not the advanced reprocessing and recycling scheme, but rather the proposal to establish an international consortium of advanced nuclear nations to guarantee fuel cycle services to other countries, especially in the developing world. If implemented effectively, this proposal could indeed reduce incentives for such countries to build enrichment or reprocessing facilities of their own. It deserves strong support.

The GNEP proposal to introduce full actinide recycling is also advertised as contributing to nonproliferation. But whether its impact is positive or negative depends on what it is being compared to. If the countries that now do conventional PUREX reprocessing—principally France, the United Kingdom, Russia, and Japan—were to adopt the UREX+ process, and as a result stopped accumulating separated plutonium, the outcome might be positive. But in practice, the proposed UREX+ strategy of keeping the plutonium mixed with other, more radioactive transuranic isotopes might not create much of a radiation barrier to potential proliferators. Recent calculations suggest that the radiation level would not be particularly high—perhaps as much as several orders of magnitude lower than the intense radiation fields associated with spent fuel. So adoption of UREX+ reprocessing of light-water reactor fuel combined with reprocessing of advanced burner reactor fuel would introduce sizeable new flows and stocks of contaminated plutonium that might be only marginally better protected from would-be proliferators than pure plutonium, and much less protected than if the plutonium simply remained in the spent fuel. For countries that are not now reprocessing, including the United States, this would not be a positive development. Additional protection could be obtained using a variant of the UREX+ process that mixed certain radioactive fission products in with the plutonium and the minor actinides. But these fission products would have to be separated from plutonium before fuel fabrication and then recycled back to the waste stream, adding further, costly steps to the GNEP fuel cycle.

Another argument for GNEP, and for reprocessing more generally, is that it will eliminate the risk that the plutonium could eventually be recovered from a repository containing unreprocessed spent fuel and then used for weapons. (The French have recently cited this “plutonium mine” scenario as one of their primary motivations for continuing to reprocess.) The technical feasibility of plutonium recovery will indeed increase with time as the fission product radiation barrier decays away. On the other hand, the deeper and more remote the repository, the less plausible such a scenario. In any case, it is difficult to assess the significance of avoiding this risk. The value of eliminating one particular technical means for malevolent behavior that might or might not occur centuries or millennia from now is a question perhaps better addressed by philosophers than by engineers or economists.

Still, the proliferation tradeoff between GNEP and the dry-cask storage and disposal strategy is conceptually clear. In the case of GNEP, the goal of ensuring that no plutonium would be available to nuclear felons centuries from now is achieved at the price of an elevated proliferation risk during the several-decade period between waste generation and disposal, as well as the additional economic costs, plus health, safety, and environmental risks, incurred during that time. In the case of dry-cask storage and disposal, the goal of minimizing proliferation risks during that interval is achieved at the price of not being able to rule out malfeasance involving plutonium hundreds of years into the future. Although this is a tradeoff on which reasonable people could disagree, the advantage clearly seems to lie with dry-cask storage and disposal.

Safety. The safety and environmental performance of dry-cask technology for spent fuel storage has been demonstrated at more than 30 U.S. nuclear power plant sites, where casks of various designs have been in use for up to 20 years. A centralized spent-fuel storage facility, although larger in scale, would be essentially identical in concept. In contrast, the complex fuel cycle envisaged by GNEP will require the development of a host of new technologies and facilities. A major engineering and regulatory effort will be needed to assess the safety and environmental performance of an integrated GNEP fuel cycle system. It is possible that safety risks from such a system could be reduced to the level of those of a dry-cask storage facility, although this seems unlikely if only because there would be so many more GNEP facilities and sites. Also, the historical safety record of reprocessing plants around the world has not been good. Once again, the advantage lies with the dry-cask storage option.

Economic competitiveness. Unfavorable economics has been one of the main barriers to new nuclear power plant investment in the United States for nearly three decades, and it remains a major concern. Keeping generating costs down will be crucial for future nuclear power plants selling their electricity into competitive wholesale power markets.

The costs of reprocessing and recycling envisaged by GNEP are uncertain, because several component technologies are still undefined. But a useful benchmark is conventional PUREX reprocessing and recycling of MOX fuel, for which cost information is available. This information makes clear that the conventional MOX fuel cycle is more costly than the alternative of not reprocessing. There is no dispute about this, although opinions differ about how large the cost penalty really is.

Much of the disagreement hinges on arguments about who should pay the penalty. The French argue that the MOX cycle cost penalty is too small to worry about. Not coincidentally, they assume that the cost will be borne by the entire fleet of power plants, not just the ones that are using MOX fuel. With that assumption, together with optimistic but not implausible assumptions about the costs of PUREX reprocessing and MOX fuel fabrication, the impact on the overall costs would indeed be fairly modest: The fleet-average fuel cycle cost would increase by about 40% and the total nuclear electricity cost would increase by about 4%. In effect, the nuclear industry would have to pay a recycle tax of about 0.25 cent per kilowatt hour, on top of the tax of 0.1 cent per kilowatt hour it currently pays to the federal government for (yet to be delivered) waste disposal services.
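
The percentages above hang together arithmetically under plausible baseline costs; the following sketch uses assumed round numbers for the baseline fuel cycle and total generating costs (roughly 0.6 and 6.5 cents per kilowatt-hour, respectively, both assumptions added here for illustration):

    # Sketch: consistency check of the ~40% and ~4% figures.
    recycle_tax = 0.25        # cents per kWh, from the text
    base_fuel_cycle = 0.60    # cents per kWh, assumed baseline fuel cycle cost
    base_total = 6.5          # cents per kWh, assumed total nuclear generating cost
    print(f"fuel cycle cost rises ~{100 * recycle_tax / base_fuel_cycle:.0f}%")   # ~42%
    print(f"electricity cost rises ~{100 * recycle_tax / base_total:.1f}%")       # ~3.8%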

In France, with its single national utility, it is reasonable to assume a cross-subsidy in which all commercial nuclear power plants pay for the higher fuel cycle costs of the relatively small number that would be doing the recycling. But in the United States, unless this requirement were imposed by regulatory fiat, nuclear plant operators opting for recycling would either have to absorb the entire cost increase themselves or pass part or all of it on to their customers. That cost increase would likely be prohibitive. If we again use the conventional MOX cycle as the benchmark, it would mean a 300% increase in the nuclear fuel cycle cost, or roughly a 20% increase in the total cost of electricity. Given the choice, a private nuclear generator would not shoulder such a burden on its own; a government subsidy would be needed. The total subsidy would amount to roughly $2 billion per year for a nuclear power plant population the size of the current U.S. fleet.
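
The $2 billion figure can be checked the same way, applying the quarter-cent-per-kilowatt-hour fleet-average penalty cited in the previous paragraph to the output of a fleet of the current size (annual U.S. nuclear generation of roughly 800 billion kilowatt-hours is an assumption added here for illustration):

    # Sketch: annual subsidy at 0.25 cents per kWh across fleet-scale generation.
    us_nuclear_kwh_per_year = 800e9          # assumed annual U.S. nuclear output
    penalty_dollars_per_kwh = 0.0025         # 0.25 cents per kWh
    subsidy = us_nuclear_kwh_per_year * penalty_dollars_per_kwh
    print(f"about ${subsidy / 1e9:.1f} billion per year")   # ~$2.0 billion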

Could technological advances reduce the cost of full actinide recycle below the conventional MOX cycle cost? This cannot be ruled out, but common sense suggests that it is unlikely. Full actinide recycle is inherently more demanding than the current version of the closed fuel cycle, which seeks only to recover and reuse plutonium in the spent fuel. The GNEP vision entails a complex large-scale extension of the existing nuclear power industry, with scores of burner reactors and associated reprocessing and fuel fabrication facilities and major additional stocks and flows of nuclear materials. Reducing the costs of all this should be an important R&D objective. But the only sensible assumption today is that this add-on will not be economically viable on its own, which means that it would require a much more active government role in the nuclear industry than at present. It is difficult to reconcile this vision with the goal of maximizing the economic viability of nuclear power in an increasingly competitive electric power industry. Small wonder that the nuclear industry has greeted the GNEP initiative politely, but coolly.

The centralized storage plus direct disposal alternative would also require increased government involvement, but this would be limited to one or at most a few storage facilities, and the additional cost is unlikely to exceed 1% of the total cost of nuclear electricity, or about 0.05 cents per kilowatt hour. Once again, the advantage seems clearly to lie with the centralized storage option.

The biggest objection to direct disposal of spent fuel is that it forgoes the extension of fuel supplies that closing the fuel cycle would provide. The once-through fuel cycle is credible in the long term only if sufficient uranium ore is available at reasonable cost to support a large-scale expansion of nuclear power. Present data suggest that the necessary uranium will be available at an affordable price for a very long time, even with such an expansion. Eventually, perhaps, the price will rise high enough to justify plutonium recovery and recycle on economic grounds. Although this does not seem likely today, the possibility exists. Thus, it is understandable that some are reluctant to “throw away” plutonium by disposing of spent fuel directly. The virtue of interim dry-cask storage is that no decision need be made, either to reprocess or to permanently dispose of the fuel, for decades.

What next?

In sum, on every important count, centralized dry-cask storage will serve the Bush administration’s own objectives for nuclear power as well as or better than the GNEP fuel cycle scheme for at least the next few decades. Unfortunately, the Bush administration has its priorities backward. It has put the GNEP initiative front and center but is taking no action on centralized spent fuel storage. Yet it is the latter that can help pave the way for a new round of nuclear power plant construction, and it is the latter that will more nearly achieve the goal of a safe, economically competitive, and proliferation-resistant fuel cycle for the next several decades.

Eventually the fuel-conserving and repository space–conserving attributes of the GNEP fuel cycle might deliver real value. Or they might not. At this stage, the government should be supporting a systematic, rigorous R&D effort, but not a major program of reactor and fuel cycle demonstration projects, and certainly not a complete reorientation of the back end of the fuel cycle. The goals articulated by President Bush in introducing the GNEP initiative deserve strong support, but the most important priorities for achieving them are:

First, focus in the near term on helping to bring the first group of new nuclear power plant projects to fruition. Significant risk-reducing measures are already in place for the first several thousand megawatts of new nuclear capacity. The administration should now concentrate on how to reduce the spent fuel risk perceived by investors in those projects. It should be mindful that an early commitment to reprocessing will not reduce those risks, and could well increase them.

Second, implement a plan to move commercial spent fuel from reactor sites to one or a few secure federal interim storage facilities as quickly as possible. Siting such facilities will be difficult, but no more so than siting new GNEP fuel cycle facilities.

Third, work with other advanced nuclear nations to provide comprehensive fuel cycle services, including spent fuel storage services, to countries that agree not to invest in their own enrichment and reprocessing plants, as called for in the GNEP initiative.

Fourth, launch a broad, balanced R&D program to prepare new fuel cycle options for deployment beyond 2050. The program should focus on both advanced once-through fuel cycles and closed-cycle options. Key elements should include:

  • A uranium resource evaluation program to determine with greater confidence the global uranium resource base.
  • Development of next-generation geologic disposal technologies for spent fuel and reprocessed waste, including incremental engineering and materials improvements to the mainstream mined repository approach as well as more far-reaching innovations such as deep borehole disposal. These advances could offer repository risk reduction benefits as large as those claimed for full actinide recycle.
  • Development and evaluation of alternative fast burner reactors and fuels.
  • Development and evaluation of advanced reprocessing technologies.

For the next 5 to 10 years, emphasis should be on fundamental research and laboratory-scale experiments. This should be supported by strong systems analysis and extensive modeling and simulation, and should explore a broad range of reactors, fuel types, and fuel cycles.

The Bush administration is pushing for early selection of particular fuel cycle technologies and early commitment to large-scale multibillion-dollar demonstration projects. But these are neither necessary nor wise at this stage, and there is a real risk that they will siphon off financial resources and public support. It would be sad—and ironic—if this pronuclear administration, presiding over the most promising environment for renewed nuclear power growth in decades, ended up undermining the prospects for a nuclear revival.

DEE-FENSE! DEE-FENSE!: Preparing for Pandemic Flu

Vaccination to prevent viral and bacterial diseases is modern medicine’s most cost-effective intervention. Were a vaccine to be available quickly after the onset of the widely predicted pandemic from an H5N1 strain of avian influenza, it might save scores of millions of lives worldwide. But that option is not feasible.

Why can’t a country that developed the atomic bomb (60 years ago) and the polio vaccine (50 years ago) and put a man on the moon (almost 40 years ago) now produce an appropriate vaccine? The answer is an unfortunate confluence of biology and public policy.

During the past several years, an especially virulent strain of avian flu, designated H5N1, has ravaged flocks of domesticated poultry in Asia and spread to migratory birds and (rarely) to humans and other mammals. It has been detected in much of Europe, Asia, Africa, and the Middle East, and it continues to spread with each seasonal migration of wild birds. Since 2003, there have been over 200 cases of H5N1 infection in humans, more than half of whom have died—a shockingly high mortality rate for an infectious disease.

Public health experts and virologists are concerned about the potential of this strain because it already possesses two of the three characteristics needed to cause a pandemic: It can jump from birds to humans, and it produces a severe and often fatal illness. If additional genetic evolution makes H5N1 easily and sustainably transmissible among humans—the third characteristic of a pandemic strain—a devastating worldwide outbreak could become a reality. The ease and frequency of worldwide travel could give rise to the first true jet-age flu pandemic.

It is not possible to predict the timing of that last evolutionary step, because the genetic changes that would give rise to it are wholly random molecular events. But mutations occur each time the virus replicates, so the more H5N1 viruses that are produced, the more likely it is that the event will occur. As avian flu spreads and more birds are infected, there are trillions more virus particles in existence every day. Flu can also evolve when both human and animal strains of flu infect a person or animal simultaneously, offering an opportunity for swapping segments of nucleic acid that code for viral proteins. That process, too, is favored by the presence of more viral particles in more locations around the world.

Some background is necessary to understand the threat and the possible public health, economic, and political consequences of a flu pandemic. The exterior of the flu virus consists of a lipid envelope from which project two surface proteins: hemagglutinin (H) and neuraminidase (N). The virus constantly mutates, which may cause significant alterations in either or both of these proteins, enabling the virus to elude detection and neutralization by the human immune system. A minor change is called antigenic drift; a major one, antigenic shift. The former is the reason why flu vaccines need to be updated from year to year; an example of the latter was the change in subtype from H1N1 to H2N2 that gave rise to the 1957 pandemic. This new variant was sufficiently distinct that people had little immunity to it. The rate of infection with symptomatic flu that year exceeded 50% in urban populations, and 70,000 people died from it in the United States alone.

Ordinary seasonal flu, which is marked by high fever, muscle aches, malaise, cough, and sore throat, is itself a serious illness that on average kills about 36,000 people annually in the United States, but the pandemic strains are often both qualitatively and quantitatively worse. The H5N1 strains of bird flu have a predilection for infecting the tissues of humans’ lower respiratory tract; that is, deep down in the smaller airways and in the tissues where oxygen exchange takes place, where they elicit hemorrhage and “cytokine storm,” an outpouring of hormone-like chemicals that causes huge amounts of fluid to accumulate in the lungs. In this way, these pandemic strains of flu may kill within 24 to 48 hours of the onset of symptoms.

By contrast, seasonal flu most often kills not directly but via secondary bacterial infections that follow the initial viral infection of the upper respiratory tract. (Seasonal strains’ affinity for the upper respiratory tract also helps to explain why they spread so rapidly: The virus particles are readily expelled by coughing and sneezing, and when other people are thereby exposed, the viruses need only travel a short distance in the body in order to attach and infect.) A strain of bird flu that is able to infect both the upper and lower respiratory tracts, similar to one isolated from a patient in Hong Kong in 2003, would have the potential to cause a highly lethal pandemic.

Misguided policies

The problems related to the biology of the flu virus have been compounded by public policy decisions by Congress and the government’s executive branch that ensure low return on investment and high exposure to legal liability for vaccines. Why should companies make products that are only marginally profitable and whose sale, even in the absence of any negligence or wrongdoing, carries the threat of huge financial risks from lawsuits?

Several kinds of policies are responsible for our vaccine quagmire. The Vaccines for Children Program, for example, was an innovation of the Clinton administration that disrupted market forces and dealt a blow to vaccine producers. Established in 1994, it created a single-buyer system for children’s vaccines, making the government by far the largest purchaser of childhood vaccines, at a mandated discount of 50%. Try extorting that kind of discount from manufacturers of vehicles for the U.S. Postal Service or of Meals Ready to Eat for the Department of Defense, and see how long they keep bidding on government contracts.

Arbitrary and excessively risk-averse regulation is another obstacle. The Food and Drug Administration (FDA) has been especially tough on vaccines, continually raising the bar for approval. The agency required huge clinical trials—more than 72,000 children (and another 44,000 in post-marketing studies)—of a recently approved vaccine against rotavirus (a common, sometimes fatal gastrointestinal infection in children) in order to be able to detect even very rare side effects before approval. In fairness, one does need to be concerned about a new vaccine that is intended for large numbers of healthy people; even a rare but serious side effect in a drug administered to hundreds of millions of people could have significant impacts. Thirty years ago, the federal government attempted to administer “swine flu” vaccine to the 151 million Americans age 18 and over, but the program was halted after a small number of individuals suffered generalized paralysis after vaccine administration.

The challenge for regulators is to find an appropriate balance of pre- and postmarketing clinical trials that demonstrate a vaccine’s ability to reduce the incidence of actual community-acquired infections, or that measure efficacy by means of surrogate endpoints such as laboratory measures of antibody-mediated and cellular immunity. In recent years, U.S. regulators have frequently been overly conservative in their requirements. They have rejected evidence of safety and efficacy from European and Canadian vaccine approvals and prematurely withdrawn lifesaving products from the market because of mere perceptions of risk.

It is difficult to guess how long the required clinical trials and data analysis might take for a new vaccine against pandemic flu, but if regulators fail to use surrogate endpoints appropriately, it could be years: a catastrophic delay in the event of a pandemic.

The FDA’s recent announcement of policies intended to streamline the development and approval of annual and pandemic flu vaccines offers some cause for optimism, but even this advance must be qualified. The agency published “guidance,” not “guidelines,” for vaccine developers. The critical difference (in regulation-speak) is that guidelines bind the agency to a certain action (the licensing of a vaccine, for example) if the conditions specified in the document are met, whereas guidance is only advisory. In fact, lest that distinction go unnoticed, every page of the guidance documents admonishes in bold font, “Contains Nonbinding Recommendations.” This provides the FDA with commodious wiggle room. Another disappointment is the conspicuous absence of any mention of reciprocity with approvals by foreign regulators such as the European Medicines Evaluation Agency. Reciprocal approvals would obviate the need for companies to meet the slightly different but largely redundant requirements of many different regulatory agencies.

As a result of our flawed public policy, innovation has suffered and vaccine producers have abandoned the field in droves, leaving only four major manufacturers and a few dozen products. We are woefully short of capacity for producing a vaccine against a pandemic strain of flu; production cannot actually begin until we have the strain in hand and have performed various genetic manipulations so that it doesn’t kill the chicken embryos in which flu vaccines currently are grown. An optimistic estimate is that there is sufficient flu vaccine capacity worldwide for approximately 450 million people, but that calculation assumes that two intramuscular inoculations of 15 micrograms each would confer protection. Recently developed experimental vaccines against H5N1 required two doses of 90 micrograms. That suggests that the true capacity might be closer to only 75 million people, or a little more than 1 of every 100 people on the planet.
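
The capacity arithmetic in that paragraph works out as follows (the world population figure of about 6.5 billion is an assumption added here for illustration):

    # Sketch: scaling stated vaccine capacity to the higher H5N1 dose requirement.
    capacity_at_standard_dose = 450e6        # people, at two 15-microgram doses
    standard_dose_ug, h5n1_dose_ug = 15, 90
    effective_capacity = capacity_at_standard_dose * standard_dose_ug / h5n1_dose_ug
    world_population = 6.5e9                 # assumed mid-2000s world population
    print(f"about {effective_capacity / 1e6:.0f} million people")                 # ~75 million
    print(f"about {100 * effective_capacity / world_population:.1f}% of the world")  # ~1.2%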

Another worry is that when a pandemic strain of H5N1 avian flu appears, virtually all of the world’s flu vaccine development and production capacity might shift to producing a vaccine against it, which will leave us vulnerable to the nonpandemic strains that cause the usual annual or seasonal flu. As Anthony Fauci, director of the U.S. National Institute of Allergy and Infectious Diseases, has observed, “The biggest challenge unequivocally is vaccine production capacity.”

Remedying that will not be easy. Currently, it requires five to six years (and a massive investment) to build and validate a new manufacturing plant to the satisfaction of regulators. Moreover, the currently available vaccines are made using half-century-old technology: the cultivation of live virus in scores of millions of fertilized chicken eggs.

Hope from the lab

Some good news concerning vaccines is emanating from research laboratories. Several recent advances suggest ways to induce a potent immune response to the H5N1 strain, but our ability to translate these findings into commercial products is a long way off.

Using genetically engineered common-cold viruses, two separate U.S. laboratories have successfully vaccinated mice against various strains of H5N1 bird flu. Both teams used adenoviruses that were genetically modified so that they are unable to replicate and that carry the gene expressing hemagglutinin, a surface protein of H5N1. In effect, these are adenovirus-influenza hybrids.

Injected into mice, various versions of the vaccines generated a potent immune response that consisted of both antibodies and activated white blood cells and that protected the animals against a challenge by high doses of H5N1. Significantly, the vaccines were able to protect against viruses that did not precisely match the strains from which the hemagglutinin was derived.

Conventional flu vaccines induce only antibodies, a limitation that requires new vaccines to be developed constantly to keep up with the mutating, evolving virus. The dual response and the cross-protection seen in these experimental vaccines are important because they increase the likelihood that such vaccines will be at least partially effective against newly arising variants of H5N1.

Another recent development offers a possible generic method to enhance the immunogenicity of many different vaccines. Researchers at the University of British Columbia used genetic engineering techniques to incorporate into various viral vaccines two proteins that help cells of the immune system to process foreign antigens. They found that these proteins act as a potent booster, inducing the immunized recipient to produce greater numbers of immunologically active cells against foreign antigens also contained in the vaccines. In their animal model, in which a challenge of a potentially lethal dose of virus was administered after vaccination, one of their engineered vaccines “provided protection against a lethal challenge . . . at doses 100-fold lower” than controls that did not have the modification. Although these experiments involved viruses other than influenza, the technique should be applicable as well to flu and to adenovirus-flu hybrids.

Scientists are also working on ways to boost the immunogenicity of vaccines by adding chemical ingredients known as adjuvants, which make it possible to use lower doses of the vaccine antigens themselves. Adjuvants are not specific to particular antigens but act in various ways to activate one or more components of the immune system. They may help to display vaccine antigens to appropriate antigen-presenting-cell (APC) types; to target particular intracellular APC compartments for optimal antigen presentation; or to induce appropriate APC maturation steps that increase the stimulation of T lymphocytes, activate antibody production, and induce immune memory.

France’s Sanofi-Pasteur and Australia’s CSL have begun trials of candidate pandemic vaccines that use adjuvants made of alum, an aluminum salt, the only adjuvant currently approved for use in humans in the United States. California-based Chiron Corporation might have a more promising candidate. In clinical trials of an adjuvant called MF59, which has been incorporated into a candidate vaccine being tested for protection against avian flu strain H5N1, vaccine containing adjuvant was significantly better than vaccine alone at eliciting antibodies to H5N1. An important potential advantage of this adjuvant-containing vaccine is the discovery that it may offer protection against H5N1 even if the virus’ cell-surface proteins change, or “drift,” in a way that makes them slightly different immunologically.

That suggests a viable, if not optimal, strategy to prepare for the pandemic: Stockpile vaccine against the current avian flu H5N1 strain, with adjuvant added to boost the immune response. Although it would not be a perfect match to the pandemic strain, it might be useful as a first “priming” dose that could afford some protection until vaccine against the actual pandemic flu strain is available.

An obstacle to this approach is that MF59, used in European vaccines since 1997, has never been approved for use in a vaccine sold in the United States, at least partly because R&D on vaccines has become so unprofitable and unattractive that there has been little incentive for vaccine developers to perfect a technology to boost their efficiency or to perform the expensive clinical testing necessary to license what regulators would regard as a new vaccine technology. Also, the addition to existing vaccines of an adjuvant, even one with a long history, would make a previously approved vaccine a “new drug” that would require exhaustive testing (especially given that the products would be administered to very large numbers of healthy people).

Various research groups are studying alternative routes of administration of vaccines (intradermally, for example, instead of via the usual intramuscular route) in order to be able to use smaller dosages and/or to elicit heightened immune responses. One study found that the dose of flu vaccine administered intradermally could be reduced to 40% of the usual intramuscular dose without compromising the immune response.

Opting for a conservative strategy, British health authorities have ordered sufficient conventionally produced vaccine against the actual pandemic strain to treat every person with the needed two doses. The limitation of this approach is that because production cannot begin until the pandemic begins and the virus is in hand, there will be a substantial lag: perhaps nine months or more until the vaccine is available. Why so long? The producer must demonstrate that the vaccine can be manufactured in batch after batch to high levels of purity and potency as well as conduct and analyze clinical trials. Thus, although this approach is the most definitive in the long run, it would leave the population vulnerable to the first wave of the pandemic.

In sum, the good news is that once we know the genetic sequence of the pandemic strain, we can reverse-engineer flu virus and get a candidate vaccine into animal trials rapidly; various chemicals can be used to enhance the immune response; and we have good early prototype “subunit” vaccines that use only a single gene from the flu virus, can be grown in cultured cells instead of chicken eggs, induce both antibody-based and cellular immunity, and show a high degree of effectiveness at protecting mice against challenge with a variety of H5N1 strains.

The bad news is that there are prodigious obstacles to translating these developments into a clinical setting, let alone into a commercial human vaccine.

First, adenovirus infections are extremely common in children and adults, and if recipients have previously been infected with the particular adenovirus strain used in the vaccine (there are dozens of different ones that infect humans), they may be “immune” to the vaccine. In other words, they’ll ward it off before it can carry out the “controlled infection” necessary to elicit immunity to the engineered adenovirus-flu hybrid.

Second, some adenoviruses are thought to have the potential to induce malignancies, which will likely elevate the threshold for regulatory approval.

Third, the adenovirus-flu hybrid vaccines rely on the flu gene that expresses the viral surface protein hemagglutinin, which is notorious for the antigenic drift or shift that enables the flu virus to elude vaccines. Thus, even with the advantage of being able to elicit cellular (T lymphocyte–mediated) immunity as well as antibodies, it’s unclear how effective a vaccine against the current largely bird-specific H5N1 would be against an emergent pandemic strain.

Fourth, mice are not little humans, and it is difficult to extrapolate with confidence the results of mouse experiments to humans. (Our ability to predict efficacy in humans would have been greater had the investigators used transgenic mice engineered to have a human immune system.)

Fifth, a published analysis of thousands of bird flu samples taken from across southern China illustrates the difficulty of mounting an effective vaccine strategy against a possible pandemic strain (or even the bird-specific strains) of H5N1 avian flu. The authors concluded that the region, a reservoir of the virus for nearly a decade, has spawned divergent strains that have been spread as far as Russia and Eastern Europe by migratory birds. That diversity makes choosing a vaccine strain (or strains) problematic. “The antigenic diversity of viruses currently circulating in Southeast Asia and southern China challenges the wisdom of reliance on a single human-vaccine candidate virus for pandemic preparedness,” the authors wrote.

Sixth, getting a vaccine production facility (which is very different in design from one that produces conventional small-molecule drugs) up and running to the satisfaction of regulators currently requires five to six years and a huge financial investment, and we will need vast amounts of flu vaccine.

Seventh, in the absence of an actual pandemic or of government-guaranteed vaccine purchases or payments for meeting R&D milestones, that investment might be for naught.

Finally, regulatory obstacles, especially where new technologies are involved, are daunting.

In summary, government and private-sector funding of high-quality research projects is bearing fruit, but the recent research advances leave us far from real-world solutions.

Government both giveth and taketh away, and in recent years, the latter has dominated public policy. We need incentives for industry to develop the products that we need, and the FDA’s gatekeeper function for new medicines should not be permitted to delay clinical progress unduly. The agency needs to develop and implement a plan for active collaboration on and rapid review of candidate vaccines. As has not been the case since World War II’s Manhattan Project to develop the atomic bomb, we need a robust government–university–private-sector partnership (with cooperation on issues much broader than just vaccine development) to counter a universal and dire threat.

Which strategies should we adopt? My answer is a wide variety, simultaneously and as expeditiously as possible. Just as the Manhattan Project pursued at least three methods to enrich uranium for the needed isotope U-235 on independent parallel tracks, we need to set in motion many research approaches. The Manhattan Project was arguably the most ambitious and successful R&D undertaking in history, and the threat of an avian flu pandemic argues for a similar approach: numerous parallel strategies pursued on many fronts. There is some acknowledgement of this philosophy in the Bush administration’s National Strategy for Pandemic Influenza Implementation Plan, published by the Homeland Security Council in May 2006, but its incentives for R&D are both tentative and vague. The details that are relevant to vaccines and anti-influenza drugs are unimpressive, consisting primarily of contracts for decidedly inadequate stockpiles of drugs and un-adjuvanted, probably ineffective vaccines against pre-pandemic strains of H5N1.

Vaccines are widely acknowledged to have high social value, but compared to therapeutic drugs, their economic value to pharmaceutical companies is low. Because governmental policies have caused market failures in vaccine R&D, government actions now must be an integral part of the solution.

A variety of incentives is needed to revitalize the portion of the private sector that has for so long been beleaguered by policymakers and regulators. Public policy must reward both inputs to vaccine R&D (via grants, tax credits, and the waiver of regulatory registration fees) and outputs of products (with guaranteed purchases, milestone payments when regulatory approval of new vaccines is granted, indemnification from liability claims, waiver of FDA user fees for vaccine reviews, and reciprocity between U.S. regulatory approvals and those in certain foreign countries). This effort should include aggressive funding of “proof-of-concept” R&D on various new technologies and approaches to making flu vaccine, to boosting the immune response to vaccines, and to creating greater reserve capacity for the commercial production of vaccines (including alternatives to vaccine production in eggs). And instead of being a major cause of the problem, regulators must become part of the solution. For example, they should work with companies to approve development approaches and manufacturing facilities in advance of the actual production of pandemic flu vaccine. In effect, the infrastructure would be ready to plug in the actual pandemic strain when it appears and to facilitate testing and regulatory approval.

Federal officials are largely responsible for the current lack of societal resilience needed to combat a flu pandemic. Now they must do more than fiddle while flu fulminates.

Let Engineers Go to College

The challenges that engineers will face in the 21st century will require them to broaden their outlooks, have more flexible career options, and work closely and effectively with people of quite different backgrounds. Yet engineering education focuses narrowly on technical skills rather than broadly on the full role that engineers must play in the world. Engineering education needs to develop a more comprehensive understanding of what engineers will do as their careers open up to management responsibility in business or to involvement in other areas. And if engineers are to have time for a greater variety of courses in their college years, the professional engineering credential will have to be a postgraduate degree, as it is in law, business, and medicine.

The environment for engineers and the nature of engineering careers in the United States are changing in fundamental ways. The issues with which engineers engage have become more and more multidimensional, interacting with public policy and public perceptions, business and legal complexities, and government policies and regulations, among other arenas. This is the natural result of technology becoming more and more pervasive in society and politics. Examples abound in areas such as energy, the environment, communications, national security, transportation, biotechnology, the service sector, food supply, and water resources. The engineer must now look outward and interact directly with non-engineers of many different sorts in many different ways.

Industry is increasingly global, with the result that the engineer must understand and deal with other countries and other cultures. Globalization and rapid advances in information technology are also rearranging the world employment market and functions for engineers. Many jobs that have traditionally been typical entry-level jobs for U.S. engineers are irrevocably going overseas, as we are reminded when we call help lines for assistance with computer software. Salary expectations for highly educated U.S. workers will steadily price them out of many jobs in a world that is so thoroughly connected by high-bandwidth communications. This trend will only accelerate.

Of course, many good jobs will remain in the United States, but skilled workers will find themselves changing employers and job functions more frequently than in the past. This trend results from amalgamation, restructuring, and downsizing of corporations; weakening incentives for employees to remain with an employer for career-long employment; and less job security, as well as the growing appeal of start-up companies and other entrepreneurial activities. These developments have a more powerful effect on engineering than on most other professions.

As a result of these changes, the interests of individual engineers and those of employers of engineers are diverging. The individual engineer aspires to the wherewithal for flexibility and movement, whereas employers seek those analytical and synthetic skills needed in the current job function. The society at large is moving in the direction of the individual engineer, seeing higher education as more of a private than a public benefit. This is reflected in higher tuition and fees for public higher education, diminished state support for public universities, and pressure on universities to make academic merit the sole criterion for admission. Whether this trend is good or bad is a subject for another forum, but the trend is real. To the extent that the individual has to pay the cost, the academic program should benefit the individual.

Traditional engineering education prepares the individual to fulfill a very specific role and offers little that would give the individual the flexibility to move into non-engineering areas or management. As a result, engineers are rare in Congress and other positions of public leadership. There have been relatively few among CEOs and other high-level decisionmakers.

The ability of engineers to move into other areas and to work effectively with non-engineers is limited by the narrowness and inward-looking nature of their education. Engineering is the undergraduate major that demands the fewest general education requirements. The rationale has been that engineers must learn so much math, science, and engineering to be ready to work as professionals with only a bachelor’s degree that no time remains for other subjects.

When I received my undergraduate degree from Yale 50 years ago, I received a B.E. rather than the A.B. received by classmates in other majors. Whereas other diplomas were written in Latin, mine was in English. I was in the School of Engineering, rather than Yale College (a situation that subsequently changed with the incorporation of engineering into Yale College). The distinctions thereby drawn reflect long-standing controversies at liberal arts colleges as to whether engineering belongs there and, if so, in what form.

The image of engineering as a narrow discipline with an excess of required courses has made it difficult to attract students with wider outlooks, interests, and learning styles. Interestingly, graduate engineers have shown particular interest in the new master of liberal arts continuing-education degree programs, suggesting that in hindsight they perceive the narrowness of their education. The image of narrowness is thought by many to be a primary obstacle to increasing the number of women and minorities who become engineers. Yet the involvement of people of all sorts is certainly needed for the future of the engineering profession.

The solution to the excessive narrowness of engineering education is conceptually simple: The undergraduate curriculum should become more like a common liberal arts degree, and the specialized training needed to succeed as a professional engineer should be provided at the master’s level.

No comparable profession accepts a bachelor’s degree as adequate preparation for a career. The recognized professional degree, and hence the primary level of accreditation, is either the professional doctorate (as in medicine, dentistry, law, and pharmacy) or the master’s (as in business, public health, and architecture). These professions are predicated on a liberal arts bachelor’s. It is no longer realistic to expect to be able to build a sufficient base of mathematics and science, provide minimal general education, and create a practicing engineer within the confines of a four-year bachelor’s degree; yet that is what we still ostensibly do. We should instead establish the master’s as the recognized and accredited professional degree, and build from a broader liberal arts undergraduate degree.

THE BEST PREPARATION FOR A PRODUCTIVE AND SATISFYING ENGINEERING CAREER IS A BROAD UNDERGRADUATE EDUCATION FOLLOWED BY PROFESSIONAL TRAINING.

The bachelor’s curriculum should provide enough variety that a graduate would also be well prepared for careers other than engineering. The graduate who does decide to continue engineering studies will have the foundation to play a number of roles open to engineers. As much as anyone else, engineers need to understand society and the human condition in order to function well in working with others and to enjoy a culturally enriching life. Another benefit is that a student will develop thinking and writing skills in a variety of contexts, not just engineering. During the undergraduate years, the student should also be able to spend time studying abroad to gain direct involvement in the culture, tradition, and values of one or more other countries. Future engineers will benefit from exposure to a variety of outlooks and ways of thinking.

A recent National Academy of Engineering report, Educating the Engineer of 2020 (National Academies Press, 2005), recognizes the desirability of additional education and recommends a pre-engineering degree or BA in engineering, followed by an MS that produces a professional or “master engineer,” stating that “industry and professional societies should recognize and reward the distinction between an entry-level engineer and an engineer who masters an engineering discipline’s ‘body of knowledge’ through further formal education or self-study followed by examination.” Accreditation would exist at both levels. Lengthening the educational span is surely a step in the right direction. However, the report implies that an undergraduate engineering degree is the recommended, or even required, path toward the graduate professional degree. That does not provide enough breadth and flexibility for undergraduates. Graduate engineering programs should be open to students with a wide range of undergraduate backgrounds in order to include the widest possible mix of backgrounds and interests in the engineering profession.

Graduate programs in law, business, and medicine are open to students with a wide range of undergraduate training. However, in some cases, graduate-level professional education does rely on certain courses or categories of courses being taken at the undergraduate level. An example is medicine, which itself is a close relative of, if not a form of, engineering. Medical schools are in general agreement that an entrant to medical school should have completed courses in a certain collection of subjects, but they do not encourage a particular major or group of majors at the baccalaureate level. Instead, they encourage diverse majors and often even take variety among backgrounds into account as a desirable criterion in composing an entering class. The same practice would be beneficial for engineering.

The professional master’s degree should logically be a two-year program, with a strong emphasis on a particular engineering discipline. But even the graduate curriculum should allow enough flexibility for experiential project work and for some students also to gain a deeper knowledge of science or to take some courses, or even a minor, in areas such as economics, public policy, law, or business.

The change to the master’s as the professional degree need not imply that engineering faculty would largely withdraw from undergraduate education. There will be a continuing need for early courses that exemplify the nature of engineering. Beyond that, engineering courses can themselves be part of the general education program of a university. A notable new initiative along those lines is that of the Center for Innovation in Engineering Education at Princeton University to create courses that make it possible for 90% of Princeton undergraduates to take at least one engineering course. Developing more engineering courses aimed at all undergraduates would help greatly in creating more technologically literate U.S. leaders.

Going further, there can also be an engineering or technology liberal arts degree that can draw students with much wider interests and career plans. Harvard, Yale, Dartmouth, Brown, and Lafayette already have such programs. These engineering AB degrees do not require the full math, science, and engineering requirements that are part of current accredited engineering programs and are not intended to be pre-engineering degrees. Graduates of these AB programs can proceed to medical, business, or law school, or they can take any of the highly varied career paths pursued by liberal arts majors, with the added value of having had substantial direct exposure to engineering. They can find rewarding fields where technical awareness is useful. Analogous liberal arts programs can be found in other fields such as biomedical sciences, legal studies, and business.

A substantial number of engineering graduates begin their education in community college. Making the master’s the professional degree would make it easier for those who attend community college to follow or move to the engineering track. This effect would probably have the additional benefit of increasing the ethnic and gender diversity of the engineering profession.

Change will not be easy. The current bachelor’s degree is well entrenched as the entry point for the profession. Additional education will increase expenses for students and universities. Most companies have been more than willing to hire bachelor’s graduates in engineering. Some value the lower salary that goes with a BS engineering degree— another example of the interests of individual engineers diverging from those of employers. Many students are pleased that a four-year engineering degree provides the near-term benefit of a good starting salary. The restructuring proposed here benefits engineering graduates over the long term by giving them more flexibility and the wherewithal eventually to earn even higher salaries and to enjoy more rewarding careers.

Students who might be put off by the need for more time in school should keep in mind that because the engineering curriculum has become so packed, the typical engineering student requires close to five years to earn a degree. The proposed new structure should make it much easier to complete a bachelor’s degree in four years. Depending on the budgeting policies of the institution, much or all of the institutional cost may be offset by engineering departments receiving higher funding per student because of the shift toward graduate-level education.

Other professions provide evidence that change can happen. Medical education steadily lengthened, became more uniform, and made the bachelor’s degree and pre-medical education prerequisites during the first half of the 20th century, largely because of an evolving consensus among medical schools. A similar, but less ordered, transition occurred for law. Pharmacy was originally accredited at the bachelor’s level but has recently converted to the doctor of pharmacy as the entry-level degree for the profession.

This change in the educational requirements for engineers will become common practice only if it is adopted by the accrediting organization, ABET, which in turn will respond to its constituents. Members of the National Academy of Engineering, engineering school faculty and administrators, and leading employers must develop sufficient consensus that the engineering education system needs to change to keep pace with changes in engineering and the world. Other professions have demonstrated that it can be done. Engineers are supposed to be the can-do people. This is an opportunity to prove it.

Nuclear Waste and the Distant Future

Although most of the radioactive material generated by nuclear energy decays away over short times ranging from minutes to several decades, a small fraction remains radioactive for far longer time periods. Policymakers, responding to public concern about the potential long-term hazards of these materials, have established unique requirements for managing nuclear materials risks that differ greatly from those for chemical hazards. Although it is difficult to argue against any effort to protect public safety, risk management will be most effective when each risk is evaluated in the context of other risks and balanced against the benefits produced by the regulated activity. Applying extremely stringent standards to one type of risk while other risks are regulated at a lower standard does not improve overall public safety. Similarly, foregoing a socially and economically valuable activity in order to limit relatively small future risk is not a sensible tradeoff. Therefore, developing an effective risk policy for nuclear power and radioactive waste requires looking at how the government regulates all hazardous waste and at the relative health and environmental effects of nuclear power as compared with those of other energy sources.

A key regulatory decision for the future of nuclear power is the safety standard to be applied in the licensing of the radioactive waste depository at Yucca Mountain (YM), Nevada. In 1992, Congress passed the Energy Policy Act, directing the Environmental Protection Agency (EPA) to promulgate site-specific standards for the YM nuclear waste repository project. Furthermore, Congress stipulated that these standards be consistent with the findings and recommendations of the 1995 National Research Council report Technical Bases for Yucca Mountain Standards (commonly called the “TYMS report”).

The standard that the EPA subsequently established was generally consistent with the TYMS report but differed significantly with respect to the compliance period. The EPA ruled that during its first 10,000 years, the YM repository must ensure that no individual in the adjoining Amargosa Valley would be exposed to more than 15 millirems (mrem) of radiation per year from use of the groundwater. The EPA chose the 10,000-year compliance period because that is the period already being applied to the Waste Isolation Pilot Plant repository in New Mexico and is the longest compliance period for any hazardous waste. However, the TYMS report concluded that there is “no scientific basis for limiting the time period of the individual risk standard to 10,000 years or any other value” and recommended that assessment be performed out to the time of peak risk to a maximally exposed individual, which may be several hundred thousand years in the future.

Opponents of the YM project challenged the EPA rules in court. On July 9, 2004, the U.S. Court of Appeals issued a ruling that denied all challenges, except one. The successful challenge, brought by the State of Nevada, argued that the EPA was not in compliance with the Energy Policy Act, because it had deviated from recommendations of the TYMS report by limiting the regulatory compliance time to 10,000 years. Thirteen months later, EPA issued a revised “two-tiered” standard under which maximum exposure beyond 10,000 years will be limited to 350 mrem per year, which is roughly equivalent to the average background exposure for individuals across the globe. No detectable health damage has been associated with this level of exposure.

It should also be noted that in making its recommendation that standards be set for the period beyond 10,000 years, the TYMS report included two important caveats: that the EPA should consider establishing “consistent policies for managing risks from disposal of both long-lived hazardous non-radioactive materials and radioactive materials” and that the ethical principle of intergenerational equity should be considered in the formulation of safety standards.

Here we consider three central questions for the YM standard: What risk does YM pose beyond 10,000 years, how are other long-term risks regulated, and how might such long-term standards affect nearer-term human welfare? We find that the proposed EPA standard for YM does satisfy appropriate long-term safety criteria, and indeed the standard is much more stringent than EPA standards governing other sources of long-term risk. In addition, a risk/benefit analysis of nuclear power indicates that it is a safer choice than the fossil options that now dominate electricity generation.

Nuclear fission extracts large quantities of energy from extremely small masses of fuel. The small quantity of fuel used, as compared to fossil energy alternatives, makes it possible to manage nuclear wastes by isolation as a concentrated, contained solid rather than by release and dilution into the environment as is done with fossil fuels. The vast majority of radioactivity created in nuclear fuels disappears rapidly after reactors shut down, as short-lived radioactive elements (so-called fission products) decay to become stable elements over periods of hours to days. A modest fraction of radioactivity comes from fission products that remain radioactive for decades, and a very small fraction from radioactive isotopes—primarily heavy elements such as plutonium created by neutron capture, as well as some of their radioactive decay products—that persist for tens to hundreds of millennia.

Nuclear reactor safety focuses on providing multiple containment barriers and reliable cooling to allow for the safe radioactive decay of short-lived fission products after reactor shutdown. Interim storage of spent fuel in surface facilities can then permit further substantial reductions in heat generation from the smaller quantities of fission products that take multiple decades to decay. The remaining inventory of very long-lived isotopes could be further reduced—by factors of 40 to 100—by reprocessing spent fuel and recycling it in advanced “burner” reactors. With or without reprocessing, there remains a quantity of residual long-lived radioactive materials that must be stored and isolated from the environment.

A general scientific and technical consensus exists that deep geologic disposal can provide predictable and effective long-term isolation of nuclear wastes. Environments deep underground change extremely slowly with time, particularly when compared to the surface environment, and therefore their past behavior can be studied and extrapolated into the long-term future. The largest challenge for safety assessment for deep geologic isolation comes from predicting how the perturbation created by emplacing nuclear waste will change long-term chemical and hydrogeologic conditions—in particular the effect on surrounding rock of the heat generated by the waste over multiple centuries.

In the United States, a protracted and divisive political and technical process resulted in the selection, in 2002, of a national repository site at YM, sitting astride a federally owned area that overlaps the Nevada Test Site, Nellis Air Force Base, and Bureau of Land Management lands in southern Nevada. After a delay to revise its original license application, the U.S. Department of Energy (DOE) has recently announced that it will submit a construction license application for YM to the U.S. Nuclear Regulatory Commission (NRC) in 2008. Under current law, the NRC will have three years to evaluate this application, with a potential one-year extension, to determine whether the DOE repository design meets a safety standard established by the EPA.

Detailed technical review of YM performance will occur during licensing. In the interim, the 1999 Final Environmental Impact Statement (FEIS) provides a preliminary indication of potential long-term performance, assuming the disposal of 63,000 metric tons (MT) of spent fuel and 7,000 MT of defense waste. The peak risk occurs in about 60,000 years, when the waste canisters may become degraded, potentially allowing the radioactive material to be transported down to groundwater and subsequently to the Amargosa Valley. If one considers a worst-case scenario in which future Amargosa Valley residents possess technology for irrigated agriculture but do not employ any basic public health measures to test water quality for natural and human-generated contaminants and do not use the simple mitigative actions that our current public health practice employs, the maximum doses predicted by the FEIS would be of the same order as average natural background radiation, which generates no statistically detectable health effects. For its license application, DOE will implement further changes in repository design and modeling, which may result in somewhat lower long-term dose predictions than those reported in the FEIS.

Other risks

A large number of other important human activities also generate wastes that present persistent or permanent hazards. These include mining wastes; coal ash; deep-well injected hazardous liquid waste; and solid wastes such as lead, mercury, cadmium, zinc, beryllium, and chromium that are managed at Resource Conservation and Recovery Act (RCRA) and Superfund sites.

For these wastes, the longest compliance time required by the EPA is 10,000 years for deep-well injection of liquid hazardous wastes. For all forms of shallow land disposal, compliance times are substantially shorter. For RCRA solid waste management facilities, a typical permit is for 30 years, and the operator bears responsibility over a time horizon of less than a century. RCRA sites cannot reside in a 100-year flood plain unless they are designed to resist washout by a 100-year flood. Although coal and mining wastes pose potential health risks, federal legislation excludes them from the category of hazardous waste.

The short regulatory compliance times for much hazardous waste do not mean that these materials do not pose any potential long-term danger. David Okrent and Leiming Xing at the University of California Los Angeles have analyzed what would happen over the long term at an approved RCRA site for the disposal of arsenic, chromium, nickel, cadmium, and beryllium. Assuming a loss of societal memory and the absence of monitoring or mitigation, individuals in a farming community at the site 1,000 years in the future would face an estimated 30% lifetime probability of cancer due to this exposure.

Indeed, instead of questioning the adequacy of nuclear waste safety standards, policymakers should be focusing on other risks.

The reason that most chemical risks are not subject to long-term regulation is not that policymakers are unaware of the danger. Rather, society has made a deliberate decision to place more weight on the analysis of near-term risks— as well as the benefits derived from these sources of risk— than on very long-term risks. It is also worth noting that some of these risks are not all that long-term. For example, current scientific understanding suggests that the peak risks from 20th- and 21st-century fossil fuel CO2 emissions may occur within several centuries, resulting in major ecosystem alteration, including substantial changes in ocean chemistry and a sea-level rise of up to seven meters.

Benefits

The threat of global warming associated with carbon-based energy sources highlights one of the primary advantages of nuclear power: very low greenhouse gas emissions. The health of the global and U.S. economy depends on energy. The 63,000 MT of commercial spent fuel that would be stored at YM will result from the generation of 2,200 gigawatt-years of electricity, worth $1 trillion, which in turn will support many additional trillions of dollars of economic activity. Although the Nuclear Waste Policy Act currently caps the capacity of YM at 63,000 MT of spent fuel, the actual performance-based capacity of YM is 2.5 to 5 times as large. And if the spent fuel is reprocessed and recycled in burner reactors, the performance-based limit would increase dramatically. YM would have the capacity to store all the waste from the nuclear electricity generation needed to power the country for centuries.
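As a rough check on the scale of these figures, the short calculation below converts 2,200 gigawatt-years into kilowatt-hours and applies an assumed average electricity price of about 5 cents per kilowatt-hour; the price is an illustrative assumption on our part, not a number given in the analysis above.

```python
# Sketch of the electricity-value arithmetic; the 5 cents/kWh price is assumed.
gw_years = 2200                      # electricity attributed to 63,000 MT of spent fuel
kwh = gw_years * 1e6 * 8760          # 1 GW-yr = 1e6 kW for 8,760 hours
price_per_kwh = 0.05                 # assumed average price of electricity, $/kWh
print(f"{kwh:.2e} kWh, worth roughly ${kwh * price_per_kwh / 1e12:.1f} trillion")
# -> 1.93e+13 kWh, worth roughly $1.0 trillion, close to the figure cited above
```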

Near-term economic effects also deserve consideration. The government has already spent $8 billion on site selection and characterization for YM. It would cost at least that much to start looking for a new site. Because DOE has defaulted on its legal obligation to begin accepting spent fuel in 1998, storing commercial waste onsite at nuclear power plants now costs taxpayers some $360 million per year. Additional costs for protracted management of military high-level wastes at the Hanford, Savannah River, and Idaho sites will also be borne by taxpayers. Government could certainly find more productive uses for this money.

If nuclear power is not used to generate this baseload electricity, the obvious alternative is coal, which currently generates 54% of U.S. electricity. Indeed, U.S. utilities now have plans to install an additional 62 gigawatts of coal-fired generation. Using coal to produce the same amount of electricity that would be associated with 63,000 MT of spent fuel would require mining and burning 5 billion tons of coal: a full six years of current U.S. coal consumption. This would create 700 million MT of ash and flue-gas desulfurization sludge requiring shallow land disposal, discharge over 650 MT of hazardous mercury, and result in approximately 300 U.S. coal-worker fatalities. And on top of this, coal burning would produce an enormous quantity of carbon dioxide that would contribute to climate change.

In general, life-cycle assessments like those performed by the European ExternE project show that nuclear energy creates far smaller worker safety, public health, and environmental effects than does any form of fossil fuel use.

A reasonable standard

Forced for the first time to create a standard that extends beyond 10,000 years, the EPA has made a sensible choice. The TYMS report recommended that the EPA adopt a risk-based standard for YM falling inside the range of annualized risk that the EPA uses in regulating other materials. The TYMS report tabulated annual risk levels permitted by current EPA regulations for other materials, which range from one death in a population of a million to four deaths in a population of 10,000. For radiation doses, this would correspond to a range from 2 mrem per year to 860 mrem per year. The higher level is the current standard for radon in groundwater and indoor air. The TYMS report recommended that the EPA use values from the lower end of this range as a reasonable starting point in setting its standard for YM. The EPA’s draft revised standard sets the limit at 15 mrem per year for up to 10,000 years and adopts a post-10,000-year standard of 350 mrem per year. For comparison, places such as Denver, Colorado, and Kerala, India, have background levels as high as 1,000 mrem per year, and we know of no cancer clusters in these areas. Thus, a level of 350 mrem per year would clearly meet the standard of avoiding very long-term “irreversible harm or catastrophic consequences,” something that cannot be said for current fossil energy use. If fossil fuels burned today result in global climate change in 50 or 100 years, there will be no way to reverse these effects. If in a few hundred or a few thousand years, future generations decide that the waste buried at YM is too dangerous or that a better way exists to manage it, they can remove it.
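The mapping from the TYMS risk range to the quoted dose range can be reproduced with one assumption: a linear risk coefficient of about 0.0005 fatal cancers per rem, a value consistent with standard radiation-protection practice but assumed here rather than taken from the report. A minimal sketch:

```python
# Convert annual risk limits into annual dose limits, assuming a linear
# risk coefficient of 5e-4 fatal cancers per rem (our assumption).
risk_per_rem = 5e-4

for annual_risk in (1e-6, 4e-4):     # one in a million to four in ten thousand per year
    dose_mrem = annual_risk / risk_per_rem * 1000   # rem converted to mrem
    print(f"annual risk {annual_risk:g} -> about {dose_mrem:.0f} mrem per year")
# -> about 2 mrem and about 800 mrem per year, close to the 2-to-860-mrem range cited
```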

The safety standards recommended by the EPA for YM reflect a thoughtful assessment of risks and benefits. Indeed, instead of questioning the adequacy of these standards, policymakers should be focusing on other risks. The United States would be a safer place if this or any very long-term standard were applied uniformly to the management of all types of long-lived hazardous waste, to the use of fossil fuels, and to other human activities as well.

Natural Gas: The Next Energy Crisis?

The day after President Bush’s State of the Union address on January 31, 2006, the headline in many U.S. newspapers and in the electronic media was: “America Addicted to Oil.” Indeed, a major newsworthy section of the speech was the president’s proposals to break that addiction, especially from suppliers in unstable countries that can affect U.S. national security. He set a goal of replacing more than 75% of oil imports from the Middle East by 2025, largely through technological means.

Whatever one thinks of the president’s policy proposals, he is correct in attributing security implications to the country’s oil addiction. At a minimum, its dependence requires the United States to trim its diplomatic sails when dealing with the major oil-producing countries, costs U.S. taxpayers a substantial premium to ensure access to oil supplies by maintaining a significant military capability in the Middle East, and gives major oil-producing states vast revenues that allow them to support foreign and domestic policies that complicate the security of the United States and its allies. There is also, of course, the undeniable fact that a strong U.S. economy—the backbone of U.S. preeminence in the world—does require, as the president stated, “affordable energy.”

But the president’s contention that the U.S. economy is petroleum-based is not entirely accurate. Although oil makes up approximately 40% of total U.S. energy consumption, coal and natural gas each now supply about 25% of the energy consumed by the United States. So, although oil is a major element in U.S. energy supplies, it is by no means the only significant factor. Disruption of natural gas or coal supplies would pose major problems to the U.S. economy. Moreover, there are increasing signs that in the case of natural gas, the country is headed down a road similar to the one it now faces with oil, with security implications that echo oil’s as well. In short, like addicts the world over who try to free themselves from one addiction only to become hooked on another, Americans may soon find that imported oil is not the only energy-source problem about which they have to worry.

Until recently, the United States was in pretty good shape when it came to natural gas. Prices were low and supplies sufficient. In 2000, for example, North America consumed nearly one-third of the world’s annual output of natural gas. Unlike oil, for which the United States, Canada, and Mexico together produced only 60% of the supplies they consumed, the three countries produced nearly 100% of the natural gas they consumed. With the three countries bound together by free trade agreements, the continental market for natural gas more than doubled through the 1990s.

If energy experts inside and outside the government are correct, the proportion of total energy consumption accounted for by natural gas is likely to grow substantially during the next decade and a half. If current trend lines and government policies are sustained, about 90% of the projected increase in electricity generation will be fueled by natural gas plants. Between 2000 and 2004, U.S. electricity-generating capacity grew by approximately one-fifth; virtually all of that growth was gas-fired. Analysts predict that by 2020, more than one-third of the country’s electricity will be generated by burning natural gas. The reasons are well understood: Power plants that burn natural gas cost less and are far easier to build than are nuclear power plants, and they create fewer environmental problems than do coal or nuclear plants. With the expanding use of natural gas for homes and its use as the primary feedstock in the manufacturing process for a wide variety of products, demand for natural gas is expected to rise anywhere from 40% to 50% between 2000 and 2020.

The problem is that the available supply of natural gas is not keeping pace with this growing demand. In North America, production from existing wells is declining, and new wells show a more rapid rate of decline than in the past. As natural gas producers themselves have remarked, they have to run harder to stay even, which means drilling more numerous but less productive wells.

Compounding the supply problem are two self-imposed impediments. The first is the nearly complete ban on exploring and developing prospective gas fields off the east and west coasts of the United States, off the Gulf Coast of Florida, and in large swaths of Alaska. In addition, federal government restrictions on exploration and drilling in the Rocky Mountain region, similar restrictions on new drilling in Canada, and the government-induced inability of Pemex (Mexico’s state-owned oil company) to afford expanding gas exploration at home have created a situation in which the country is fighting an energy crunch with one hand tied behind its back.

The second major problem lies in the area of delivery infrastructure. Demand requires not only a ready supply of natural gas but also a capacity to deliver that supply to consumers. The two modes for delivering natural gas are by pipeline and ocean-going ships designed to hold and transport vast amounts of liquefied (refrigerated and compressed) natural gas (LNG). In the first instance, state and local governments have made it increasingly difficult to build new pipeline networks. They have also complicated the transportation of LNG. There is plenty of natural gas outside of North America that can be transported to the United States at reasonable prices if there are places to unload and regasify the LNG for transport along an existing pipeline network. However, the United States has only five such sites, four of them built in the 1970s. There are plans to build more, but environmental and post-9/11 safety concerns have caused local communities to push back.

With the deregulation of the natural gas market in the late 1980s and the creation of a North American free trade region, supplies of natural gas more than kept pace with demand in the 1990s. The result was a decade of gas priced at $1.61 to $2.32 per million British thermal units (Btu). But as ready supplies of natural gas peaked, demand continued to increase, and as cold weather pushed demand even higher, gas prices rose to nearly $10 per million Btu during the 2000–2001 winter. The new average price remains well above that of the salad years of the 1990s, ranging from approximately $4 to $6 per million Btu in recent years, with a high of $14.25 per million Btu in the fall of 2005, in the aftermath of Hurricanes Katrina and Rita.

The implications of the mismatch between stagnant natural gas supplies and growing demand are obvious. If gas prices remain high and susceptible to large spikes in prices, the cost of producing power will rise, and manufacturing companies that rely on natural gas will increasingly think about moving out of the country and closer to their supplies. As former Federal Reserve Chairman Alan Greenspan remarked last year, “Until recently, long-term expectations of oil and gas prices appeared benign. When choosing capital projects, businesses could mostly look through short-term fluctuations in prices to moderate prices over the longer haul. The recent shift in expectations, however, has been substantial enough and persistent enough to influence business investment decisions, especially for facilities that require large quantities of natural gas.” Although power companies can pass along the rising costs to consumers, companies that use natural gas to produce products such as chemicals, fertilizer, and a host of other items will be driven to close plants in the United States and move overseas in an effort to cut costs and stay competitive in the global market.

Another result of the U.S. supply problem is that the gas needed to meet U.S. demand will, by necessity, increasingly come from overseas sources. As with oil, that fact has implications that go beyond the economic health of the United States. Almost two-thirds of the world’s natural gas reserves can be found in five countries: Russia, Iran, Saudi Arabia, Qatar, and the United Arab Emirates. Russia and Iran have almost half of the world’s reserves. The other major sources of reserves are found in West Africa and Latin America. Needless to say, these are not countries or areas marked with strong democratic credentials or close ties to the United States. Higher demand for gas at today’s higher prices will provide vast new revenues for those states and help sustain some very problematic governments. And as the global competition for energy resources heats up, it makes energy importers, such as Japan and most of Europe, more hesitant to challenge those states and their policies. If current trends continue, Russia will be providing more than 50% of Europe’s natural gas supplies by 2020. Even today, Germany imports 40% of its gas from Russia; Italy, 30%; and France, 25%. Central and Eastern Europe are in some cases even more dependent. Slovakia gets all of its gas from Russia; Bulgaria, 94%; Lithuania, 84%; Hungary, 80%; and Austria, 74%. At a minimum, this situation makes it more difficult for the United States to build an international consensus for taking a tougher line toward countries such as Iran and Russia.

The most immediate obstacle to taking a tougher line with Russia, however, is the growing power of Gazprom, the Russian energy company in which the Russian government has a controlling interest. The operator of the world’s largest network of gas pipelines and the world’s largest producer of natural gas, Gazprom is assiduously working to expand its preeminence into a position as close to a monopoly as possible.

Russia and Iran have almost half of the world’s natural gas reserves… Higher demand for gas at today’s higher prices will provide vast new revenues for those states and help sustain very problematic governments.

Gazprom’s strategy for accomplishing this goal is straightforward. To obtain the resources to develop its energy reserve holdings in Russia and increase production, it has allowed foreign entities to buy its shares and is inviting non-Russian companies to help develop untapped or underdeveloped fields. However, it is doing so in ways that ensure that Moscow still has the controlling interest. Thanks to the revenues produced by its pipeline operations and the quasi-liberalization of its rules on stock holdings, Gazprom’s market capitalization stands at approximately $200 billion. Flush with cash, Gazprom is now in the business of trying to buy pipeline networks outside of Russia. In fact, as the European Union pushes its members and prospective members to divest themselves of state-controlled energy companies and to liberalize more generally, Gazprom is moving to buy up pipeline assets or gain a substantial foothold in European energy companies. In short, Brussels’s desire to create a more open market in the energy sector is being used by Moscow as an opportunity to extend its control over the distribution system for natural gas.

Does that matter? Moscow clearly thinks it does. Well before Vladimir Putin appeared on the stage as a possible Russian president, he was writing that the key to Russia “regaining its former might” was its role as a provider of natural resources to the rest of the developed and developing world. As president, Putin halted plans by Kremlin liberals to break up Gazprom’s monopoly inside Russia and instead appointed cronies as the company’s chief operating officers.

Expanding domestic supplies, creating a global market, and countering Russian efforts to create a dominant market position are critical to reducing our dependence on foreign gas.

If nothing else, Gazprom’s profits provide the Kremlin with an enormous slush fund that is outside the official Russian state budget. This is made even easier by Gazprom’s habit of partnering with shadow companies whose underlying ownership remains opaque but that are suspected of having ties to the Russian mafia and Russian intelligence. Such arrangements also make it possible to feed funds to Russian and non-Russian politicians and government officials alike.

As the past winter’s events have made clear, Putin’s use of Gazprom is not always so subtle. On New Year’s Day, Gazprom cut off Ukraine’s gas supplies. Not long after that, the major gas pipeline feeding Georgia mysteriously blew up. In both cases, two young democracies, both looking to the West, had frustrated Gazprom’s efforts to get control of their pipeline assets. (Ukraine’s pipeline is the main route for transporting gas to Europe, so the cutoff to Ukraine affected Europe’s supplies as well.) Russian officials argued that Ukraine was paying far less than the global market price for the natural gas it took from the pipeline. Moscow, of course, had little to say about the fact that Gazprom was providing Belarus and its pro-Russian leader, Alexander Lukashenko, gas at even lower prices. In the end, facing cutoffs and/or massive price hikes only weeks before elections, Ukraine’s leaders cut a deal that keeps prices for the moment relatively low in exchange for a tangled web of corporate arrangements that gives Moscow a stake in Ukraine’s pipeline system and allows billions of dollars to be siphoned off to a mysterious Swiss company (RosUkrEnergo). To keep the gun to Kiev’s head, the deal also gives Moscow and Gazprom the right to trigger another gas crisis by renegotiating the price Ukraine pays for natural gas after only a few months. Moscow might not have much of a conventional military to threaten its neighbors anymore, but Putin clearly believes he has found another tool to wield influence beyond Russia’s borders.

Reducing Gazprom’s market power

Many in Europe, reacting to rising global demand and the uncertainty of supply exemplified by Gazprom’s hardball approach to Ukraine, appear willing to grant Gazprom concessionary rights on European energy infrastructure and to sign long-term contracts with the company. Although from one perspective this might appear to satisfy Europe’s energy security needs, in the process it further solidifies Russia’s dominant hand in the field. Moreover, it ignores the fact that when Moscow has had a dominant hand to play with its neighbors in the past with oil or gas, it has not been hesitant about playing it. Would it try this with Europe? No one knows. However, since the flare-up over Ukraine, Moscow has done little to reassure European capitals, threatening to take its gas supplies elsewhere—to China—if the Europeans continue to balk about Gazprom acquisitions in Europe and continue to insist that Russia liberalize its own internal energy market. At a minimum, we do know that with Gazprom having this advantage, Moscow will not be any easier to deal with. Nor will it make our European allies eager to challenge Russian misbehavior on other fronts.

There are steps that Europe can take to lessen Gazprom’s market power and, in turn, Moscow’s leverage. First, Russia’s goal of acceding to the World Trade Organization should be explicitly tied to Moscow’s ratifying the 1994 Global Energy Charter for Sustainable Development. The treaty, among other things, would mandate a Russian commitment to promote “an open and competitive” energy market and, in particular, would require Gazprom to open its network of pipelines to independent gas producers. Second, Gazprom’s own oil and gas fields are in decline; most of the gas Gazprom provides to Russian citizens and its European customers comes from non-Russian sources in central Asia. To develop its untapped reserves in Russia, Gazprom will need to draw on the technological and financial resources of the West. The quid pro quo for providing those resources should not simply be an equity share in the revenues generated down the line but a G-8 negotiated and enforced commitment on the part of Moscow to create a truly transparent and market-based energy sector. Finally, European countries should rethink their tendency to sign long-term deals with Gazprom. Instead, they should focus on two initiatives: first, creating new pipeline infrastructure to move central Asian gas to Europe without Russian involvement; and second, adding new LNG facilities to support imports from West Africa and the Middle East. As we have seen in other cases such as the Baku-Tbilisi-Ceyhan oil pipeline, once Moscow is confronted with the fact that it is no longer in a dominant market position, Western companies will find it less difficult to negotiate competitive contracts with Gazprom and the other Russian energy giants.

Increasing U.S. supplies

For the time being, the U.S. government should make it a priority to support a tougher, smarter line by its European partners to counter Russia’s attempt to build a monopoly on gas supply and distribution. As for its own energy security, the solution in the short term is straightforward: increase supplies of natural gas and expand the infrastructure to deliver it to consumers. In both cases, however, politics have prevented the United States from moving forward.

The United States has plenty of natural gas reserves. The government’s Energy Information Administration estimates (conservatively) that there are roughly 1,300 trillion cubic feet of recoverable natural gas resources in the United States alone. That is sufficient to take care of U.S. demand for 50 to 75 years, depending on the growth in demand. But by severely restricting or simply banning drilling access to gas fields in the Rockies, the Arctic, the eastern Gulf, and the outer continental shelf in both the Atlantic and Pacific Oceans, Washington has artificially created a supply shortage.
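A back-of-the-envelope check of the 50-to-75-year claim is straightforward if one assumes annual U.S. consumption of roughly 22 trillion cubic feet, a round mid-2000s figure of our own rather than one given above:

```python
# Years of supply at an assumed flat rate of consumption.
recoverable_tcf = 1300    # estimated recoverable U.S. resources, trillion cubic feet
annual_use_tcf = 22       # assumed annual U.S. consumption, trillion cubic feet
print(f"about {recoverable_tcf / annual_use_tcf:.0f} years at flat consumption")
# -> about 59 years; growing demand shortens the horizon and lower use lengthens it,
#    which roughly brackets the 50-to-75-year range in the text.
```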

Most of the restrictions or bans are tied to environmental concerns. However, in “green” Canada and Norway, new technologies developed for land and offshore drilling have shown that natural gas exploration and extraction need not cause significant environmental problems. If nothing else, the federal government should begin to gradually lift restrictions on new exploration, test the environmental impact, and if negligible, move forward with further development of untapped gas reserves.

Over the long term, however, energy security with respect to natural gas for both the United States and its allies will be tied to the rise of a global market for natural gas. With abundant supplies worldwide, a global competitive market would provide a diversification of supplies that should be the cornerstone of any energy security policy. As then–First Lord of the Admiralty Winston Churchill remarked on the risks involved in his decision to shift the British fleet’s principal fuel from coal to oil: “Safety and certainty in oil lie in variety and variety alone.”

Although a global market in natural gas appears to be on the horizon, we are not there yet. The gas market today consists mainly of three distinct major regional markets—East Asia, Europe, and North America—whose supply chains are also largely distinct from each other. The key to changing this will be an expansion in worldwide LNG tanker carrying capacity, LNG plants, and receiving terminals. As a strictly economic matter, this looks likely. The costs associated with LNG production and transport have dropped by about 30% during the past few years. Once a very costly way of moving and obtaining gas, LNG is now a moneymaker at prices well below today’s price levels for natural gas. (Some estimates have the United States overtaking Japan as the world’s leader in LNG imports, with some 20% to 25% of U.S. gas consumption being fed by LNG by 2020.) In theory, with a global LNG supply system as the linchpin, the natural gas market could become a commodity market in which prices are kept as low as economically possible in light of actual demand, and in which a diversified set of suppliers reduces the ability of one supplier to manipulate the market over time.

However, for LNG to play its role in helping to develop a global gas market, a significant expansion in LNG port and regasification facilities will be needed, especially in the United States. Although there are proposals to substantially increase the number of regasification terminals in the United States and federal regulators have made it somewhat easier to move these proposals along, there remains strong resistance among local communities and the states to allowing LNG sites. Environmental and post-9/11 security concerns have driven the debate. Although LNG “has a proven safety record with 33,000 carrier voyages covering 60 million miles with no major accidents over a 40-year history,” according to a National Petroleum Council report, the federal government will have to increase public confidence in the safety of natural gas facilities from accidents or terrorist attacks by mandating increased government scrutiny if today’s thin public support for expanding LNG infrastructure is to be overcome.

Finally, although the creation of a global natural gas market would enhance U.S. national security, a just-in-time supply of gas, like the global market for oil, will be vulnerable to spot disruptions either for political reasons (unrest in a supplier country) or environmental causes (such as hurricanes). In the past, long-term contracts from individual suppliers to specific country consumers essentially isolated the problem of dips in supply and price spikes. In a future integrated global market, disruptions or discontinuities in supply or demand will have global effects. As a result, “gas users in Japan, for instance, will have a vested interest in the stability of South American gas reaching the U.S. West Coast . . . and the European Union will be compelled to monitor the political situation in gas-producing regions as remote as the Russian Far-East and Venezuela,” points out a 2005 report from the James A. Baker Institute for Public Policy at Rice University. To mitigate this potential problem, governments will need to store reserves of natural gas, as the United States does oil in its Strategic Petroleum Reserve, in order to provide a margin of safety against severe disruptions in supplies. Taking this and the other steps outlined above should prevent the pending crisis in natural gas supply and improve not only U.S. energy security but its security interests as well.

Import Ethanol, Not Oil

To paraphrase Mark Twain, people talk a lot about reducing U.S. dependence on imported oil, but they don’t do much about it. Rather than continuing to talk the talk, the United States has a unique window of opportunity to walk the walk. Gasoline prices of more than $2 per gallon and our Middle East wars have made the public and Congress acutely aware of the politics of oil and its effects on our national security. With every additional gallon of gasoline and barrel of oil that the nation imports, the situation becomes worse.

Our analysis shows that the United States can have a gasoline substitute at an attractive price with little infrastructure investment and no change to our current fleet of cars and light trucks. By 2016, the United States could produce and import roughly 30 billion gallons of ethanol from corn, sugar cane, and grasses and trees, lowering gasoline use dramatically. Furthermore, the United States could encourage the European Union, Japan, and other rich nations to raise their ethanol production at home and in developing nations by a similar amount. Such increased production, together with improvements in vehicle fuel economy, would result in a notable decrease in petroleum demand, with positive implications for oil prices and Middle Eastern policy. This move would have the added benefit of supporting sustainable Third World development and reducing problems of global warming, because burning ethanol can result in no net carbon dioxide emissions into the atmosphere.

Committing to ethanol

The growing U.S. appetite for petroleum, together with demand growth in China, India, and the rest of the world, has pushed prices to new highs. The United States uses over 20 million barrels of petroleum per day, of which 58% is imported. Prices rose to almost $70 per barrel (bbl) in August 2005. The petroleum futures market is betting that the price will be $67 per bbl in December 2006 and remain well above $60 per bbl through 2012, presumably rising after that. Feeding our oil habit results in oil spills, air and water pollution, large quantities of emissions of greenhouse gases, and increased reliance on politically unstable regions of the world.

Although no one can predict the future with confidence, increasing worldwide petroleum demand will push prices higher over the next few decades. There is little public appetite for high gasoline taxes to decrease consumption or for forcing greater fuel economy on the U.S. light-duty fleet, but there is general recognition that we cannot continue to stick our heads in the sand.

Sensible policy requires that the United States both reduce the amount of energy used per vehicle-mile and substitute some other fuel or fuels for gasoline. The Bush administration plans to accomplish the latter, eventually, with hydrogen-powered vehicles. We are skeptical. The plans envisioned by even optimistic hydrogen proponents would, for decades to come, leave the nation paying ever-higher petroleum prices, continuing to damage the environment, and constraining foreign and defense policies to protect petroleum imports. Putting all our eggs in the hydrogen basket would require large investments and commit us to greater imports, higher prices, and greater dependence on the Persian Gulf until (and if) an attractive technology was developed and widely deployed.

A better alternative is for the nation to increase its use of ethanol as a fuel. In his 2006 State of the Union address, President Bush gave some support to ethanol, although he continued to place heavy emphasis on the promise of hydrogen. The president declared that the government would fund additional research in cutting-edge methods of producing ethanol from corn and cellulosic materials and vowed that his goal was to make ethanol “practical and competitive within six years.”

Unfortunately, Congress traditionally has viewed ethanol as a subsidy to corn growers rather than as a serious way to lower oil dependence. The Energy Policy Act of 2005 requires an increasing volume of renewable transportation fuel to be used each year, starting in 2006 and ultimately rising to 7.5 billion gallons of ethanol in 2012. Although this increase would raise the incomes of the corn producers and millers, it would not even keep up with the increases in the nation’s gasoline demand and so would not reduce crude oil imports. Gasoline use grows at a little more than 1% per year, about 1.4 billion gallons per year. By 2012, the United States would need to be using 13 billion gallons of ethanol merely to keep gasoline use constant. To reduce oil imports, the nation must achieve major increases in fuel economy and ethanol use.
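The 13-billion-gallon figure follows from simple arithmetic under two assumptions of ours: that gasoline demand keeps growing by about 1.4 billion gallons per year through 2012, and that a gallon of ethanol delivers roughly two-thirds the energy of a gallon of gasoline.

```python
# Ethanol needed by 2012 just to offset assumed growth in gasoline demand.
growth_per_year = 1.4     # billion gallons of gasoline added to demand each year (from text)
years = 6                 # roughly 2006 through 2012
energy_ratio = 0.67       # assumed energy content of ethanol relative to gasoline, by volume

gasoline_to_offset = growth_per_year * years
ethanol_needed = gasoline_to_offset / energy_ratio
print(f"about {ethanol_needed:.0f} billion gallons of ethanol")
# -> about 13 billion gallons, in line with the figure in the text
```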

SENSIBLE POLICY REQUIRES THAT THE UNITED STATES BOTH REDUCE THE AMOUNT OF ENERGY USED PER VEHICLE-MILE AND SUBSTITUTE SOME OTHER FUEL OR FUELS FOR GASOLINE.

The path to this goal starts today: The nation should start moving, as rapidly as ethanol supplies become available, to the widespread use of E20, a mixture of 20% ethanol and 80% gasoline. Every car built in the past three decades can use E10 and likely E20 without modification. In 2004, roughly 30 billion gallons of ethanol would have been needed for the entire fuel stock to be E20. Unfortunately, ethanol production and imports are only 13% of that amount today.
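The 30-billion-gallon estimate can be checked with one assumed round number, a 2004 gasoline pool of about 140 billion gallons:

```python
# Ethanol required for a nationwide E20 blend, assuming a 140-billion-gallon fuel pool.
fuel_pool = 140                      # billion gallons of light-duty fuel per year (assumed)
ethanol_for_e20 = fuel_pool * 0.20   # ethanol is 20% of the blended volume
print(f"about {ethanol_for_e20:.0f} billion gallons of ethanol for E20")
print(f"13% of that is about {0.13 * ethanol_for_e20:.1f} billion gallons")
# -> roughly 28 billion gallons, close to the 30 billion cited; 13% of it is about
#    3.6 billion gallons, consistent with mid-2000s U.S. production plus imports.
```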

If the ethanol were available, the nation could substitute perhaps 80 billion gallons of ethanol for gasoline by 2016 by greatly expanding today’s fleet of 4 million “flexible-fueled” vehicles, which can use a mixture containing anywhere from 0 to 85% ethanol. If all new vehicles were flexible-fueled (for a cost of less than $200 per vehicle), the market for ethanol would grow by 8 billion gallons per year.
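The 8-billion-gallon and 80-billion-gallon figures can be reconciled with assumed round numbers of our own: roughly 16 million new light-duty vehicles sold per year, each burning about 600 gallons of fuel annually, all of them flexible-fueled and running on E85.

```python
# Annual growth in ethanol demand if every new vehicle were flexible-fueled and ran E85.
new_vehicles_per_year = 16e6     # assumed U.S. light-duty sales per year
gallons_per_vehicle = 600        # assumed fuel use per vehicle per year
ethanol_share = 0.85             # E85 is 85% ethanol by volume

growth = new_vehicles_per_year * gallons_per_vehicle * ethanol_share / 1e9
print(f"about {growth:.0f} billion gallons per year of new ethanol demand")
print(f"after ten model years: about {growth * 10:.0f} billion gallons per year")
# -> roughly 8 billion gallons per year, cumulating to on the order of 80 billion
#    gallons per year by 2016 if the whole fleet turnover went this way.
```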

The primary barrier to producing and importing 30 to 80 billion gallons of ethanol in 2016 is the reluctance of the public and Congress to commit to an ethanol future. Thirty billion gallons of ethanol is more than the nation’s corn growers can provide. Cellulosic ethanol is an appealing approach to the problem, one that we have previously written about. Even with all of its potential, development has been painfully slow. The construction of the first commercially operating U.S. plant is 3 to 5 years away. Learning from that plant, designing a second generation and learning from that, and then building a commercial fleet of plants with U.S. technology will take a decade.

Looking south

But there is a promising shortcut that permits immediate access to substantial amounts of ethanol. The United States could address its oil use now, while the cellulosic ethanol industry develops.

We recently traveled to Brazil and saw a fast-developing industry producing ethanol as a motor vehicle fuel. The Brazilians flock to this fuel because it is cheaper than gasoline. Current law requires that all gasoline sold in Brazil be E25: 25% ethanol blended with 75% gasoline. Brazilians are lining up to buy newly developed flexible-fueled vehicles that can burn fuels ranging from E20 to E100 (actually, the hydrated ethanol contains 95% ethanol and 5% water). With such a flexible vehicle, a driver can buy whatever fuel is cheapest.

Brazil, together with some Caribbean nations, is exporting some 200 million gallons of ethanol to the United States annually. But the United States doesn’t make it easy. Brazilian ethanol pays a 2.5% duty and doesn’t receive the 51-cent-per-gallon excise tax rebate that U.S. producers receive. The Caribbean nations are subject to a quota. Removing these trade barriers would make imported ethanol more attractive. Such a policy would not penalize U.S. farmers or producers, because total ethanol needs can accommodate all domestic production and imports. Still, the Bush administration remains opposed to eliminating or reducing the duty.

Even so, Brazil is expanding its domestic and export markets for ethanol. Currently Brazil has 370 sugar mills and distilleries, which are forecast to produce more than 4 billion gallons of ethanol this year. An additional 40 mills and distilleries are under construction, with the goal of essentially ending gasoline imports and exporting perhaps 15 billion gallons per year within a decade. According to some estimates, efficient Brazilian producers now make ethanol at a cost of roughly 72 cents per gallon. Our examination of sugar cane harvesting and milling operations convinced us that Brazil could lower production costs substantially below that level.

In addition, the Brazilians are thinking seriously about even greater ethanol production from sugar cane and agricultural wastes. One university study is examining how Brazil could replace 10% of the world’s gasoline with ethanol (25 to 30 billion gallons) without clearing more rainforests and while doing less harm to the environment than current agriculture does. Brazil is also making notable progress in producing ethanol from bagasse, the fibrous residue left after the sugar is extracted from sugar cane. At least one pilot plant is making bagasse-derived ethanol, and there are plans for a full-scale plant.

The time is right for the United States to adopt policies aimed at expanding ethanol production and use. U.S. corn growers claim that they could produce as much as 15 billion gallons within a decade. Brazil seems ready and able to export another 15 billion gallons at $1 per gallon. At the same time, we should pursue technologies to produce ethanol from biomass at ever-lower costs. Some proponents claim that cellulosic ethanol could ultimately replace all gasoline use in the United States.

The technology for making ethanol from cellulose (grasses and trees) now being developed in Brazil, the United States, and Canada will enable many nations to grow energy crops to produce ethanol. This could be a significant cash crop for developing nations. Growing energy crops around the world has the potential to displace perhaps half of the world’s gasoline demand. The development of cellulosic ethanol would be good for U.S. agriculture, by expanding available cash crops; for agricultural soils, by reducing fertilizer and pesticide use and increasing soil fertility; and for the ecology more generally, by providing habitat. The same would be true for farms in many nations, both rich and developing.

The key point is that U.S. actions to expand both domestic corn production and the importation of ethanol from Brazil would serve to develop the necessary infrastructure and incentives to bring cellulosic ethanol to reality more rapidly. Thus, we see no downside risk to eliminating ethanol tariffs and promoting imports as the United States expands its own ethanol production. This strategy would complement policies to increase vehicle fuel economy. We see no losers—with the exception of OPEC—from this policy, and tremendous gain for the United States.

Let the Internet Be the Internet

Now that the Internet has become a keystone of global communications and commerce, many individuals and institutions are racing to jump in front of the parade and take over its governance. In the tradition of all those short-sighted visionaries who would kill the goose that lays the golden eggs, they seem unable to understand that one reason for the Internet’s success is its unique governance structure. Built on the run and still evolving, the Internet governance system is a hardy hybrid of technical task forces, Web-site operators, professional societies, information technology companies, and individual users that has somehow helped to guide the growth of an enormous, creative, flexible, and immensely popular communications system. What the Internet does not need is a government-directed, top-down bureaucracy that is likely to stifle its creativity.

The call to “improve” Internet governance was heard often at the United Nations (UN)–organized November 2005 World Summit on the Information Society (WSIS) in Tunis, which was a follow-up to the December 2003 summit in Geneva. Although many different topics were on the agenda in Geneva and Tunis, by far the largest amount of controversy (and press coverage) was generated by debates over Internet governance. The summit participants had very different ideas about how the Internet should be managed and who should influence its development. Many governments were uncomfortable with the status quo, in which the private companies actually building and running the Internet have the lead role. One hot-button issue was the management of domain names, which today is overseen by the Internet Corporation for Assigned Names and Numbers (ICANN), an internationally organized nonprofit corporation. A number of countries feel that the U.S. government exerts too much control over ICANN through a memorandum of understanding between ICANN and the U.S. Department of Commerce. As a result, a number of proposals were put forward to give governments and intergovernmental organizations, such as the UN, more control over the domain-name system.

But the debate over ICANN was just part of a much bigger debate over who controls the Internet and the content that flows over it. At the Geneva Summit, a UN Working Group on Internet Governance (WGIG) was created to examine the full range of issues related to management of the Internet, which it defined as “the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.”

This definition would include the standards process at organizations such as the Internet Engineering Task Force (IETF), the International Telecommunication Union (ITU), and the World Wide Web Consortium (W3C), as well as dozens of other groups; the work of ICANN and the regional Internet registries that allocate Internet protocol addresses; the spectrum-allocation decisions regarding WiFi and WiMax wireless Internet technologies; trade rules regarding e-commerce set by the World Trade Organization; procedures of international groups of law enforcement agencies for fighting cybercrime; agreements among Internet service providers (ISPs) regarding how they share Internet traffic; and efforts by multilateral organizations such as the World Bank to support the development of the Internet in less developed countries. (A very useful summary of the organizations shaping the development and use of the Internet has been created by the International Chamber of Commerce at http://iccwbo.org/home/e_business/Internet%20governance.asp.)

The main reason the Internet has grown so rapidly and supports so many powerful applications is that it was designed to provide individual users with as many choices and as much flexibility as possible, while preserving the end-to-end nature of the network. And the amount of choice and flexibility continues to increase. Because there are competing groups with competing solutions to users’ problems, users, vendors, and providers get to determine how the Internet evolves. The genius of the Internet is that open standards and open processes enable anyone with a good idea to develop, propose, and promote new standards and applications.

THE FARMER IN CENTRAL AFRICA, THE TEACHER IN THE ANDES, OR THE SMALL MERCHANT IN CENTRAL ASIA DOES NOT CARE ABOUT WHERE ICANN IS INCORPORATED OR HOW IT IS STRUCTURED.

The governance of the Internet has been fundamentally different from that of older telecommunications infrastructures. Until 20 to 30 years ago, governance of the international telephone system was quite simple and straightforward. Governments were in charge. In most countries, they either ran or owned the monopoly national telephone company. Telephone users were called “subscribers,” because like magazine subscribers they subscribed to the service offered at the price offered and did not have much opportunity to customize their services. When governments needed to cooperate or coordinate on issues related to international telephone links, they worked through the ITU.

The model for Internet governance is completely different. At each level, there are many actors, often competing with each other. As a result, users—not governments and phone companies—have the most influence. Hundreds of millions of Internet users around the world make individual decisions every day, about which ISP to use, which browser to use, which operating systems to use, and which Internet applications and Web pages to use. Those individual decisions determine which of the offerings provided by thousands of ISPs, software companies, and hardware manufacturers succeed in the marketplace and thus determine how the Internet develops. Users’ demands drive innovation and competition. Governments already have a powerful influence on the market because they are large, important customers and because they define the regulatory environment in which companies operate. Because the Internet is truly global, there is a need for coordination on a range of issues, including Internet standards, the management of domain names, cybercrime, and spectrum allocation. But these different tasks are not and cannot be handled by a single organization, because so many different players are involved. Another difference is that unlike the telephony model, where a large number of telephony-related topics (such as telephone technical standards, the assignment of telephone country codes, and the allocation of cellular frequencies) are handled by the ITU, an intergovernmental organization, most international Internet issues are dealt with by nongovernmental bodies, which in some cases are competing with each other.

In many ways, the debate over ICANN and the role of governments in the allocation of domain names can be seen as a debate between these two different models of governance: the top-down telephony model and the bottom-up Internet model. In the old telephony model, the ITU, and particularly the government members of the ITU, determined the country codes for international phone calls, set the accounting rates that fixed the cost of international phone calls, and oversaw the setting of standards for telephone equipment. National governments set telecommunications policies, which had a huge impact on the local market for telephone services and on who could provide international phone service.

Today, Internet governance covers a wider range of issues, and for most of these issues the private sector, not governments, has the lead role. In contrast to telephony standards, which are set by the ITU, Internet standards are set by the Internet Engineering Task Force, the World Wide Web Consortium, and dozens of other private-sector–led organizations, as well as more informal consortia of information technology companies. However, some members of the ITU, as part of its Next Generation Networks initiative, are suggesting that the ITU needs to develop new standards to replace those developed at the IETF and elsewhere.

Likewise, the ITU is not content to have the price of international Internet connections determined by the market. For more than seven years, an ITU working group has been exploring ways in which the old accounting rates model for telephony might be adapted and applied to the Internet. Ironically, the ITU pricing mechanism has already had an effect on the Internet. Exorbitant international phone rates, which can be more than a dollar per minute in some countries, have given a big boost to the use of voice over Internet protocol (VOIP) services, which allow computer users to make phone calls without paying per-minute fees. During the time that the ITU has been discussing ways to regulate the cost of international Internet connections, the cost of international broadband links has plummeted by 90 to 95% in most markets. This apparently was not good enough for many WSIS participants, who insisted that regulation was needed to bring down user costs.

ALL WHO CARE ABOUT THE INTERNET NEED TO WORK TOGETHER TO FIND WAYS TO STRENGTHEN THE BOTTOM-UP MODEL THAT HAS SERVED THE INTERNET AND THE INTERNET COMMUNITY SO WELL.

WSIS participants also offered a number of proposals to have governments and the ITU take a larger role in regulating the applications that run over the Internet. For instance, several governments called for regulatory action to fight spam and digital piracy, protect online privacy, enhance consumer protection, and improve cybersecurity. Of course, Internet users and managers are addressing all these issues in a variety of ways, and a robust market exists for security tools and services. As a result, users have many options from which to select what works best for them. In contrast, some governments are talking about the need for comprehensive, one-size-fits-all solutions to spam, digital rights management, or cybercrime. Imposing this kind of rigid top-down solution on the Internet would have the undesirable side effect of “freezing in” current technological fixes and hindering the development of more powerful new tools and applications. Even more disturbing, in many cases the cure would be worse than the disease, because solutions proposed to limit spam or fraudulent content could also be used by governments to censor citizens’ access to politically sensitive information.

The debate over Internet governance is really about the future direction of the Internet. One outcome of the Tunis Summit was the creation of the Internet Governance Forum (IGF), a multistakeholder discussion group that will examine how decisions about the future of the Internet are made. Those advocating a greater role for governments in managing the Internet will continue to press their case at the IGF. The debate over ICANN will provide the first indication of where the discussion is heading. If a large majority of governments decide that ICANN should be replaced by an intergovernmental body or that government should have more say in ICANN decisionmaking, we can expect to hear more calls for greater government regulation in a wide range of areas, from Internet pricing to content control to Internet standards.

Fortunately, the Tunis Summit also exposed many government leaders to a broader understanding of how the Internet is governed and how it can contribute to the well-being of people throughout the world. They learned that the ICANN squabble is a relatively minor concern among the challenges that confront the Internet. The farmer in central Africa, the teacher in the Andes, or the small merchant in Central Asia does not care about where ICANN is incorporated or how it is structured. But they care about the cost of access and whether they can get technical advice on how to connect to and use the Internet. They care about whether the Internet is secure and reliable. They care about whether there are useful Internet content and services in their native language. And in many countries, they care about whether they’ll be thrown in jail for something they write in a chat room.

As the national governments, companies, nongovernmental organizations, and others involved in WSIS work to achieve the goals agreed to in Tunis, they should use the organizations that are already shaping the way the Internet is run. The existing Internet governance structure has repeatedly demonstrated its capacity to solve problems as they arise. Rather than discarding what has proven successful, world leaders should be trying to understand how it has succeeded, explaining this process to stakeholders and the public so that they can participate in it more effectively, and using the lessons of the past in approaching new problems. For instance, the IETF has set many of the fundamental standards of the Internet, and it is in the best position to build on those standards to continue improving Internet performance. As more people want to participate in standard setting, the IETF needs to explain to the new arrivals how it operates. To help in this effort, the Internet Society has started a newsletter to help make the IETF process more accessible and to invite input from an even larger community. The IETF is open to all. It is not even necessary to come to the three meetings that the IETF holds each year, because much of the work is done online.

Other Internet-related groups are also eager to find ways to ensure that their work and its implications are understood and supported by the broadest possible community. They should follow the IETF example by making standards and publications available for free online and by publishing explanations of what they do in lay language. They could convene online forums where critical issues are discussed and where individuals and government representatives could express their views. As part of the preparation for the June 2006 World Urban Forum in Vancouver, the UN staged HabitatJam, a three-day online forum that attracted 39,000 participants. It could certainly do the same for Internet issues.

Ten or 15 years ago, when the Internet was still mostly the domain of researchers and academics, it was possible to bring together in a single meeting most of the key decisionmakers working on Internet standards and technology as well as the people who cared about their implications. That is no longer possible, except by using the Internet itself. The Internet Society is already starting to reach out to other organizations to explore how such public events could be organized.

Before trying to reinvent Internet governance, those who are unhappy with some Internet practices or who see untapped potential for Internet expansion should begin by using the mechanisms that have proved effective for the past two decades. The Internet continues to grow at an amazing pace, new applications are being developed daily, and new business models are being tried. The current system encourages experimentation and innovation. The Internet has grown and prospered as a bottom-up system. A top-down governance system would alter its very essence. Instead, all who care about the Internet need to work together to find ways to strengthen the bottom-up model that has served the Internet and the Internet community so well.

Two years ago, at a meeting of the UN Information and Communication Technologies Task Force in New York, Vint Cerf, the chairman of ICANN, said, “If it ain’t broke, don’t fix it.” Some people have misinterpreted his words to mean that nothing is wrong and nothing needs to be fixed. No one believes that. We have many issues to address. We need to reduce the cost of Internet access and connect the unconnected; we need to improve the security of cyberspace and fight spam; we need to make it easier to support non-Latin alphabets; we need to promote the adoption of new standards that will enable new, innovative uses of the Internet; and we need better ways of fighting and stopping cybercriminals.

The good news is that we have many different institutions collaborating (and sometimes competing) to find ways to address these problems. Many of those institutions—from the IETF to ICANN to the ITU—are adapting and reaching out to constituencies that were not part of the process in the past. They are becoming more open and transparent. That is helpful and healthy, but we need to continue to strive to make it better. In particular, it would be very useful if funding could be found so that the most talented engineers from the developing world could take more of a role in the Internet rulemaking bodies, so that the concerns of Internet users in those countries could be factored into the technical decisions being made there.

The debate about the future of the Internet should not begin with who gets the impressive titles and who travels to the big meetings. It should begin with the individual Internet user and the individual who has not yet been able to connect. It should focus on the issues that will affect their lives and the way they use the Internet. Most of them do not want a seat on the standards committees. They want to have choice in how they connect to the Internet and the power to use this powerful enabling technology in the ways that best suit their needs and conditions.

For What the Tolls Pay: Fair and Efficient Highway Charges

Hydrogen cars, expensive oil, fuel efficiency standards, and inflation frighten those interested in maintaining and improving U.S. highways. All of these forces could erode the real value of fuel taxes that now are the largest single source of funding for highway programs and an important source of transit funding as well. Because of this worry, the Transportation Research Board convened a committee to carefully examine the future of the fuel tax.

The committee uncovered both good and bad news. The good news is that there is nothing structurally wrong with the fuel tax that will cause the real value of revenues to decline dramatically over the next couple of decades. The bad news is that it is a very crude way to raise revenues for our highway system. Switching to per-mile fees, the committee concluded, would be a much more efficient and equitable approach.

Looking at the good news first, worries that alternative fuels and improving fuel efficiency will undermine the finance system are definitely exaggerated. Radical improvements in efficiency will take a long time to develop and implement, and even less radical improvements, such as hybrid engines, affect fuel consumption very slowly because it takes so long for new models to replace old models in the U.S. car fleet. Moreover, Americans are addicted to oil partly because they are addicted to power. If you make an engine more efficient, they will want it bigger. Consequently, improving technology does not reduce real fuel tax revenues per vehicle-mile nearly as much as one might think. Indeed, those revenues have been roughly constant for a long time.

One cannot be quite as certain regarding the future price of oil. There is some possibility that demand may erode because of an upward trend in the price of gasoline. Department of Energy projections (which have been generally consistent with those from other prominent sources) are optimistic that the price of oil will not surge over the next 15 years or so. But it must be admitted that energy experts did not anticipate the recent price increase to over $60 per barrel.

However, the evidence strongly suggests that recent oil price increases are as much the result of geopolitical forces as they are the result of fundamental supply shortages. It is true that China and India are becoming major oil consumers as they grow rapidly, but it is also true that supplies are increasing. There may be limited supplies of the type of oil that we pump from the ground today, but as one expert puts it, the sources of oil will just become heavier and heavier. If light crude runs out, we’ll turn more to heavy crude. If that becomes scarce, tar sands will be exploited more fully, and if they become expensive, we’ll turn to oil shale. In the process, oil will become more expensive, but it will be a slow process. Of course, wars, boycotts, and other disturbances can cause major price spurts that make optimistic forecasts look foolish, but one has no choice but to base long-run forecasts on fundamental trends, and they are not alarming.

The imposition of severe fuel efficiency standards could upset the gasoline-powered apple cart, but new radical regulation seems politically implausible in the near future. Currently, our two political parties are so closely competitive that no one wants to ask the American people to make major sacrifices. We may be addicted to oil, as the president suggests, but as Mae West remarked, “Too much of a good thing can be wonderful.”

Inflation concerns

The possibility of accelerating inflation raises a political rather than a technical concern. The federal fuel tax is a unit tax. That is to say, it does not vary with the price of gasoline as a percentage sales tax would. Inflation therefore erodes the tax’s purchasing power. Some, like the Chamber of Commerce (in the National Chamber Foundation’s 2005 report Future Highway and Public Transportation Finance), have suggested indexing the tax for inflation. However, that solution may not be politically sustainable. Politicians at the state and local levels often suspend indexing if it becomes the least bit painful.

AN IMPROVED PRICING SYSTEM NOT ONLY HAS THE POTENTIAL FOR GREATLY INCREASING THE EFFICIENCY OF USING EXISTING ROADS, IT CAN ALSO BE HELPFUL IN GUIDING THE ALLOCATION OF NEW HIGHWAY INVESTMENT.

Historically, federal and state politicians have compensated for inflation by periodically raising tax rates. There is some question whether this is possible in the severe antitax climate in which we live today, but if this is a problem, it has nothing to do with the basic structure of the fuel tax. It is a political problem afflicting all forms of taxation.

But it should also be noted that politicians have not been strongly pressured by inflation in recent years. First, the inflation rate has been extremely low by historical standards. Second, at the federal level, the government has been able to capture additional revenues for the highway system without raising tax rates. In 1993, the federal gas tax was increased for the express purpose of reducing the deficit. The proceeds were not to be spent on highways or anything else. In 1997, those revenues were redirected into the highway trust fund and are now available to finance highway expenditures. More recently, an ethanol subsidy that was previously financed out of the highway trust fund will, in the future, be financed out of general revenues, thus releasing more resources for highways.

Congress may now have run out of such devices for increasing federal highway funding, which supports about a quarter of all highway spending. It will be interesting to see how Congress reacts in the future, especially if inflation accelerates a bit. In addition, many think that the most recent federal highway bill will more than spend the earmarked revenues that are available, although this is a controversial issue. If true, that, along with more inflation, may pressure Congress to return to its historical practice of occasionally raising the fuel tax when the federal highway program is reauthorized.

Per-mile fees

Although there are few reasons to fear a rapid erosion of fuel tax revenues in the near future, major revenue increases also seem unlikely. Congress and the state legislatures could raise more revenue with the gas tax if they chose to do so, but the political opposition is formidable. That makes it unlikely that enough will be spent in the near future to improve highway quality significantly, and the nation will have to continue to live with the current level of congestion. But relying solely on increased highway expenditures to reduce congestion is probably not cost-effective. Congestion must also be attacked by imposing extra costs on those who cause it.

Whether the nation just wants to maintain the quality of the current system or to improve it, there is good reason to reform our current approach to financing. In searching for alternatives, there is a strong argument for sticking with the established principle that users should pay and that the resulting revenues should be dedicated to highway expenditures. The revenues collected should be related to the costs that the vehicle imposes on the system, including congestion costs. In an extreme version of the principle, all the revenues and no more should be spent on highways, but the present practice of dedicating some revenues to mass transit certainly is defensible, because mass transit expenditures benefit highway users by reducing congestion.

The current fuel tax is only vaguely related to the amount of wear and tear that a vehicle imposes on the road, and it does not vary with the level of congestion. Per-mile fees that vary with the type of vehicle and time of day would be much more efficient and equitable.

Fifteen years ago, it was not possible to think about collecting per-mile fees efficiently. Costs included constructing tollbooths, paying toll takers, and most important, waiting in line at the tollgate. New technology holds the promise of virtually eliminating such costs.

In the immediate future, developments such as the E-ZPass electronic toll collection system (used on many toll roads and bridges throughout the northeastern states) and license plate imaging greatly increase the opportunities for tolling at low cost. We should exploit these opportunities to the extent possible.

In the longer run, global positioning system (GPS) technology makes it theoretically possible to charge for every road in the country, with fees varying by type of vehicle and the level of congestion. Of course, we may never wish to go that far, and much research is necessary before committing to that path. It is necessary to determine what type of technology is most efficient and to develop safeguards that will assure the public that their privacy will be protected. It is also important to resolve the many problems that will arise as we move from the current system of financing to something completely new. The necessary technology is not costless to develop, but it is very cheap. It is possible that GPS systems will be installed in almost all new cars in the near future, even if they are not required for the purpose of levying a per-mile fee.
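
To make the pricing idea concrete, the sketch below shows one way a per-mile charge that varies by vehicle class and time of day might be computed from GPS-derived mileage totals. The vehicle classes and the rate table are purely hypothetical illustrations, not figures from the committee's report.

    # Hypothetical per-mile fee schedule: rates vary by vehicle class (a rough
    # proxy for road wear) and by whether miles were driven at congested times.
    # All rates are illustrative only.
    RATES_PER_MILE = {
        ("car", "off_peak"): 0.01,
        ("car", "peak"): 0.05,
        ("heavy_truck", "off_peak"): 0.10,
        ("heavy_truck", "peak"): 0.25,
    }

    def trip_charge(vehicle_class, peak_miles, off_peak_miles):
        """Compute the fee for one trip from GPS-derived mileage totals."""
        return (RATES_PER_MILE[(vehicle_class, "peak")] * peak_miles
                + RATES_PER_MILE[(vehicle_class, "off_peak")] * off_peak_miles)

    # A car driving 10 congested miles and 20 uncongested miles pays $0.70;
    # a heavy truck making the same trip pays $4.50, reflecting greater road wear.
    print(f"${trip_charge('car', 10, 20):.2f}")
    print(f"${trip_charge('heavy_truck', 10, 20):.2f}")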

The president’s 2007 budget proposal agrees that new forms of highway funding are desirable. It requests $100 million for a pilot program to involve up to five states in evaluating more efficient pricing systems. The necessary research has already started with an experiment in Oregon, and the Germans have initiated a GPS system for levying fees on trucks on the Autobahn, the national motorway system.

An improved pricing system not only has the potential for greatly increasing the efficiency of using existing roads, it can also be helpful in guiding the allocation of new highway investment. If a certain segment of road is yielding revenues far in excess of the cost of building it, it is a pretty good indication that an expansion of capacity in the area is warranted. If, on the other hand, revenues are not sufficient to pay for costs, any request for new construction should be critically examined.

Although such a system holds the promise of implementing the economist’s dream of perfectly pricing the highway system, it would be naïve to believe that a perfect system could ever be implemented. The per-mile fees will be set by politicians operating in a political environment. There will be strong pressures to keep fees low just as there are pressures today to avoid fuel tax increases. In some cases, there will be legitimate arguments for subsidies. For example, the nation may choose to subsidize rural road networks much as it now subsidizes mail service to rural areas.

The equity argument

Many will question charging per-mile fees out of a concern that it will impose a special hardship on the poor. As the notion of charging for road use is discussed more and more, there are many derogatory comments about “Lexus lanes,” as though only the rich would benefit from a reduction in congestion. It can be noted that it is frequently extremely important for poorer people to get to work on time or to pick up their kids from childcare before overtime fees are charged. But such arguments do not resolve the problem. Some people will be worse off as the result of a per-mile fee, and some of the people who are worse off will be poor.

It is not uncommon to face tradeoffs between economic efficiency and a concern for equity. But there are better ways to protect the poor than to prevent a major improvement in the efficiency of our transportation system. If it is determined that fees particularly hurt the poor—and more research on this question is probably warranted, given that the poor also pay the current fuel tax—policies that make the earned income credit or other welfare programs more generous can be considered. If it is deemed desirable to target additional assistance more precisely on poor highway users, a toll stamp equivalent to the food stamp program might be contemplated, although administrative costs would be very high. It may not be worth it to try for very precise targeting. The basic point is that there are other ways to deal with poverty that are more efficient than not charging properly for roads.

Expanding tolling now would acquaint people with the concept. It is easier to start levying tolls on specific lanes when there are alternative lanes that are free. That will make the public aware of the benefits of congestion pricing. If there are no howls of anguish, politicians might be less inclined to oppose road pricing.

Many years ago, the economist William Vickrey began extolling the virtues of per-mile fees that would vary with the level of congestion. Having been trained originally in engineering, he went so far as to provide detailed discussions of complex systems that would put wires under the street for the purpose of measuring the distance traveled by particular cars at different times of the day. He died tragically just before traveling to Stockholm to receive the Nobel Prize in economics. At the time, we were on the edge of developing new technology that could turn his dream into a practical reality at low cost. Wherever he is, he must be smiling.

The U.S. Energy Subsidy Scorecard

In his State of the Union address on January 31, 2006, President Bush called for more research on alternative energy technologies to help wean the country from its oil dependence. The proposal was not surprising: After all, R&D investment has long been a staple of government efforts to deal with national challenges.

Yet despite its prominent role in the national debate, R&D has constituted a relatively small share of overall government investment in the energy sector since 1950. According to our analysis, the federal government invested $644 billion (in 2003 dollars) in efforts to promote and support energy development between 1950 and 2003. Of this, only $60.6 billion or 18.7% went for R&D. It was dwarfed by tax incentives (43.7%).

Indeed, our analysis makes clear that there are diverse ways in which the federal government has supported (and can support) energy development. In addition to R&D and tax policy, it has used regulatory policy (exemption from regulations and payment by the federal government of the costs of regulating the technology), disbursements (direct financial subsidies such as grants), government services (federal assistance provided without direct charge), and market activity (direct federal involvement in the marketplace).

SURPRISES ABOUND. TAX SUBSIDIES OUTPACE R&D SPENDING. SOLAR R&D IS WELL FUNDED. OIL PRODUCTION IS THE BIG WINNER. COAL RECEIVES ALMOST AS MUCH IN TAX SUBSIDIES AS IT DOES FOR R&D. NUCLEAR POWER RECEIVES MUCH LESS THAN COAL FOR R&D.

We found that R&D funds were of primary importance to nuclear, solar, and geothermal energy. Tax incentives comprised 87% of subsidies for natural gas. Federal market activities made up 75% of the subsidies for hydroelectric power. Tax incentives and R&D support each provided about one-third of the subsidies for coal.

As for future policy, there appears to be an emerging consensus that expanded support for renewable energy technologies is warranted. We found that although the government is often criticized for its failure to support renewable energy, federal investment has actually been rather generous, especially in light of the small contribution that renewable sources have made to overall energy production. As the country maps out its energy plan, we recommend that federal officials pay particular attention to renewable energy investments that will lead to market success and a larger share of total supply.

The power of tax incentives

Policies that allowed energy companies to forgo paying taxes dwarfed all other kinds of federal incentives for energy development. Tax policy accounted for $281.3 billion of total federal investment between 1950 and 2003, with the oil industry receiving $155.4 billion and the natural gas industry $75.6 billion.

Distribution of Federal Energy Incentives by Type, 1950-2003

Source: Management Information Services, Inc.

The dominance of oil

The conventional wisdom that the oil industry has been the major beneficiary of federal financial largess is correct. Oil accounted for nearly half ($302 billion) of all federal support between 1950 and 2003.

Distribution of Federal Energy Incentives among Energy Sources, 1950-2003

Source: Management Information Services, Inc.

Renewable energy not neglected

The perception that the renewable industry has been historically shortchanged is open to debate. Since 1950, renewable energy (solar, hydropower, and geothermal) has received the second largest subsidy—$111 billion (17%), compared to $63 billion for nuclear power, $81 billion for coal, and $87 billion for natural gas.
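
The by-source figures cited here and earlier in this piece can be cross-checked against the $644 billion total; a short sketch of that arithmetic follows, using only dollar amounts taken from this article.

    # Federal energy incentives, 1950-2003, in billions of 2003 dollars,
    # as cited in this article.
    incentives = {"oil": 302, "renewables": 111, "natural gas": 87,
                  "coal": 81, "nuclear": 63}

    total = sum(incentives.values())
    print(f"Sum across sources: ${total} billion")  # 644, matching the stated total

    for source, amount in incentives.items():
        print(f"{source}: {100 * amount / total:.0f}% of the total")
    # Oil comes to about 47% ("nearly half"); renewables to about 17%, as stated above.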

Federal R&D Expenses for Selected Technologies, 1976-2003

LEGEND: PV: Photovoltaic (renewable); ST: Solar Thermal (renewable); ANS: Advanced Nuclear Systems (nuclear); CS: Combustion Systems (coal); AR&T: Advanced Research and Technology (coal); LWR: Light Water Reactor (nuclear); Mag: Magnetohydrodynamics (coal); Wind: Wind Energy Systems (renewable); ARP: Advanced Radioisotope Power Systems (nuclear).

Source: Management Information Services, Inc.

Cost/benefit mismatch

Considerable disparity exists between the level of incentives received by different energy sources and their current contribution to the U.S. energy mix. Although oil has received roughly its proportionate share of energy subsidies, nuclear energy, natural gas, and coal may have been undersubsidized, and renewable energy, especially solar, may have received a disproportionately large share of federal energy incentives.

Federal Energy Incentives through 2003 Compared to Share of 2003 U.S. Energy Production

Source: Management Information Services, Inc.

Skewed R&D expenditures

Recent federal R&D expenditures bear little relation to the contributions of various energy sources to the total energy mix. For example, renewable sources excluding hydro produce little energy or electricity but received $3.7 billion in R&D funds between 1994 and 2003, whereas coal, which provides about one-third of U.S. energy requirements and generates more than half of the nation’s electricity, received just slightly more in R&D money ($3.9 billion). Nuclear energy, which provided 10% of the nation’s energy and 20% of its electricity, was also underfunded, receiving $1.6 billion in R&D funds.

Federal R&D Energy Expenditures, 1994-2003, Compared to 2003 U.S. Electricity Production

Source: Management Information Services, Inc.

Protecting the Best of the West

Once considered the leftovers of Western settlement and land grabs, the 261 million acres of deserts, forests, river valleys, mountains, and canyons managed by the federal Bureau of Land Management (BLM) are now in hot demand. Pressure to open more of these lands for oil and gas drilling has never been greater. Traditional uses of BLM lands, including logging, livestock grazing, and mining, continue. At the same time, expanding cities and suburbs juxtapose populations beside BLM lands as never before, and new technologies such as all-terrain vehicles make once-remote BLM lands widely accessible. Increasingly, the distinctive Western landscapes of BLM lands are a magnet for all who prize outdoor recreation—from hikers to off-road vehicle enthusiasts, from birdwatchers to hunters.

Congress, as well as past presidents and ordinary citizens, has realized (albeit belatedly) that BLM lands are rich in unique characteristics that merit conservation: wildlife, clean water, cultural and historic relics, open space, awesome scenic vistas, and soul-nourishing solitude. In recognition of the need to protect the BLM lands with the greatest richness of natural and historical resources, the Clinton administration in 2000 designated 26 million acres as the National Landscape Conservation System (NLCS) to help keep these stellar areas “healthy, wild, and open.”

Now, conservationists of all stripes are watching the BLM closely. They ask: Can a federal agency historically attuned to maximizing resource development also address the challenge of conservation?

A recent assessment of the condition of the NLCS—and of the BLM’s stewardship of those lands—offers a litmus test. The Wilderness Society and the World Resources Institute jointly conducted the assessment and issued results in October 2005. Our report, State of the NLCS: A First Assessment, finds that the NLCS’s natural and cultural resources are at risk under the BLM’s oversight.

Fortunately, the assessment also offers good news: It is not too late for the BLM, the administration, and Congress to safeguard the public treasures of the NLCS. In order to ensure that the BLM becomes a model for conservation and scientific learning in some of the nation’s most special places, we recommend more funding and staffing, coupled with a commitment from leaders of the Department of the Interior, which oversees the BLM, to prioritize conservation on its premier Western lands. We also encourage a range of actions, including annual reporting and expanded volunteer programs, that would come at little cost to the agency or the federal budget.

From rags to riches

The federal government created the BLM in 1946 by combining the General Land Office and the Grazing Service. Today, the BLM manages more public land than the Park Service, Forest Service, or Fish and Wildlife Service. One-fifth of the land in states west of the Rocky Mountains falls under the BLM’s purview.

For decades, BLM lands were perceived as “the lands no one wanted” or areas most useful for cheap grazing and mineral extraction. Indeed, the BLM was known in some quarters as the “Bureau of Livestock and Mining.”

Yet, in fact, BLM lands are rich in a diversity of resources in addition to oil, gas, minerals, and rangeland.

Water. An estimated 65% of the West’s wildlife depends for survival on riparian areas: lush areas adjacent to waterways. The BLM administers 144,000 miles of riparian-lined streams and 13 million acres of wetlands.

Cultural resources. The BLM manages the largest, most diverse, and most scientifically important body of cultural resources of any federal land agency. Extensive evidence of 13,000 years of human history on BLM lands ranges from prehistoric Native American archaeological sites to pioneer homesteads from the 19th and early 20th centuries. Although just 6% of BLM lands have been surveyed for cultural resources, 263,000 cultural properties have already been discovered; archaeologists estimate there are likely to be 4.5 million sites on all BLM lands. The significance of and threats to these cultural resources were underscored in 2005 when the National Trust for Historic Preservation listed the entire NLCS as one of the nation’s most endangered historic places.

Paleontological resources. Fossils that are hundreds of millions of years old are preserved on BLM lands, and they provide important insight on topics such as the extinction of dinosaurs and the evolution of plant and animal communities.

Wildlife habitat. BLM lands are host to 228 plant and animal species listed as threatened or endangered and to more than 1,500 “sensitive” species. These lands provide 90 million acres of key habitat for big game such as antelope, mule deer, bighorn sheep, and elk. The lands also are important for 400 species of songbirds, and the future of sage grouse populations in the West will depend on the BLM’s protection of their habitat.

Ecosystem services. Native plants on BLM lands help to prevent the spread of costly invasive weeds, reduce the risk of wildfires, and minimize soil erosion to help keep waterways clean and healthy.

Natural playgrounds. Recreational opportunities abound on BLM lands. In 2004, some 54 million people visited these areas to hike, camp, picnic, hunt, fish, ride horses, raft, canoe, and use off-road vehicles.

Open space. BLM lands are increasingly valuable places to find solitude and silence. In the lower 48 states, nearly two-thirds of BLM lands are within an hour’s drive of urban areas, and 22 million people live within 25 miles of BLM lands.

BLM LANDS ARE RICH IN UNIQUE CHARACTERISTICS THAT MERIT CONSERVATION: WILDLIFE, CLEAN WATER, CULTURAL AND HISTORIC RELICS, OPEN SPACE, AWESOME SCENIC VISTAS, AND SOUL-NOURISHING SOLITUDE.

All of these values are hallmarks of the NLCS. The NLCS brings together many of the BLM’s most sensitive landscapes: National Monuments, National Conservation Areas, Wilderness Areas, Wilderness Study Areas, Historic Trails, and Wild and Scenic Rivers. According to former Secretary of the Interior Bruce Babbitt, the NLCS “was created to safeguard landscapes that are as spectacular in their own way as national parks.” Importantly, though, NLCS areas are intended to embody a different concept than national parks by minimizing visitor facilities and the evidence of civilization’s encroachment to provide visitors a chance to see the West through the eyes of the first native peoples and pioneers.

Unlike the National Park Service, with its clear mandate to conserve natural and historical resources, the BLM must manage its lands and waters for a variety of uses that can and do conflict. In 2004, the BLM reported that 224 million of its 261 million acres were available for energy and mineral exploration and development. The agency manages approximately 53,000 oil, gas, coal, and geothermal leases, and 220,000 hardrock mining claims. In addition, 159 million acres are in livestock grazing allotments. Another 11 million acres are forest, much of which the BLM manages for commercial logging, as in western Oregon, where 0.5 million of the 2.4 million forest acres are managed intensively for timber production.

Accordingly, the BLM maintains that it is a multiple-use agency, while acknowledging that conservation is part of its mission. Indeed, federal regulations make it clear that multiple use does not trump the need for the BLM to also meet conservation goals and manage for recreation, scenic, scientific, and historical values. Although the agency can allow resource development even if it will cause degradation, its discretion is not unlimited. Moreover, the agency holds considerable—but underused—authority to restrict environmentally adverse activities to protect the land and its flora and fauna.

The BLM’s conservation responsibility and authority derive primarily from the Federal Land Policy and Management Act of 1976. This legislation makes clear that rare and special places can be protected from competing or damaging uses and that multiple use does not mean that every acre must or should be available for all uses. In this way, BLM lands taken as a whole serve multiple uses, leaving ample room, even an obligation, to manage special places with conservation goals paramount over other uses.

In fact, the BLM has legal directives to preserve most of the 26 million acres of the NLCS, particularly National Monuments and Wilderness Areas. Although the designation of the NLCS itself carries no statutory authority, the individual pieces of the system were each designated under specific authorizing legislation that imparts a conservation purpose. These include the Antiquities Act of 1906, the Wilderness Act of 1964, the Wild and Scenic Rivers Act of 1968, and the National Trails System Act of 1968.

Skewed policy agenda

The BLM’s policy agenda, however, has often been dominated by considerations that can work against conservation. The nation’s rising energy needs are placing particular pressures on BLM lands. In order to expedite oil and gas leasing and development, the agency is briskly leasing wild lands, despite a backlog of leases and drilling permits still unused by the oil and gas industry and record levels of permits issued nationally. Since 2003, the BLM has continually offered oil and gas leases on spectacular roadless lands in Utah and Colorado that have been identified (in many instances by the agency itself) as harboring wilderness values. More than 50,000 acres in proposed Colorado Wilderness Areas have been leased in the past two years alone, and more than 100,000 acres in Utah have been offered at lease auctions.

Recent BLM management plans open almost entire areas to oil and gas development. In three recent draft and final land-use plans affecting 8.6 million acres (Greater Otero Mesa in New Mexico, the Great Divide in Wyoming, and the Vernal Field Office in Utah), 97% of the total area is proposed to be open to oil and gas development. In an August 2005 speech to the Rocky Mountain Natural Gas Strategy Conference, Assistant Secretary of the Interior Rebecca Watson listed as a notable accomplishment that the BLM is processing applications for drilling permits in record numbers, with the current administration issuing more than 17,000 permits during the past four years, 74% more than the Clinton administration. The BLM estimates that it will process more than 12,000 drilling permits in the next fiscal year.

A June 2005 report by the Government Accountability Office found that the BLM’s rush to drill keeps the agency too busy to monitor and enforce clean air and water laws. During the past six years, the number of drilling permits issued annually by the BLM tripled from 1,803 to 6,399. Four of the eight BLM field offices that issued 75% of these drilling permits did not have any plans in place to monitor natural or cultural resources. The report noted that BLM staffers were too busy processing drilling permit applications to have time to develop the monitoring plans.

In addition, the BLM, like other federal land management agencies, has long been caught in the jobs-versus-the-environment debate, which creates pressure to keep public lands open to oil and gas development, mining, and logging. But recent economic analysis is helping to dispel the perception that conservation on public lands is incompatible with economic prosperity.

A 2004 study by the Sonoran Institute found that protected public lands, including BLM lands such as National Monuments, are increasingly important to the economy of western communities. The changing western economy means that historically important resource extraction sectors provide comparatively fewer jobs; personal income from resource industries such as mining, oil and gas development, and ranching represents just 8% of total personal income, down from 20% in 1970, although there is wide variation among states. Meanwhile, counties with or near protected public lands tend to have the fastest local economic growth. Areas in and around protected areas are most likely to attract business owners, an educated work force, producer services, investment income, retirees, and real estate development, all factors in a diverse and growing economy. For example, since the designation of the BLM’s Grand Staircase-Escalante National Monument in Utah in 1996, neighboring Garfield County has seen wages shift from declining at a rate of 6% to growing at 7%, as well as declines in unemployment and significant growth in personal income. Still, as long as the mythology of jobs versus environment prevails, the BLM is vulnerable to pressure from rural western communities, politicians, and extractive industries, who argue that a federal emphasis on conservation will set up roadblocks to productive uses of natural resources.

A JUNE 2005 REPORT BY THE GOVERNMENT ACCOUNTABILITY OFFICE FOUND THAT THE BLM’S RUSH TO DRILL KEEPS THE AGENCY TOO BUSY TO MONITOR AND ENFORCE CLEAN AIR AND WATER LAWS.

Another factor muddling the picture is the BLM’s budget structure. With its many categories and subcategories, the structure effectively discourages program integration and limits budgetary accountability. For example, the NLCS receives funding from at least seven different budget categories and subcategories, making it difficult for the BLM and members of the public to calculate the amount of money devoted to the NLCS. There also is a significant mismatch between congressional budgets and the nature of the work that the BLM performs. The BLM’s work is governed by multiple-use mandates and is ecosystem-based. Ecosystem management is a multiyear process that requires secure, consistent funding and adequate data. Congressional budget authorizations, on the other hand, normally cover only one year at a time and thus pose a significant impediment to planning and implementing longer-term projects needed to restore or protect ecosystems.

Because the BLM doesn’t include a separate budget line for the NLCS within its agency budget, and because of reallocations of funding and other cuts during the year, it is difficult to determine the amount that was allocated to the NLCS in the past fiscal year (FY 2006), or this year (FY 2007). Best estimates, however, make it clear that the NLCS operates with bare-bones funding—probably about $42 million for FY 2006, with even less for FY 2007. For comparison, consider that the NLCS budget is roughly 2.5% of the BLM’s $1.8 billion budget, for 10% of the agency’s most precious lands and waters. The NLCS’s funding is less than half of the allocation for the BLM’s energy and minerals management program, for which $108 million was appropriated for FY 2006, with $135 million proposed for FY 2007. NLCS funding also is a fraction of the funding for comparable land management agencies. The 2006 budget for NLCS translates to about $1.70 per acre, compared to the roughly $5 per acre that goes to the National Wildlife Refuge System and $19 per acre to the National Park Service. Funding for land acquisition by the four major federal land management agencies, including the BLM, via the Land and Water Conservation Fund, has declined by 80% in the past decade.
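
The per-acre and percentage comparisons follow directly from the figures above; here is a minimal sketch of the arithmetic, using the roughly $42 million NLCS estimate, the system's 26 million acres, and the BLM's $1.8 billion budget as cited in this article.

    # NLCS funding arithmetic, using figures cited in this article.
    NLCS_BUDGET_FY2006 = 42e6   # estimated NLCS funding, dollars
    NLCS_ACRES = 26e6           # acres in the NLCS
    BLM_BUDGET = 1.8e9          # total BLM budget, dollars

    print(f"NLCS share of BLM budget: {100 * NLCS_BUDGET_FY2006 / BLM_BUDGET:.1f}%")  # about 2.3%
    print(f"NLCS funding per acre: ${NLCS_BUDGET_FY2006 / NLCS_ACRES:.2f}")           # about $1.62

Both results land near the "roughly 2.5%" and "$1.70 per acre" figures quoted above.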

Taking stock of stewardship

Good management practice dictates that the BLM should establish a regular means of assessing the condition of its special areas in order to provide early warning of change, make conservation a priority among its other important objectives, help determine budgets, and provide the public and Congress the means to gauge progress and hold the agency accountable.

The Government Performance and Results Act of 1993 (GPRA) provides an impetus for land management agencies to plan, implement, monitor, and report on progress toward performance goals. And, in keeping with the GPRA framework, the BLM’s 2004 annual report cites several goals and accomplishments for resource protection, such as statistics on acres of riparian land restored and cultural resources stabilized.

What these general overview data do not offer is a full picture of trends, conditions, and conservation stewardship capacity. In particular, they provide little basis for gauging whether the BLM is meeting the unique conservation mandates of National Monuments and other places set aside to protect specific wildlife, plants, and their habitat, as well as large ecosystems and wilderness. Nor does the BLM produce annual reports for individual National Monuments and Conservation Areas with consistent, regular, and quantitative measures of progress toward specific conservation goals.

In order to fill this void, the Wilderness Society and World Resources Institute undertook a preliminary assessment to determine whether the BLM is meeting its conservation mandate. We decided to focus on the NLCS because its areas carry a clear conservation aim via proclamation or legislation, and because, by mandate, the BLM currently is creating management plans that will institutionalize conservation objectives for areas within the system.

For simplicity’s sake, we kept the scope of our assessment relatively narrow. We focused on 15 specially designated areas or “units” in the NLCS, some selected to reflect geographic and ecosystem diversity and others selected randomly. We then homed in on issues relevant to stewardship and ecosystem condition, such as accountability, natural resource monitoring, cultural resource protection, and visitor management. For each issue, we identified a series of indicators and measures: 35 indicators in total. For example, we used the degree to which an area is fragmented by roads as one measure of ecosystem health.
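
To make the bookkeeping concrete, the sketch below shows one way indicator scores can be grouped by issue and rolled up into a per-unit summary. It is our illustration only; the unit name, issues, scores, and grading scale are hypothetical and are not drawn from the report’s actual data or tools.

```python
# Hypothetical roll-up of indicator scores into issue-level grades for one unit.
# Scores use a 0-4 scale (4 = A); every value here is invented for illustration.
from statistics import mean

scores = {
    "Example National Monument": {
        "natural resource monitoring": [3, 2, 4],   # e.g., weed inventory, water quality, wildlife surveys
        "cultural resource protection": [2, 2],
        "visitor management": [3, 3, 2],
    },
}

def letter_grade(avg: float) -> str:
    """Map a 0-4 average onto a letter grade."""
    return "FDCBA"[round(avg)]

for unit, issues in scores.items():
    for issue, values in issues.items():
        print(f"{unit} | {issue}: {letter_grade(mean(values))}")
```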

Overall, we found that the BLM is woefully lacking in funds, leadership, and data to achieve its conservation mission on NLCS lands. In our report, we include a scorecard that summarizes our findings by issue and NLCS unit. C’s and D’s dominate for issues such as the capacity to protect wild and untouched areas and to monitor special natural resources. Although the NLCS as a whole scored no higher than a C for any issue, some individual areas merited A’s and B’s for select aspects of stewardship and conservation. In particular, we found:

An understaffed and inadequately empowered conservation system. NLCS managers (the BLM staffers responsible for the day-to-day management of individual NLCS units) have neither the stature nor the authority to serve as the public face of conservation for the BLM’s special landscapes and to ensure that conservation is prioritized by their agency. Only one-third of the managers interviewed are vested with line authority: the formal authority to direct staff, with clear, consistent responsibilities to make decisions, issue orders, and allocate resources.

Most BLM National Monuments and Conservation Areas are understaffed, mostly because of funding constraints. Most areas lack dedicated time from archaeologists, ecologists, law enforcement rangers, and public education specialists. For example, only one-third of the 15 Monuments and Conservation Areas examined have more than one full-time law enforcement ranger; several have only a half-time ranger. A ranger must patrol, on average, 200,000 acres, making it impossible to check remote areas or specific sites regularly. Growth in enforcement staff needs to keep pace with growth in use; in some areas, visitor numbers have quadrupled in the past five years.

Although most National Monuments were designated under the Antiquities Act for “scientific study” and many Conservation Areas offer excellent scientific learning opportunities for scientists, students, and members of local communities, few of them have the staff to capitalize on those objectives. About 80% of National Monuments and Conservation Areas have a public education specialist, but typically this is less than a full-time or even half-time outreach professional. As one BLM staff member said, “We always identify in our work plans that we’re going to use environmental education and interpretation as a major tool to get public compliance with land stewardship, but then we fail to fund environmental education, or try to add it to an already overburdened staff person.”

A paucity of natural resource monitoring and trend data. Large data gaps make it difficult, even impossible, for the BLM to effectively manage its conservation lands and waters. For example, only 4 of 15 National Monuments and Conservation Areas conducted complete inventories for invasive weeds, and rarely do Monuments and Conservation Areas have comprehensive water-quality monitoring programs.

Collecting more data is not always the priority need. Our queries of BLM staff suggest that in some places, detailed data on key indicators of resource condition are already available. For example, the Headwaters Forest Reserve in California has summarized trend data for threatened and endangered species into an easy-to-interpret format. More often, however, the data are not rendered into useful information; they are not compiled, integrated, and analyzed to facilitate place-specific assessments by NLCS managers.

Data on recreational activities, which are important for gauging pressures on resources and deciding how many law enforcement rangers are needed, and where, are fraught with inconsistency. The BLM does track total visitors to each part of the NLCS, as well as nearly a dozen recreational uses. However, during the past five years, some NLCS units have changed how they measure visitor use, rendering trend data nearly useless.

Ecosystem health: Condition unknown. Data to assess ecosystem condition in the NLCS are poor, due in part to the lack of comprehensive and consistent place-specific monitoring programs. One significant concern is the degree to which wildlife habitat is fragmented by roads and routes. On average, 76% of land in the 15 areas examined is within one mile of a road, and 90% is within 2 miles of a road. Abundant research has demonstrated that roads can have a negative impact on wildlife at these distances, and they also facilitate damage from off-road vehicles, the invasion of non-native animal and plant species, and the spread of fires.
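
The road-proximity measure is essentially a buffer-and-overlay calculation. The sketch below is our own illustration, not the report’s GIS workflow; it uses the open-source shapely library and assumes geometries in a projected, meter-based coordinate system.

```python
# Rough sketch: share of a unit's land lying within a given distance of mapped roads.
# Assumes the unit and road geometries use a projected CRS measured in meters.
from shapely.geometry import Polygon, LineString
from shapely.ops import unary_union

METERS_PER_MILE = 1609.34

def share_near_roads(unit, roads, miles):
    """Fraction of the unit's area within `miles` of any road."""
    road_zone = unary_union([road.buffer(miles * METERS_PER_MILE) for road in roads])
    return unit.intersection(road_zone).area / unit.area

# Toy example: a 10 km x 10 km unit crossed by a single east-west road.
unit = Polygon([(0, 0), (10_000, 0), (10_000, 10_000), (0, 10_000)])
roads = [LineString([(0, 5_000), (10_000, 5_000)])]
print(f"{share_near_roads(unit, roads, 1.0):.0%} of the unit lies within 1 mile of a road")
```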

Available data reflect widely varying land health conditions systemwide. For example, 95% of the riparian areas assessed in Colorado’s Gunnison Gorge National Conservation Area were judged to be in “proper functioning condition” (meaning that they are able to minimize erosion, improve water quality, and support biodiversity). In contrast, only 7% of the streams in Colorado’s Canyons of the Ancients National Monument meet the proper functioning standard. Similarly, invasive species problems range from areas where nearly all of the land—tens of thousands of acres—is affected, to areas with virtually none affected.

Endangered cultural resources. The condition of cultural resources is difficult to summarize, because the BLM lacks the capacity to adequately monitor cultural sites. Indeed, the agency has comprehensively inventoried cultural resources in just 6 to 7% of the total area encompassed by Monuments and Conservation Areas. Some of the archaeologists interviewed thought the majority of their sites were in stable condition, but all described sites they knew were at risk, typically due to erosion, accessibility, looting, or careless campers.

The majority of cultural inventories are carried out when a drilling or grazing permit, power line, or other development is proposed and the BLM must meet its legal obligation to comply with the National Historic Preservation Act and assess impacts to cultural resources in those permit areas. With the rapid increase in permit application processing, too often these cultural resource compliance surveys are conducted late in the process: not when the agency is considering whether to lease, but after private investments have already been made. And, unfortunately, many BLM archaeologists report that the majority of their time (60 to 70%) is occupied by compliance work related to proposed development. Few have time or funds to undertake landscape-scale archaeological surveys in areas of highest priority to inform land-use plans, road closures, and the management of public access and recreation.

Reinventing a conservation agency

Elevating and advancing the BLM’s conservation mission, especially in the face of conflicting priorities and pressures, requires actions by the agency, Congress, and concerned stakeholders nationwide.

Among the steps that the BLM can take, some will require a shift in priorities, but most will require only modest amounts of funding. For example, the BLM should:


Undertake regular indicator-driven conservation assessments. The old business adage “you manage what you measure” applies equally to the BLM and conservation. Setting specific conservation goals for the NLCS and measuring progress toward them would help the agency focus on conservation as a priority and reward progress. The indicators of progress need not be all or only the ones that we used in State of the NLCS; indeed, the BLM should engage in a process with nongovernmental organizations and other partners to agree on a set of measures for natural and cultural resource health. The agency should then commit to tracking those indicators at the NLCS unit level in annual or biennial reports. This would enable basic public oversight and foster informed participation in public lands planning, management, and protection. Such reports also would bolster the public’s view of the BLM as an accountable and capable conservation organization. (We recently learned that the BLM does plan to begin issuing annual reports on its NLCS National Monuments and Conservation Areas in 2006; it remains to be seen what data and quantitative measures of progress the reports will include.)

Plan for resource conservation. The BLM is still crafting “Resource Management Plans” (the BLM’s term for land-use plans) for about half of its National Monuments and Conservation Areas. These plans, which serve as blueprints for decisionmaking for up to two decades, are a sterling opportunity to provide clear and unequivocal conservation guidance. For example, plans should give direction regarding species monitoring and water-quality monitoring, and they should include a cultural resource protection program. Also critical is the inclusion of a plan for roads and travel within the areas that minimizes damage from motorized vehicle use and closes unnecessary or damaging roads, with a specific time frame for closures.

Replicate best practices for conservation. The State of the NLCS report identifies more than a dozen laudable examples of BLM projects that are creatively improving or protecting resources in NLCS areas. For example, in Arizona’s Agua Fria National Monument, volunteers and students record petroglyphs, and in Idaho’s Snake River Birds of Prey National Conservation Area, BLM staff place signs and paths strategically to guide visitors away from overused campsites and to reduce off-road driving to prime locations for raptor viewing. Also in Idaho, at Craters of the Moon National Monument, the BLM found that adding the image of an American flag to signs along roads and trails discourages the use of the signs for target practice, reducing the need for their costly replacement. To encourage such best practices, restoration or land protection ideas could be shared at an annual BLM “NLCS Conservation Congress” and highlighted with annual BLM conservation awards for outstanding personnel and projects.

Expand site steward programs and volunteer programs. More than half of the National Monuments and Conservation Areas examined benefit from strong and effective cultural resource stewardship programs that use volunteers—often archaeologists themselves—as site monitors, educators, and protectors of special places. These volunteers help shorthanded BLM staff and enhance the agency’s capacity to accomplish its goals. Volunteers in many areas also assist with natural resource protection and restoration, undertaking tasks such as removing invasive plants and converting unnecessary roads to foot, horseback, and mountain-bike use.

For its part, Congress can play a major role in reinventing the BLM as a conservation agency. It should:

Give the NLCS a statutory basis. Just as the National Park Service Organic Act of 1916 provides the Department of the Interior with a clear management mandate for parks, the NLCS needs a similar basic law to guide its management. Congress should provide that law in the form of an NLCS “organic” act giving the BLM a clear mission of protecting the NLCS. An NLCS Act need not change existing uses of NLCS lands but could help prioritize and clarify the BLM’s conservation agenda.

Increase funding for BLM conservation. Congressional funding priorities should include appropriations for natural resource monitoring, cultural resources inventory and monitoring, habitat restoration, and law enforcement, particularly in areas where visitor use is growing most rapidly or resources are most fragile. Another priority is funding the implementation of Resource Management Plans for various NLCS units that the BLM is scheduled to complete in 2006 and 2007.

Additional funding for land acquisition also is critical for the BLM to fend off encroaching residential and commercial development of private inholdings, and to create buffer zones around its most special ecosystems. One source of revenue for conservation budgets could be some of the hundreds of millions of dollars generated from mineral development on BLM lands.

Reorient the BLM budget structure toward conservation. Congress should create a budget category for management activities devoted to conservation and ecological restoration. Currently, conservation funds are scattered in several diverse budget categories. A subcategory of the new conservation/ecological restoration budget category should be devoted to the NLCS.

This is a time of great challenge for the NLCS. Without BLM leadership, congressional funding, and citizen involvement, significant segments of the NLCS are likely to suffer serious degradation, possibly forever. The path forward is clear: It is up to the nation to seize the opportunity to protect some of its greatest public lands.