Forum

Natural gas crisis?

Gary J. Schmitt’s “Natural Gas: The Next Energy Crisis?” (Issues, Summer 2006) examines a topic that deserves more attention than it has received. Although rising oil prices have preoccupied the U.S. public and policymakers, we are in the midst of a natural gas crunch that is at least as serious. Natural gas, as Schmitt points out, plays an increasingly large role in electricity generation and home heating. Equally important, it is a key ingredient in a wide variety of manufactured products ranging from cosmetics to fertilizers.

Unlike oil, which is traded at a uniform price in a global market, natural gas prices vary widely from country to country. U.S. prices are the highest in the industrialized world, and this fact works to the disadvantage of our manufacturing sector. It has been estimated that natural gas price differentials have resulted in the loss of 2.8 million U.S. manufacturing jobs.

These concerns about natural gas prompted our support for the Deep Ocean Energy Resources (DOER) Act, which passed the House of Representatives on June 29, 2006. The measure opens currently restricted areas of the outer continental shelf to oil and gas exploration, areas that include reserves sufficient to supply enough domestic natural gas to meet U.S. needs for years to come. To win support from the Florida and California House delegations, we included provisions permitting individual states to maintain or reimpose drilling prohibitions up to 100 miles from their shores.

We believe that the DOER Act addresses the growing natural gas crisis explored by Schmitt and does so in an environmentally and fiscally responsible way. The measure is now in the hands of the Senate, and that chamber has the opportunity to avert a predicament that will only worsen if we fail to deal with it.

REP. NEIL ABERCROMBIE

Democrat of Hawaii

REP. JOHN PETERSON

Republican of Pennsylvania


Energy and security

In “The Myth of Energy Insecurity” (Issues, Summer 2006), Philip E. Auerswald argues that “increasing oil imports do not pose a threat to long-term U.S. security” because today’s energy markets make the threat of an economic shock from a severe oil price change unlikely, and high prices will accelerate technological change. Although the U.S. economy may withstand high energy prices, and higher prices have modestly accelerated the pace of technological change, Auerswald misidentifies the source of the energy security threat, exaggerates the likelihood that prices will reduce the pace of consumption, and draws exactly the wrong conclusion.

The United States is more energy-insecure today than at any time in the past 30 years. The energy dependence of the United States and other consuming nations (not the level of imports) is rapidly eroding U.S. power and influence around the world in four ways. First, consuming nations are reluctant to join coalitions to combat weapons proliferation and terrorism because of their dependency on their oil suppliers or their desire to secure access to exploration acreage. Chinese resistance to sanctions on Iran or Sudan and European resistance to pressure on Iran or Russia are good examples of this phenomenon. Second, when exporters have very high revenues, with earnings far in excess of those needed to finance their own budgets, they act with impunity toward their own people, their neighbors, and/or the United States: Witness Russia’s pressure on its neighbors, Iran’s flouting of international pressure regarding its nuclear program, and Venezuelan President Chavez’s competition with the United States for influence in the hemisphere. Third, high revenues are impairing the efficiency of oil markets by encouraging a new resource nationalism that restricts international access to new oil exploration acreage in Russia, Venezuela, and Ecuador, as well as most of OPEC. Most national oil companies are historically highly inefficient and undercapitalized because of the formidable needs of countries to tap their earnings for government budgets rather than reinvestment. Fourth, new nonmarket economies such as China and India are eroding U.S. influence in Latin America and Africa by their willingness to subsidize investment in exploration and to invest without insisting on host country support for transparency, governance, or acceptable human rights practices.

Looking ahead, the trends are terrible: Even at current price levels, global demand is rising rapidly and is likely to double in volume by 2030, with OPEC’s share increasing. Reducing the revenue stream to our adversaries and competitors and using government policy to change the way the world fuels transportation are the only way to reduce this rapidly growing security threat. Change will be slow, given the volume of energy that the world consumes and the investment in existing delivery infrastructure. If high prices were leading to greater fuel economy in the United States and developing Asia, we might have hope that the market would cure this threat, as Auerswald suggests. There is no evidence that this is occurring; U.S. and global demand continues to rise. Energy security is a public good; the market will not provide it. Even today’s prices do not reflect the security externalities of oil consumption. Energy insecurity is not a myth. It is a clear and present danger, and the sooner we accept that, the sooner we will muster the political will to address it.

DAVID L. GOLDWYN

President

Goldwyn International Strategies

Washington, DC

David L. Goldwyn is co-editor of Energy and Security: Toward a New Foreign Policy Strategy.


Philip E. Auerswald’s article brings much common sense to the debate over U.S. oil dependence, which is hyped up by calls for the elimination of U.S. oil imports; nonetheless, in my view we should still be doing more to promote oil conservation.

Auerswald points out that the U.S. oil price is determined on world markets, regardless of how much oil we import. He also argues that the U.S. economy is not especially vulnerable to oil price shocks, given the small share of oil products in gross domestic product. But although the recent trebling of oil prices has not derailed the U.S. economy, a future price shock may have more serious consequences; for example, if the economy is already in a recession, or if currency and other financial speculators are jittery about large trade and fiscal imbalances. Given very tight conditions in the world oil market, any number of economic or political developments might precipitate such a price shock. Economic analyses suggest that an oil tax of roughly $5 per barrel or more might be warranted to address various macroeconomic risks from oil price shocks that private markets fail to take into account.

As regards the environment, Auerswald is right that the most important worry regarding oil consumption is its contribution of greenhouse gases to future climate change. Economists have attempted to quantify the potential damages to world agriculture, coastal activities, human health, and so on from greenhouse gases, even making a crude allowance for the risk of extreme climate scenarios. Although contentious, these studies suggest that a tax of up to $50 per ton of carbon, equivalent to an additional $5 per barrel of oil, should be imposed, so that market prices reflect environmental costs.

Geopolitical costs include constraints on foreign policy due to reluctance to upset major oil producers. Oil revenues may also end up funding insurgents in Iraq, other terrorist groups, or rogue states. However, although this petrodollar flow is of major concern, we have limited ability to prevent it in the near term; even though an oil tax of $10 per barrel would reduce oil imports, it would lower long-run world prices by only around 1.5% at best.

Auerswald is correct that high oil prices are the best way to induce households and firms to economize on all oil uses; on economic efficiency grounds, the government should phase in a tax of around $10 per barrel or more, using the revenues to cut the deficit or other taxes. However, the key to seriously reducing oil dependence over the longer term is to create conditions conducive to the development of oil-saving technologies. This requires R&D investments across a diverse range of prospects (such as plug-in hybrids and hydrogen vehicles). But it also requires that markets anticipate the persistence of high oil prices; to this end, the government might also commit to a floor price by increasing the oil tax in the event that oil prices fall in the future.

IAN PARRY

Senior Fellow

Resources for the Future

Washington, DC


Revamping the military

In “The Pentagon’s Defense Review: Not Ready for Prime Time” (Issues, Summer 2006), Andrew F. Krepinevich Jr. provides a good summary and valid critique of the Pentagon’s recently completed Quadrennial Defense Review (QDR). The review sets out three main potential threats to U.S. security: radical Muslim jihadism, the rise of China, and nuclear proliferation. As Krepinevich points out, the QDR is most illuminating regarding U.S. strategy for dealing with the first of these threats. He may be asking too much of this document, however, in suggesting that it should provide more convincing responses to the second two. The QDR is a Department of Defense (DOD) document, necessarily largely limited to the role of that department in advancing U.S. security. About challenges without a viable military solution, it is necessarily reticent.

As Krepinevich notes, a rising China could, like Germany in the late 19th and early 20th centuries, develop in ways that threaten the international order. This is not the most likely development, but it is possible enough to justify a hedging strategy. The DOD can help create military capabilities and alliance relationships that hedge against an aggressive China. To the extent that the United States can influence China to move in a different direction, however, that task will fall largely to other agencies: State, Treasury and the U.S. Trade Representative, for instance. One would not look to the QDR for a definitive treatment of those efforts.

Similarly with nuclear proliferation. The QDR may say little about the DOD’s plans to stem such developments, because there is little that the DOD can do to that purpose. Disarming strikes are probably counterproductive. Brandishing the threat of such strikes has already proven so. The current efforts to stem nuclear proliferation largely rest with the State Department. If those efforts fail, as seems quite possible, the DOD will undoubtedly have to adjust military strategy, bolster alliance relationships, and forgo certain military options with respect to nuclear-armed adversaries, as Krepinevich suggests. Elaborating on the steps that the United States might have to take to accommodate itself to a nuclear-armed Iran or North Korea, however, would probably undercut whatever chance remains of forestalling such a development. Again, some degree of reticence may be appropriate at this stage.

Krepinevich is right to point out that although the QDR devotes most of its attention to the unconventional threats faced by the United States, the Pentagon continues to spend most of its money dealing with the conventional ones. This is not entirely illogical. The conventional threats, although less likely, could be far more destructive. Nevertheless, there is a gap between the DOD’s rhetoric and its budget. In a recent directive, the DOD assigned stabilization operations and counterinsurgency an importance commensurate with major combat. This is not yet fully reflected in its programmatics. The department is continuing to push toward a smaller, more agile, more highly equipped military designed to fight and win lightning conventional battles. Fifteen years of post–Cold War experience suggests that the current force is already more than adequate for that purpose, and that what is needed more urgently is a more numerous, less technologically dependent military capable of long-term commitment and persistent engagement. As Krepinevich suggests, the Air Force and the Navy should remain largely focused on the conventional battle, while the Army and the Marine Corps bolster their capacity to handle this second challenge.

JAMES DOBBINS

Director

RAND International Security and Defense Policy Center

Arlington, Virginia


Nuclear waste standoff

As chairman of the House subcommittee with jurisdiction over the federal government’s nuclear R&D programs, I read Richard K. Lester’s article on reprocessing and the Global Nuclear Energy Partnership (GNEP) (“New Nukes,” Issues, Summer 2006) with interest.

Although I commend him for his vigorous analysis of GNEP, Lester misses the timing, intent, and promise of the program. On count after count, he asserts that interim storage trumps GNEP. But GNEP was never intended to solve any short-term problems. Nor was it designed to revitalize the U.S. nuclear industry by jump-starting the construction of new nuclear power plants in the near term. Lastly, GNEP was not intended to supplant the need for a permanent repository or preclude the option of interim storage.

The purpose of GNEP is to supplement these tools for managing and disposing of our nuclear waste in the long term. It represents a comprehensive strategy that includes the research, development, and yes, the demonstration of a system of technologies and processes that make up an advanced fuel cycle. In developing this cycle, the goals are (1) to extract as much energy as possible from our nuclear fuel; (2) to minimize the volume, heat, and radioactivity of the waste that will ultimately require permanent disposal; and (3) to do so in a way that is economically viable and proliferation-resistant.

One of the technology components of this advanced fuel cycle, UREX+, is well researched and well understood. It makes little sense to delay a demonstration of this reprocessing technology until it is desperately needed. By then it may be too late. Instead, we should use the time we have now to demonstrate that the reprocessing technology works and to further refine it and improve its economic viability.

Other advanced fuel cycle technologies, including advanced recycling reactors, show great promise for minimizing future waste. But they still require significant R&D in the laboratory and through computer modeling and simulation. That is why I do not support any other advanced fuel cycle technology demonstrations until the Department of Energy (DOE) (1) conducts a comprehensive systems analysis of different possible fuel cycle configurations, (2) uses that analysis to develop a detailed R&D plan, and (3) submits the plan to peer review before it is finalized.

I do not believe that this approach to the development of an advanced fuel cycle threatens the revitalization of the nuclear industry in America. Economics today isn’t economics forever. Lester fails to even mention the cost of addressing global climate change. I would rather see DOE develop a solid understanding of the technologies and their costs so that policymakers and investors alike will know the conditions under which these technologies will succeed in the marketplace of the future.

For these reasons, the nuclear industry and others hoping for a nuclear renaissance should support GNEP as another option for ensuring the long-term viability of nuclear energy in the United States.

REP. JUDY BIGGERT

Chairman, Subcommittee on Energy

House Committee on Science


We discuss two possibilities not mentioned by Richard K. Lester in his excellent analysis of President Bush’s Global Nuclear Energy Partnership (GNEP) or in his conclusion that the government should give priority to accepting spent reactor fuel from the utilities for interim storage, as opposed to committing itself to costly, premature, and unproven fuel-reprocessing and fast-reactor technologies for burning and eliminating the minor actinides as well as plutonium.

Interim storage under government auspices, with the Department of Energy (DOE) beginning to take title to the spent fuel (as it’s been legally obligated to do since early 1998), is indeed now called for pending a redesign of the troubled geologic repository project at Yucca Mountain in Nevada and a licensing of that repository in a three-to-four-year proceeding before the Nuclear Regulatory Commission (NRC).

We think that by far the quickest and best means of accomplishing this is for the utility consortium Private Fuel Storage (PFS) to establish the storage facility, licensed by the NRC in February, on the reservation of the Skull Valley band of Goshute Indians in Utah, about 50 miles southwest of Salt Lake City. This storage project, initiated by PFS eight years ago under a contract promising handsome (but as yet unrevealed) benefits to this tiny band of Goshutes, has been bitterly opposed by the state of Utah throughout the tortuous licensing effort before the NRC. Up to 40,000 metric tons of spent fuel could be accommodated there in dry-cask storage, or nearly two-thirds of all that is intended for Yucca Mountain under present law.

But for a variety of political and financial reasons, this storage facility won’t come into being unless the U.S. government gets behind it. Utah and its congressional delegation continue to fight the project, through appeals to the Bureau of Land Management (which is still to approve it) and by discouraging the utilities that make up PFS from actively pressing on to the next stage.

Lester calls for moving spent fuel from reactor sites to “one or a few secure federal interim storage facilities” but doesn’t suggest where this might be. Candidate sites surely would have to be on federal reservations in the eastern half of the country, where most of the nuclear stations are located, and the most likely of all might be the Oak Ridge reservation in Tennessee and the Savannah River Site in South Carolina. But massive public opposition could ensue, and even if state acceptance were somehow finessed, getting the project approved by Congress and through licensing and court challenges might take at least a decade.

The PFS site in Utah, on the other hand, might be up and running within about three years if Congress adopts legislation making it the destination for spent fuel accepted by DOE from the utilities, consistent with the Nuclear Waste Policy Act. John Parkyn, board chairman and CEO of PFS, has urged the responsible House and Senate committees to respond accordingly, but so far neither Congress nor the Bush administration is moving to make this happen. Yet hundreds of millions of dollars in annual savings for the government are at stake.

Achieving a centralized solution to spent fuel storage should take the pressure off the Yucca Mountain project for early licensing and encourage a deliberate, careful redesign of the repository that is consistent with the site’s natural characteristics.

As we endeavored to explain in our piece “Proof of Safety at Yucca Mountain” in Science (October 21, 2005), the presence of oxygen and high humidity there in the “unsaturated zone” high above the water table makes the most recent DOE design exceedingly problematic. In this design, waste containers would have a corrosion-resistant nickel alloy outer shell and be placed beneath a titanium “drip shield.” But modeling performance assessment from exceedingly complex corrosion chemistry over hundreds of thousands of years is not a credible undertaking.

We advocate a capillary barrier concept, wherein first a layer of coarse gravel is placed around the waste containers and then a layer of fine sand is draped over the gravel. Any water dripping from the tunnel ceiling would be seized by strong capillary forces in the sand and moved slowly away. But proof of safety turns on the gravel layer, where capillary forces are absent. The containers ultimately fail because of corrosion from the water vapor and oxygen that are everywhere present. But the radioactive elements that emerge would form a thin coating on the surfaces of the gravel particles and diffuse so slowly within the gravel as to be effectively trapped.

Compared to corrosion chemistry, such diffusion is a far simpler physical process that lends itself to measurement in a laboratory mockup and to robust extrapolations over vast time periods. Proof of safety in an absolute sense is beyond reach, but the capillary barrier concept deserves careful, unhurried testing and analysis.

Having a place for spent fuel storage at Skull Valley should do much to foster the right conditions and attitudes for the trial of all promising new design concepts.

LUTHER J. CARTER

Independent Journalist

Washington, DC

THOMAS H. PIGFORD

Professor Emeritus

University of California, Berkeley


The authors of “Nuclear Waste and the Distant Future” (Per F. Peterson, William E. Kastenberg, and Michael Corradini, Issues, Summer 2006) believe that “a key regulatory decision for the future of nuclear power is the safety standard to be applied in the licensing of the radioactive waste depository at Yucca Mountain, Nevada.” Implied in their argument endorsing the Environmental Protection Agency’s (EPA’s) proposed unprecedentedly high limits on risk to the public from a Yucca Mountain repository is the fear that the application of conventional risk regulation and protection principles for nuclear facilities might result in Yucca Mountain not being licensable. The EPA seemed to share the same fear when faced with the Supreme Court ruling that its standard for Yucca Mountain must include compliance assessment at the time of projected maximum risk.

Peterson et al.’s demand for regulatory equity with hazardous waste regulation claims that “the longest compliance time required by the EPA is 10,000 years for deep-well injection of liquid hazardous wastes,” but neglects to mention that this injection regulation also specifically extends for as long as the waste remains hazardous.

The EPA’s proposed bifurcated regulation, touted by Peterson et al., leaves the EPA standard intact for the first 10,000 years, at a mean all-pathways individual dose of 15 millirems per year, with a separate groundwater protection standard that conforms with the EPA Safe Drinking Water Act standard of 4 millirems per year. For the period from 10,000 years to one million years, the protective groundwater standard is eliminated and the unprecedented median dose limit of 350 millirems per year is established. Because of the broad range of uncertainty in the failure rate caused by corrosion of the waste package in the Department of Energy’s performance model, the median dose limit of 350 millirems per year is equivalent to a mean dose of about 1,000 millirems per year. No regulatory body in the world has set such a high dose limit for the public’s exposure to anthropogenic radiation.

Peterson et al. are promoting a Yucca Mountain safety standard that defies three long-standing principles in the international community of nuclear waste regulation:

  1. The current generation should not impose risks on future generations that are greater than those that are acceptable to the current generation.
  2. The risks to the public from the entire nuclear fuel cycle should be apportioned among the various activities, with waste disposal representing a small portion of a total dose limit of 100 millirems per year. The National Research Council recommended a range for waste disposal of 2 to 20 millirems per year for a Yucca Mountain repository.
  3. Highly variable natural radiation background is not a reasonable basis for setting health-based regulatory limits on public exposure to anthropogenic radiation.

Changing standards and rules to ensure the construction of a scientifically deficient Yucca Mountain repository is irresponsible. Thwarting the international consensus on safety principles to meet this end is irrational.

ROBERT R. LOUX

Executive Director

Agency for Nuclear Projects

Office of the Governor

Carson City, Nevada


Per F. Peterson, William E. Kastenberg, and Michael Corradini discuss an important issue: What are appropriate standards for hazardous material disposition when the time scales for acceptable risk extend well beyond the realm of human institutional experience? How should the benefits and risks of alternatives be weighed in the regulatory process?

The authors note that 10,000-year standards to protect human health, although rare, are not new. However, the Environmental Protection Agency (EPA) is now required to set a radiation standard for nuclear spent fuel disposition at the proposed Yucca Mountain (YM) repository for a time period that is at least an order of magnitude longer. Its proposed “two-tiered” standard aroused suspicion in some quarters because it came after the Department of Energy’s (DOE’s) repository performance characterization showed long-term individual exposure levels at the compliance point well above the 10,000-year 15-millirem standard and appeared to “curve-fit” around DOE’s projection. Nevertheless, I concur with the authors that the EPA’s proposal has merit for initiating an objective discussion. Further, any methodology developed for these very long time scales should be robust enough to apply to other hazardous waste disposal challenges.

The article is less convincing when discussing “other risks,” most of which are irrelevant to a nuclear waste licensing procedure. It is also selective, mentioning coal ash but not uranium tails, for example, and not discussing the risks of nuclear proliferation. This is much more relevant than the other risks discussed (except climate change) in light of global discussions about reprocessing, which links waste management mitigation with the separation of weapons-usable plutonium (and possibly other actinides) from the waste stream.

Moving from risks to benefits, “carbon-free” nuclear power would clearly benefit from regulations that accommodate climate change risks. However, the authors argue that the benefits equation should also weigh YM sunk costs and the ongoing costs of the government’s failure to move spent fuel. This is a serious mistake: It could undermine the core principle that YM licensing should be based on rigorous scientific and technical evaluation to develop standards that protect public health. Also, the federal liability for not moving spent fuel hinges on federal ownership, regardless of where it is stored, not directly on YM operation.

Finally, the article’s focus may lead some readers to infer that the standard beyond 10,000 years is the principal obstacle to YM operation. In my view, meeting the 10,000-year standard will be at least as challenging. Compliance depends on near-perfect integrity of the engineered barriers for thousands of years, with little empirical evidence to support this conclusion.

The authors correctly note the scientific consensus about the effectiveness of deep geologic disposal, primarily because deep underground environments change very slowly. This general consensus does not, however, apply to any specific site, for which judgments must be based on extensive site-specific characterization, measurement, and modeling. Further, it is arguable whether the YM storage site qualifies as “deep underground”: Waste emplacement above the water table implies considerable dependence on the surface environment (for example, a wetter surface environment could develop on a scale far short of geological times). Objective evaluation of these and other scientific and technical issues should be the only guide in the licensing process.

ERNEST J. MONIZ

Cecil and Ida Green Professor of Physics and Engineering Systems

Massachusetts Institute of Technology

Cambridge, Massachusetts

Ernest J. Moniz is a former undersecretary of the U.S. Department of Energy.


Per F. Peterson, William E. Kastenberg, and Michael Corradini support the new Environmental Protection Agency standard for nuclear waste disposal at Yucca Mountain, Nevada. They argue strongly for developing a repository for high-level nuclear waste there, in part relying on the $8 billion already invested in Yucca characterization as a justification for going forward (a specious argument, to be sure).

A number of the authors’ arguments show a basic lack of understanding of geology and the geologic issues associated with nuclear waste disposal. The disposal of nuclear waste in a mined repository and the prediction of repository performance over time are heavily dependent on a solid understanding of the geology and evolution of the Earth system over time.

The authors correctly identify the strong consensus on geologic repositories as a solution to the disposal of nuclear waste. I agree with them wholeheartedly on this issue. But not all regions are created equal, and some sites are simply not suitable for geologic disposal. I would argue that the jury is still out on Yucca Mountain.

The authors argue that “Environments deep underground change extremely slowly with time … and therefore their past behavior can be studied and extrapolated into the long-term future.” If only it were so. The only difference between underground and surface environments is that the former are not subject to the processes of erosion, which affect the area on a short-term basis. An underground area is still subjected to tectonic processes such as volcanism and seismicity, both of which occur at Yucca Mountain. Simply locating a site underground does not provide a higher guarantee of predictability over time.

The authors’ faith in the results of the Department of Energy’s (DOE’s) performance assessment model of Yucca Mountain further indicates their lack of understanding of Earth systems. This model is highly uncertain and cannot be validated and verified (although DOE claims that it has done so). Thermodynamics teaches us that in an open system, which a geologic repository is, we cannot know all the input parameters, processes, and boundary conditions that might affect the system over time. To begin with, complete kinetic and thermodynamic data sets of various phenomena do not exist as inputs into the model. Therefore, the results of such models cannot be used as a source of real, reliable information. For instance, the model suggests that “peak risk occurs in about 60,000 years,” but that is a highly uncertain number.

The authors claim that “YM would have the capacity to store all the waste from the nuclear electricity generation needed to power the country for centuries.” What does this mean? What type of energy future do they imagine, and how much waste will be produced? There are, in fact, geologic limits on the capacity of a repository at Yucca, including the locations of faults and fractures, the extent of repository lithology, and the extent of the low water table.

It will be possible to solve the problem of nuclear waste using geologic repositories. Care must be taken to select an appropriate location, based on detailed geologic understanding and analysis, not simply because it is politically expedient to do so.

ALLISON MACFARLANE

George Mason University

Fairfax, Virginia


Per F. Peterson, William E. Kastenberg, and Michael Corradini present the case that certain Environmental Protection Agency (EPA)–proposed standards for Yucca Mountain are reasonable and suggest that a similar approach should be applied to the management of long-lived hazardous waste, the use of fossil fuels, and other human activities. I have no disagreement with the authors’ various technical assertions, but I am skeptical of the authors’ recommendation.

They correctly note that the proposed standards for Yucca Mountain are far more stringent than those governing other societal activities. Although the proposed EPA standards for Yucca Mountain would require a demonstration of compliance with a dose limit through the time of peak risk (several hundred thousand years in the future), the time horizon for the evaluation of other societal risks, such as those from disposal sites for chemical wastes, is far shorter, if such risks are evaluated at all. Although I do not dispute as a conceptual matter the authors’ argument that all risks should be evaluated and weighed on a common basis, the practical and political realities push in a different direction. Our society views nuclear risks as different from the risks arising from other activities. Although experts may disagree, the safety restrictions on nuclear activities no doubt will remain much more stringent than those placed on other activities posing equivalent or greater risk.

Even more important, I doubt that society could readily extend limits like those being proposed for Yucca Mountain to human activities more generally. As noted by other authors, the proposed EPA standards establish radiation limits for periods far into the future that are below background levels in many parts of the world. The authors correctly observe that although there may be a theoretical risk from doses at those levels, no detectable incremental risk has in fact been observed. The application of such a stringent standard to human activities more broadly would require widespread change.

To take the authors’ example, in order to limit the risks arising from increased CO2 concentrations in the atmosphere, such a standard might require shutting down much of our fossil-based electrical generation (coal fuels 54% of U.S. electrical generation) and imposing drastic restrictions on the use of petroleum-fueled automobiles or trucks. Similarly, the use of many natural resources (such as water and natural gas) more quickly than they can be replenished presents risks to future generations, if only from the denial of supply, that might exceed such a standard. The reduction of all possible future hazards to the low levels required by the Yucca Mountain standards would likely require radical alteration of current societal activity.

I agree that we should seek to undertake the long-term evaluation of risks and should strive for an appropriate balance of risks and benefits, but I doubt that the authors’ recommendation for the widespread limitation of risk to the level that would be established by the proposed Yucca Mountain standards can be accomplished without drastic reordering of societal priorities.

RICHARD A. MESERVE

President

Carnegie Institution of Washington

Washington, DC

Richard A. Meserve is a former chair of the Nuclear Regulatory Commission.


Combating pandemics

Henry I. Miller’s concerns about our ability to prepare for and respond to a potential flu pandemic encompass many salient points (“DEE-FENSE! DEE-FENSE!: Preparing for Pandemic Flu,” Issues, Summer 2006). The degraded U.S. vaccine production capacity and R&D are indeed critical issues. There is no doubt that vaccines are essential public health tools; however, other issues surrounding avian influenza may be more important to protecting our well-being.

We do not know whether highly pathogenic H5N1 avian influenza (H5N1 HPAI) virus will become a human pandemic, but we do know that it is an avian pandemic right now. Imagine that we faced a toxin in a city’s water. We would not focus on stockpiling medications for people who might become ill. We would instead begin removing the toxin and also ramp up treatment capacities, just in case. Unfortunately, the national pandemic plan focuses almost exclusively on stockpiling human pharmaceuticals, doing little to decrease actual risks. It focuses roughly 86% of a $7.1 billion budget on human vaccines and therapeutics, 4% on surveillance, and 9% on state and local health preparedness. Evidently, knowing where a disease is and having localities prepared for outbreaks are only one-eighth as important as a questionable plan for stockpiling unproven vaccines! Worse yet, funding for the U.S. Department of Agriculture’s poultry protection work is less than 0.1% of the national plan’s budget.

H5N1 HPAI has caused over 4,000 outbreaks in some 56 countries across Asia, Africa, and Europe. It has killed 141 people, while hundreds of millions of domestic birds have died or were destroyed in control attempts. Beyond the animal welfare and environmental concerns raised by this catastrophe, those birds represented the primary source of quality protein, and between 10 and 25% of family income, for the millions of people who keep backyard poultry in affected areas. Hence, avian influenza, to date, has caused more human morbidity and mortality by exterminating poultry, and thereby ruining livelihoods and increasing malnutrition, than by directly killing people.

Further, should today’s H5N1 HPAI reach North America, it’s unlikely to become established in modern biosecure poultry farms. Few Americans handle high-risk birds, and cooking destroys the virus, so human exposures will be low. However, it will destroy markets, devastating people’s lives and destroying rural economies. For example, when H5N1 HPAI hit Europe last fall, poultry markets plummeted and are still depressed by 10 to 70%. Additionally, a January 2006 Harvard survey found that 71% of respondents would stop or severely cut back on poultry purchases if H5N1 HPAI came to our shores, even if no citizens were infected. Hence, even without increased human infectivity (and so, with little need for human vaccines), this virus could severely damage our country.

Because “bird flu” is now a bird disease, isn’t keeping it out of our birds, before it gets to people, a better way to protect humans? Our current national plan could leave us with economic disaster and potential food insecurity. But we might eventually have plenty of unused vaccines in the fridge. A more balanced threat response plan is warranted.

BARRETT D. SLENNING

Animal Biosecurity Risk Management Group

North Carolina State University

Raleigh, North Carolina


Electric reliability

Starting with an unexpected premise—that the United States ranks toward the bottom among developed nations in terms of the reliability of its electricity service—the three leaders of Carnegie Mellon’s Electricity Industry Center lay out a compelling case for looking to the experience of other industries for ways to improve in the United States (Jay Apt, Lester B. Lave, and M. Granger Morgan, “Power Play: A More Reliable U.S. Electric System,” Issues, Summer 2006).

They observe that although the new Electricity Reliability Organization (ERO), authorized in the Energy Policy Act of 2005, will be an important step toward boosting reliability, it is unlikely to do what is needed unless its federal regulators encourage it to do more than merely lock the status quo in place.

Specifically, the authors instruct us by shining a light on the experience of the U.S. nuclear industry in the post–Three Mile Island world in imposing on itself, through the establishment of the Institute of Nuclear Power Operations (INPO), a rigorous and metrics-driven commitment to promote excellence in both the safety and reliability of nuclear power plants. At the core of the commitment of the senior executives of power companies that make up INPO’s board is a recognition that “all nuclear utilities are affected by the action of any one utility.”

What’s striking about the authors’ message is that it challenges the industry to think and learn outside the box. For one thing, for the past decade, the conventional wisdom has held that the most important thing needed to improve electric system reliability was the passage by Congress of mandatory reliability standards, along with the establishment of an ERO. The authors say that unless the industry does a lot more than encode and enforce today’s standards, the system will continue to underperform. The authors warn that the new ERO, likely to spring from the well-established industry organization the North American Electric Reliability Council, will rely on industry consensus, rather than excellence, as the basis for setting reliability standards for the industry.

INPO, for example, has found that excellence in the reliable and safe operation of nuclear plants can be achieved best by combining performance objectives measured by metrics and requirements adopted by government regulators. As the authors say, “Industrywide performance objectives are difficult to meet every year, but provide goals and measurable outcomes; the [Nuclear Regulatory Commission] regulations provide a minimum floor for operations.”

The most instructive observations in the article are those that call for federal regulators to require the ERO to periodically review all standards and to modify its guidelines for investigating reliability events so that they stress human factors and corporate support of operational personnel; to impose on the ERO and the industry requirements for creating, collecting, and publishing more transparent reliability metrics; and to institute a “best-practices” organization outside of the ERO’s standards and compliance organization. Most important is the authors’ general warning that “the ERO will fail to improve reliability significantly unless generators, transmission and distribution owners, and equipment makers are convinced that they face large penalties for substandard performance.”

In today’s increasingly service-based U.S. economy, we cannot afford to have second-best electric system reliability. Now that we’ve enacted the legal underpinnings for mandating improved reliability, we need to push for excellence. Apt, Lave, and Morgan have given us a useful set of instructions for getting there.

SUSAN F. TIERNEY

Managing Principal

Analysis Group

Boston, Massachusetts

Susan F. Tierney is a former Assistant Secretary for Policy at the U.S. Department of Energy.


The educated engineer

As the executive director of the Accreditation Board for Engineering and Technology (ABET), the organization responsible for ensuring the quality of postsecondary engineering programs, I am troubled by the premise of “Let Engineers Go to College,” by C. Judson King (Issues, Summer 2006). King repeatedly references the “narrowness” of undergraduate engineering education and uses this purported narrowness as support for his argument that the master’s rather than the bachelor’s be the first professional degree in engineering. Although I am not speaking for what should be the first professional degree, I am speaking against the premise as outlined in this article. I am afraid there is a significant disconnect between today’s actual undergraduate curriculum and King’s perception of it.

Those familiar with the evolution of ABET’s accreditation criteria will recognize King’s premise as the familiar—and then-warranted—war call of the 1980s and early 1990s. Today’s curriculum, however, includes the very elements that King describes as lacking. In fact, I will respond to King’s statements by quoting directly from ABET’s Criteria for Accrediting Engineering Programs.

King writes that engineers “must now look outward and interact directly with non-engineers”; ABET’s criteria for students include “an ability to function on multi-disciplinary teams.”

King writes that engineers “must understand and deal with other countries and other cultures…understand society and the human condition…[and have] exposure to a variety of outlooks and ways of thinking.” ABET’s criteria call for “the broad education necessary to understand the impact of engineering solutions in a global, economic, environmental, and societal context…[and] a knowledge of contemporary issues.”

King calls for “thinking and writing skills in a variety of contexts”; ABET calls for “an ability to design a system, component, or process to meet desired needs within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability…an ability to communicate effectively.”

King sees a need for “the wherewithal for flexibility and movement”; ABET recommends “a recognition of the need for, and an ability to engage in life-long learning.”

ABET-accredited engineering programs are expected to have significant involvement with their constituents, especially when developing and evaluating formal learning outcomes and professional objectives for their graduates. Those constituents can range from parents and alumni to graduate schools and employers. In short, if employers of a program’s graduates want them educated to have the “flexibility to move into non-engineering areas or management,” then the program must take that into consideration when developing its curriculum.

King states that “The environment for engineers and the nature of engineering careers in the United States are changing in fundamental ways.” Engineering education has also changed significantly in recent years in response to the needs of society and employers and the wishes of the students themselves. What are these changes? ABET commissioned Penn State’s Center for the Study of Higher Education to determine whether the engineering graduates of 2004 are better prepared than those of a decade ago. The answer was a resounding “yes”!

Whether the undergraduate degree will remain the first professional degree is a matter for ABET’s member societies to decide. Regardless of the outcome of that decision, however, I would argue that today’s undergraduate engineering curriculum is richer and more diverse than perhaps it’s ever been. It is also more flexible and allows educators and administrators to be innovative in achieving their unique programmatic and institutional goals.

GEORGE D. PETERSON

Executive Director

ABET

Baltimore, Maryland


In his excellent article, C. Judson King raises a number of issues that are extremely pertinent to the future of engineering education and practice. The notion of a broadly trained engineer, or a “renaissance engineer” with substantial international experience, is more relevant than ever in today’s world for two practical reasons (beyond the already compelling rationale that exposure to a variety of subjects is beneficial for the students’ own personal development):

(1) Meeting the enormous challenges, especially those pertaining to human health and the environment, facing peoples and societies worldwide will require the attention and skills of engineers. They, in turn, must have a good understanding of these problems and of the interrelationships between technological change and societal development; and students with undergraduate degrees in engineering must make their way into a number of professions, ranging from finance to medicine to law to management consulting to public service, where a broad perspective is very valuable.

But at the same time, engineering subjects are sufficiently complicated that it is impossible to gain mastery of an area in four years. Thus a Master’s in Engineering (M.Eng.) makes eminent sense as a way of giving students in-depth training in specific areas, including engineering approaches such as design. Although many schools already have separate M.Eng. or five-year combined bachelor’s and master’s programs, there needs to be a broader and more formal recognition (and accreditation) of an M.Eng. track, parallel to other professional disciplines.

We also believe that the engineering education profession has been generally remiss in ensuring that students get broad training at an undergraduate level, although many schools have made efforts in this regard. At Harvard, we offer a Bachelor of Arts (A.B.) track in addition to an Accreditation Board for Engineering and Technology–accredited Bachelor of Science (S.B.) in Engineering Sciences. The former option provides a foundation in engineering, but with a much broader range of choices outside of engineering, and the latter gives students a strong base in engineering fundamentals, with an opportunity to focus on specific areas. Such degree tracks, we believe, provide a reasonable combination of core competence and flexibility, but often there is tension between these two goals.

Furthermore, engineering students throughout the country rarely receive sufficient exposure to the interactions between technology and society: the ways in which technology can, and must, make a positive societal impact (in meeting, for example, the enormous challenge of climate change), and the ways in which societal concerns shape the development of technology (genetically modified crops being a prime example). We are in the process of developing courses that will further broaden our engineering curriculum and ultimately lead to a major in Technology and Society. Many other schools are also moving in the same direction. Such new options will give students some grounding in these kinds of issues in addition to traditional engineering skills.

(2) We also very much agree with King that engineering must become an integral part of the general education curriculum in universities. In today’s world, it is as important for students to learn about engineering and technology as it is to study history, literature, and the fine arts. Once again, we are taking steps at Harvard in this direction by offering courses that seek to transmit the philosophy, excitement, and poetry of engineering to a broader student population.

In the end, we believe that we as educators will be more successful, and engineering practice and research will be richer, if we explore such options that recognize not only the evolving nature of engineering and its place in the world but also the varied and changing needs of students. Designing products to meet specific needs under multiple constraints is the hallmark of a good engineer. Should we not be doing the same in engineering education?

VENKATESH NARAYANAMURTI

HOWARD STONE

MARIE DAHLEH

AMBUJ SAGAR

Division of Engineering and Applied Sciences

Harvard University

Cambridge, Massachusetts


Congratulations to C. Judson King for writing such a provocative article! The points are well founded and in my opinion correct.

The American Society of Civil Engineers has been working for the past 10 years to raise the educational expectations for future engineers. It has been slow going, but we are making progress.

Engineers are marching to the tune of irrelevance in the 21st century. We argue and debate details that miss the larger context and picture. We seem to have lost sight of thinking about and serving society at large. We seem destined to be technicians, happy to allow the corporate world to tell us what to do and when to do it. We are not participating in substantive issues affecting society, such as living in a world that has finite resources. When are we going to start letting our voices be heard regarding energy and the environment? We are doing irreversible damage to the globe while at the same time depleting our natural resources. Why can’t we be active participants in shaping tomorrow, versus reactive agents doing what we are told?

We are at a critical juncture in engineering education in terms of content, scope, and length. The question is, will we settle for what we have or reach for what we can and should be? Time will tell.

JEFFREY S. RUSSELL

Professor and Chair

Department of Civil and Environmental Engineering

University of Wisconsin–Madison


C. Judson King lays out the case for liberalization of the engineering curriculum in order to produce future engineers who are able to address the challenges of the 21st century. I agree with him. But I also think that engineering education should reenvision itself as a “service” discipline. King implies this possibility when he notes, “[t]he bachelor’s curriculum should provide enough variety that a graduate would also be well prepared for careers other than engineering.”

In my view, we need to go further. We need engineering classes expressly designed to be taken by non-majors. Such an approach serves two purposes: First, it increases awareness of engineering within the general population. Such awareness would be immensely valuable in a citizenry facing, as King notes, pervasive technologies and technological choices in virtually every aspect of their lives. Second, such an approach might serve to increase interest in engineering as a career field. For example, under the leadership of then-dean Ioannis Miaoulis, the Tufts University College of Engineering saw a net increase in the number of engineering majors after it began offering creative introductory courses. Miaoulis himself taught thermodynamics via a cooking class.

There are additional collateral benefits to be achieved. Once we have college-level engineering courses designed for non-majors, it should be easier to design pre-college “engineering” courses beyond the relatively few that currently exist. This will further increase awareness about engineers and what they do; provide practical frameworks within which to teach science, mathematics, and technology; and possibly increase the number of students interested in pursuing engineering as a career field.

Transitioning to such a regime will not be easy. There are significant pressures that militate against it, not least of which will be considerations of faculty workload. I assert that it is in the enlightened self-interest of the engineering profession to surmount these pressures. After all, the creed of the professional engineer is to dedicate their professional knowledge and skill to the advancement and betterment of human welfare. Increasing awareness of the engineering profession is an underutilized means of achieving that end.

NORMAN L. FORTENBERRY

Director

Center for the Advancement of Scholarship on Engineering Education

National Academy of Engineering

Washington, DC